Tuesday, December 17, 2024

What should a NERC CIP vulnerability management requirement look like?


In October, I wrote this post titled “NERC CIP needs vulnerability management, not patch management”. That post made the argument (which I’ve made before) that compliance with Requirement CIP-007 R2 is hugely expensive relative to the security benefit it provides. My most important evidence was the fact that CIP-007 R2 requires the NERC entity, every 35 days, to identify every security patch issued in the previous 35 days for each version of each software or firmware product installed within their ESP. It doesn’t matter whether the vulnerability or vulnerabilities mitigated by the patch pose high, medium or zero risk; the patch must be evaluated and applied within 35 days. If it can’t be applied in that time, the Responsible Entity must develop and implement a mitigation plan for the vulnerabilities addressed by the patch.

When the original patch management requirement CIP-007-1 R3 appeared with NERC CIP version 1 in 2008, the requirement wasn’t spelled out in such exquisite detail, but it was very similar: The NERC entity needed to “establish and document a security patch management program for tracking, evaluating, testing, and installing applicable cyber security software patches for all Cyber Assets within the Electronic Security Perimeter(s).” As is the case with the current version, there wasn’t a word about the degree of risk posed by the vulnerability addressed in the patch.

However, in 2008 this wasn’t as big a problem as it is today, since far fewer vulnerabilities were identified at that time; therefore, far fewer patches were issued. The first vulnerabilities identified with a CVE were reported in 1999; a little more than 300 were identified in that year. Even 15 years later in 2014, only 7,928 CVEs had been reported in total. How many CVEs have been reported as of last week? 274,095, of which 38,000 have been reported so far this year (a huge jump from last year, by the way). In other words, almost five times as many new CVEs have been reported so far in 2024 alone as were reported in total during the period 1999 – 2014.

In fact, any company that tries today to apply every patch they receive for every one of the software products they use is sure to fail; yet that is exactly what CIP-007 R2 requires. How do organizations not subject to CIP compliance keep from being overwhelmed by patches? They must prioritize them. Of course, it’s best to prioritize them by the degree of risk posed by the vulnerability or vulnerabilities that are mitigated by each patch. What’s the best way to do that?

A lot of organizations prioritize vulnerabilities based on CVSS score; in fact, that used to be considered the best way to do it. However, CVSS doesn’t measure risk; it measures severity of impact of the vulnerability – and even that is very hard to measure in a single score, since the severity of impact will vary greatly by the importance of the affected system, how well the network is protected, etc.

There is now a growing consensus in the software security community that the best measure of the risk posed by a vulnerability is whether it is currently being exploited by attackers. There’s a big reason for using that measure: Only about six percent of CVEs are ever exploited in the wild.[i]

Of course, this means that, in retrospect, an organization that tries to apply every patch is wasting at least 80-90% of the time and effort it spends on patching. While it’s not possible to reduce this wasted effort to zero, it’s certainly possible to reduce it substantially by prioritizing patches for vulnerabilities that appear in CISA’s Known Exploited Vulnerabilities (KEV) catalog or that have EPSS scores[ii] from FIRST close to 1.0, as well as by using other measures, such as the impact if the affected system were compromised.
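To make this concrete, here is a minimal sketch of exploitation-based prioritization. The vulnerability records, field names and weighting scheme below are hypothetical assumptions for illustration; in practice, KEV membership comes from CISA’s published JSON catalog and EPSS scores from FIRST’s daily-updated API.

```python
# Minimal sketch of exploitation-based patch prioritization.
# "in_kev" and "epss" are hypothetical field names; real data would come
# from CISA's KEV catalog (a JSON file) and FIRST's EPSS API.

def patch_priority(vuln: dict) -> float:
    """Higher value = patch sooner. KEV membership dominates, since it
    means the vulnerability is known to be exploited; EPSS (a 0.0-1.0
    probability of exploitation) orders everything else."""
    kev_weight = 1.0 if vuln.get("in_kev") else 0.0
    return kev_weight + vuln.get("epss", 0.0)

vulns = [
    {"cve": "CVE-2024-11111", "in_kev": False, "epss": 0.02},
    {"cve": "CVE-2024-22222", "in_kev": True,  "epss": 0.85},
    {"cve": "CVE-2024-33333", "in_kev": False, "epss": 0.91},
]

# The KEV-listed CVE sorts first, even though another CVE has a higher EPSS.
prioritized = sorted(vulns, key=patch_priority, reverse=True)
```

The design choice here is deliberate: since only a small fraction of CVEs are ever exploited, confirmed exploitation (KEV) outranks predicted exploitation (EPSS) in the ordering.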

However, there’s one type of organization that’s unable to prioritize patch applications: that’s a NERC entity with high or medium impact systems.[iii] This is the most important reason why the CIP-007 R2 patch management requirement needs to be replaced with a risk-based vulnerability management requirement. What would the new CIP-007 R2 look like? Very roughly, I think it should require the Responsible Entity to develop and implement a vulnerability management plan to:

1.      Identify new vulnerabilities that apply to software and firmware products installed on medium or high impact BES Cyber Systems, EACMS, PACS or PCAs that they operate.

2.      Assign a risk score to each vulnerability, based on criteria identified in the plan.

3.      Identify a score that is the “do not fix” threshold. The NERC entity will normally not apply a patch for a vulnerability whose score is below that threshold, although there will always be special cases in which the vulnerability needs to be patched regardless of its score.

4.      Regularly investigate security patches released by the vendors of those products.

5.      Prioritize application of those patches according to criteria listed in the plan; apply patches in order of priority, if possible.[iv]

6.      If a patch cannot be applied when it normally would be, determine what alternate mitigation(s) for the vulnerability addressed in the patch might be applied.
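Steps 2, 3 and 5 of the plan above can be sketched in a few lines of code. The scoring rubric, the field names and the “do not fix” threshold below are purely illustrative assumptions; an actual plan would define its own criteria.

```python
# Hedged sketch of steps 2, 3 and 5: score each vulnerability, apply a
# "do not fix" threshold, and queue the rest in priority order. The rubric
# and threshold are illustrative, not anything prescribed by CIP.

DO_NOT_FIX_THRESHOLD = 2  # plan-defined; scores use a 1-5 scale here

def risk_score(vuln: dict) -> int:
    """Toy scoring rubric based on exploitation evidence and asset impact."""
    score = 1
    if vuln.get("in_kev"):
        score += 2  # known exploited
    if vuln.get("asset_impact") == "high":
        score += 2  # affects a high-impact system
    return score    # 1 (lowest) .. 5 (highest)

def triage(vulns: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split into (patch queue, ordered by score desc) and (deferred)."""
    actionable = [v for v in vulns if risk_score(v) > DO_NOT_FIX_THRESHOLD]
    deferred = [v for v in vulns if risk_score(v) <= DO_NOT_FIX_THRESHOLD]
    actionable.sort(key=risk_score, reverse=True)
    return actionable, deferred
```

As step 3 notes, there will always be special cases; in a real plan, the threshold check would be a default, not an absolute rule.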

How will compliance with this new requirement be audited? The entity will need to provide three types of evidence:

A.     The plan itself, including a narrative explaining how the vulnerability management plan makes much more effective use of resources and mitigates more risk than would a plan to apply every applicable patch, regardless of the degree of risk mitigated.

B.     Evidence that the plan was implemented and that it significantly lowered the entity’s level of software cyber risk. This could include information such as a comparison of a group of vulnerabilities that were patched vs. a group that were not patched, along with the average risk scores of the two groups; of course, the comparison should show that the scores of the patched vulnerabilities were much higher than those of the unpatched vulnerabilities.

C.      Along with the quantitative evidence, qualitative evidence that the plan was implemented and was effective in lowering the entity’s level of software risk. For example, there could be a narrative like, “An example of the success of our plan is CVE-2024-12345. This vulnerability was on CISA’s KEV list when the patch was released. We assigned the vulnerability our highest risk score of 5 and applied the patch the day it was released. After we applied the patch, three major cyber incidents were reported elsewhere in which CVE-2024-12345 was the primary attack vector.”
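The quantitative comparison described in item B might look like this in miniature. All scores are hypothetical sample data on an assumed 1-to-5 scale.

```python
# Sketch of the quantitative evidence in item B: compare the mean risk
# score of the patched group vs. the unpatched group. All scores are
# hypothetical sample data on an assumed 1-to-5 scale.
from statistics import mean

patched_scores   = [5, 5, 4, 4, 3]   # vulnerabilities that were patched
unpatched_scores = [1, 1, 2, 1, 2]   # vulnerabilities left unpatched

patched_avg = mean(patched_scores)
unpatched_avg = mean(unpatched_scores)

# An effective risk-based plan should show a clear gap between the groups,
# i.e. patched_avg well above unpatched_avg.
```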

Note that, unlike the current CIP-007 R2, compliance with this vulnerability management requirement will not mandate that the NERC entity be able to provide evidence of every instance of compliance – e.g., evidence that the entity (or their tool) checked with every software or firmware vendor in scope every 35 days to determine whether any new security patches were available.

Even more importantly, the NERC entity will not need to provide evidence on an individual device basis, as is usually required for CIP-007 R2 compliance. Instead of having to identify individual BES Cyber Assets that were patched, the entity will identify – for example – vulnerabilities on the CISA KEV list that were patched, without having to point to individual devices.

This is especially important for one reason: Cloud service providers will never be able to provide compliance evidence on a device basis, since they’re not equipped to do that. This means that the current CIP-007 R2 (and CIP-010 R1, which also requires evidence on a device basis) will continue to be an obstacle to cloud use by NERC entities with high and medium impact BES Cyber Systems until CIP-007 R2 is changed to a risk-based (or “objectives-based”, to use NERC’s preferred term) requirement. The risk that patch management mitigates is the risk posed by unpatched vulnerabilities, so the only way that CIP-007 R2 can be made “cloud friendly” is to replace it with a vulnerability management requirement.

Of course, we can assume that all of CIP will ultimately be rewritten (or replaced) as part of the huge effort required to make use of the cloud fully “legal” for NERC entities. However, as I pointed out recently, it will likely be 6-7 years before that is accomplished. Does the power industry have no choice but to wait that long?

What I just wrote about CIP-007 R2 points to a possible interim solution, which may “only” require rewriting two requirements: CIP-007 R2 and CIP-010 R1 (configuration management). I’ve just described what needs to be done with the former requirement. As for the latter, I think CIP-010 R1 can remain a configuration management requirement, but it can be reworded so that it is no longer device-based and so that it focuses on the risk of inadvertent misconfiguration.

I think both of these requirements could be rewritten and gain approval from NERC and FERC in 3 to 3 ½ years, which is half the time required for the full “CIP in the cloud” revisions that the Project 2023-09 Risk Management for Third-Party Cloud Services Standards Drafting Team is now working on. The full set of revisions will ultimately be needed, but I believe most cloud use by NERC entities could be made “legal” just by rewriting these two requirements.

However, doing this will require a different SDT – one that is just focused on these two requirements. Besides accelerating the development of the replacements for CIP-007 R2 and CIP-010 R1, constituting a second SDT will allow the current SDT to finish their work at least a year or two earlier than 2031. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available! For context, see this post.


[i] I owe this observation to Chris Hughes, who writes the excellent blog on software security called Resilient Cyber. I recommend you subscribe to it, if software security is a concern of yours. The post where I found this figure is here.

[ii] EPSS scores can change daily, so it is important to update them daily, if you’re basing your patch prioritization in part on EPSS.

[iii] Of course, NERC auditors are human beings; they’re unlikely to issue a Notice of Potential Violation (NPV) if an entity convinces them they don’t have the resources required to apply or mitigate every patch that’s been released for every piece of software found in their ESP. But it’s a big waste of time for a NERC entity to have to prove it. The fact is that today, an organization that does have those resources is the rare exception that proves the rule.

[iv] There may be reasons why some patches should not be applied strictly according to their assigned priority. For example, if the target systems are located in different physical locations, it may save a lot of time to apply all patches due in one location at the same time.

Sunday, December 15, 2024

News from the folks at CVE.org Part II

My previous post described the set of videos that CVE.org posted recently from their annual CNA (CVE Numbering Authority) Workshop earlier this month. I said I found a number of statements very interesting, but first I wanted to introduce readers to how CVEs are identified and used and who the players are, since most readers probably only have vague ideas on these topics.

Since I wrote that introduction on Friday, I’m now ready to tell you about at least some of the nuggets of wisdom that I found (you may find it helpful to refer to the previous post if any of the terms or acronyms below aren’t familiar to you):

CVE.org and the NVD

Since the National Vulnerability Database (NVD) is where most of us learn about CVEs, we often think it is the source of CVE information. That isn’t true. CVE.org, which used to be called just “MITRE”, is the source of information on CVEs. This information is found in “CVE Records” that are available in the CVE.org database.

The CVE records are passed on from CVE.org to the NVD and are incorporated into their database, but the NVD’s CVE records don’t include all the information that is found in the same records in the CVE.org database. One of the speakers in the workshop emphasized that just looking for information about a CVE in the NVD will cause you to miss the additional information that’s available in CVE.org. If you are trying to research a CVE in depth, you need to look in both databases.

The word from Microsoft  

Lisa Olsen of Microsoft, a longtime participant in the CVE ecosystem, made a number of interesting points:

·        Microsoft usually reports 80-100 new CVEs on every Patch Tuesday.

·        For some of their products, they create CPE names and include them in the new CVE records.

·        When they create a CPE for a version of Windows, they include the build number. This is important, since builds are really the “versions” for Windows. Windows 10, 11, etc. are more like separate products, since they are constantly updated during the multiple years they’re usually available. To identify the specific codebase in which a Windows vulnerability is found, you have to know the build number.

·        Lisa pointed out that in some CVE reports, the “affected” version of the product refers to the fixed version of the product (i.e., the version to which the patch has been applied), while in other reports (usually from different CNAs), the “affected” version is the unpatched version. This is a huge difference, of course, since it means some organizations may be applying patches to versions of a product to which the patch has already been applied. Lisa said the new CVE schema will allow the CNA (which is in many cases the developer of the affected product) to indicate which case applies. However, it seems to me there should be a rule: The “affected” product is always the one in which the vulnerability has not been patched.

·        Microsoft also is going to start publishing CVE records that have a new category of CVE: one that doesn’t require customer action to resolve. I believe she meant these are vulnerabilities that have already been resolved by a means other than patching – e.g., a configuration change in the software itself; yet Microsoft believes it is still important for the user to know that the vulnerability is present in their software, even though it is not currently exploitable.
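Lisa’s point about build numbers can be illustrated with a CPE 2.3 formatted string. The CPE name below is hypothetical (it is not an official Microsoft CPE), and the parser is a simplification that ignores CPE’s escaping rules.

```python
# Sketch: pulling a Windows build number out of a CPE 2.3 name.
# The CPE string below is illustrative, not an official Microsoft CPE,
# and the naive split ignores CPE's colon-escaping rules.

def parse_cpe23(cpe: str) -> dict:
    """Split a CPE 2.3 formatted string into its named attributes."""
    fields = ["cpe", "cpe_version", "part", "vendor", "product", "version",
              "update", "edition", "language", "sw_edition", "target_sw",
              "target_hw", "other"]
    return dict(zip(fields, cpe.split(":")))

cpe = "cpe:2.3:o:microsoft:windows_10_22h2:10.0.19045.5247:*:*:*:*:*:x64:*"
attrs = parse_cpe23(cpe)
# In Windows versions of the form 10.0.BUILD.REVISION, the third component
# identifies the build, i.e. the specific codebase the vulnerability is in.
build = attrs["version"].split(".")[2]
```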

Art Manion

One of the most respected figures in the CVE “universe” is Art Manion, formerly vulnerability analysis technical manager at the CERT Coordination Center at Carnegie-Mellon and someone who continues to be very involved with CVEs as a CISA contractor. He made a number of interesting points, including:

·        The key to CVE.org’s mission is the CNAs, since they identify and report all CVEs. CVE.org greatly prefers the “federated” approach, in which the CNAs are given a lot of freedom in how they report CVEs (in fact, he noted that only three of the many fields in the CVE specification are required; the others are all optional).

·        Another way of saying the above is that using the word “must” with the CNAs won’t usually get you anywhere. The CNAs are volunteers. CVE.org is very careful about not alienating them by threatening to reject CVE reports if certain fields aren’t present, etc. There are a lot of complaints about CVE reports not including this or that field, but these concerns should always be approached with the attitude of, “Do you want a half-full glass or no glass at all?”

·        A number of industries are reporting very low numbers of vulnerabilities, including healthcare and autos. This is why I usually say the vulnerabilities you need to worry about most are the ones that have never been reported to CVE.org (as well as to other vulnerability reporting entities like ICS-CERT and GitHub Security Advisories). Of course, when these are suddenly exploited, they’re known as “zero day vulnerabilities”. However, these are often vulnerabilities that have been known for a while – but just to the developer, which never reported them.

·        It would be nice to have a VEX capability in CVE. That is, instead of saying, “Product X is affected by CVE-2024-12345”, the CVE report would effectively say, “Even though Product X contains component ABC and ABC is affected by CVE-2024-12345, Product X is not affected by that vulnerability.”

·        The new version of the PCI-DSS standards for protection of payment card data by the retail industry requires that vendors to the industry report more vulnerabilities. In other words, “We don’t believe the fact that you haven’t reported any vulnerabilities for your product really means you don’t have any. It just means you have decided to endanger your customers by not reporting them. This has to stop.”

·        It would be good to include in the CVE record whether CISA has placed that vulnerability on its Known Exploited Vulnerabilities (KEV) list. Since almost every organization has many more vulnerabilities to patch than it has hours in the day to patch them, prioritization of vulnerabilities is a key issue. Prioritizing vulnerabilities that are on the KEV list is a great strategy. I agree with Art that this information should be included in the CVE record. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book, Introduction to SBOM and VEX, is available here!

Saturday, December 14, 2024

News from the folks at CVE.org


This week, Andrey Lukashenkov of Vulners posted on LinkedIn this link to 15 videos from CVE.org’s annual CNA workshop, which was held at the beginning of December. I decided there might be one or two videos I would like to watch, but I ended up watching about half of them, plus reading the meeting notes and the session summaries.

I won’t try to summarize the sessions that I watched. If any of the session topics interest you, I recommend you watch the video, since the meeting notes and session summaries just scratch the surface of what was said. However, I will repeat some interesting statements I heard, and I’ll explain why I found them so interesting – in my next post.

However, since I realize that many readers don’t know much about CNAs, CVEs, CPEs, etc., I’ll provide a relatively short introduction to this world-unto-itself now:

1.      You probably know that a CVE number (e.g., CVE-2024-9680) designates a vulnerability. But where do CVEs come from? (By the way, if you want to know what CVE stands for, it’s “common vulnerabilities and exposures”. But I’ve been told that today CVE is just NBI: nothing but initials.)

2.      Many people think of a CVE as something that has always been present in a software product. It is there because one of the original developers made a mistake. It is only being reported now because nobody discovered it until recently.

3.      In fact, new CVEs are discovered all the time, and they’re discovered in code that was previously thought to be completely benign. In other words, hundreds of experienced eyes previously looked at the code in question and saw nothing wrong with it. But one day, some intrepid soul realized these seemingly innocuous lines of code were in fact a vulnerability that needed to be reported to the world.

4.      Here’s a short quiz: Who do you think most vulnerabilities are reported by: a) an independent researcher, or b) the developer of the software?

5.      If you said “b”, you’re correct! Most vulnerabilities are identified and reported by the developer of the software; moreover, a small number of the largest developers report the great majority of new vulnerabilities. Far from hiding vulnerabilities, the developers are taking the lead in rooting them out and exposing them to the light of day. Of course, this isn’t to say there aren’t a lot of developers who try to hide vulnerabilities. But I suspect that any developer who does that today is setting themselves up for failure, not success. Most software users understand that vulnerabilities need to be found and fixed, not swept under the rug.

6.      CVE.org is sponsored by the US Department of Homeland Security (DHS). CISA is also part of DHS; in fact, CVE.org’s budget is funded by CISA.

7.      CVE.org used to be called simply “MITRE”, since from its inception CVE has been a project contracted to the MITRE Corporation (in fact, MITRE created the CVE system in 1999). Today, an independent board of government and private industry representatives governs the CVE program.

8.      New CVEs are reported by public and private (mostly private) organizations that are called CVE Numbering Authorities or CNAs; today, there are over 400 CNAs. The majority of CVEs are reported by CNAs that are large software developers, including Microsoft, Oracle, Red Hat, HPE, Schneider Electric, etc. – although many smaller developers, nonprofits and governmental organizations are CNAs as well.

9.      When a person or organization that is not a CNA wishes to report a vulnerability, they can request that a CNA prepare the report for them. They need to first go to a CNA that has the organization within its scope – e.g., a country, an industry, etc. For example, any project on GitHub can report a vulnerability to GitHub, which is a CNA. If an organization can’t find a CNA that way, they can go to one of the “CNAs of Last Resort”, including CISA (for ICS) and MITRE (for everything else).

10.   The CNA creates a CVE ID (aka “CVE number”) for the vulnerability and submits the report to CVE.org, where it becomes a CVE record and is included in the CVE.org database. At this point, the CVE record does not usually contain a “CPE name”. While CPE stands for “common platform enumeration”, that phrase means very little today. For the moment, it suffices to say that CPE is the oldest (over two decades) machine-readable software identifier. Because one software product can have different common names in many different contexts, automated vulnerability management is only possible if each product has a single identifier.

11.   Now the CVE record goes to the National Vulnerability Database (NVD), which is operated by the National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce. It also goes to any other database that accommodates CVE and CPE.

12.   Until February of 2024, at this point an NVD contractor quickly assigned a CVSS score, identified one or more CWEs (Common Weakness Enumeration entries), and created a CPE name for any product named in the text of the CVE record. However, starting on February 12, that process broke down. As of December 2024, the NVD has added these items to fewer than half of the CVE records that came to it during the year (about 16,000 of 37,000 total CVE records).

While there is a lot to be said about missing CVSS scores and CWEs, the primary concern of most people involved in software vulnerability management is the lack of CPE names on CVE records. This is because, even though CVE records normally include a textual description of the affected products, an automated search for vulnerabilities applicable to a particular product (the normal use case for software vulnerability databases like the NVD) can only identify the product if it has a machine-readable identifier.

In other words, as of today, an automated search of the NVD for a software product will on average miss more than half of the CVEs that have been reported for the product in 2024. Of course, this is not a good situation. I’ll discuss this further in the next post. 
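A toy example shows why a CVE record without a CPE name is invisible to an automated search, even when its text plainly describes the affected product. The records and CPE name below are hypothetical.

```python
# Sketch: why a CVE record without a CPE name is invisible to an
# automated, identifier-based search. Records and CPEs are hypothetical.

def search_by_cpe(records: list[dict], cpe: str) -> list[str]:
    """Return the CVE IDs of records that list the given CPE name."""
    return [r["cve"] for r in records if cpe in r.get("cpes", [])]

records = [
    # Enriched record: carries a machine-readable CPE name.
    {"cve": "CVE-2024-0001",
     "cpes": ["cpe:2.3:a:acme:widget:1.0:*:*:*:*:*:*:*"]},
    # Unenriched record: describes the same product only in free text.
    {"cve": "CVE-2024-0002", "cpes": [],
     "description": "A flaw in Acme Widget 1.0 allows remote code execution."},
]

hits = search_by_cpe(records, "cpe:2.3:a:acme:widget:1.0:*:*:*:*:*:*:*")
# Only the enriched record is found; the unenriched one can only be
# located by a text search of every record's description.
```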

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book, Introduction to SBOM and VEX, is available here!

Wednesday, December 11, 2024

SaaS doesn’t have to “comply” with NERC CIP. Unless the first S stands for “Security”.

I have known for a while that there’s no single “CIP/cloud problem”. Instead, there are different problems based on different cloud use cases, and each of these problems has its own solutions, both short- and long-term. For example, I have usually treated SaaS (software as a service) as a monolithic entity: a SaaS implementation could never meet the definition of BES Cyber Asset, since there is no way that a SaaS product, if it were “…rendered unavailable, degraded, or misused…”, would in itself adversely impact the power grid.

Therefore, I have said that the CIP use case for SaaS only impacts the requirements that have to do with BES Cyber System Information (BCSI). If the SaaS provider can give their NERC entity customer the information they need to comply with the “BCSI requirements” (CIP-004-7 R6, CIP-011-3 R1 and CIP-011-3 R2 and their respective Requirement Parts, about seven in all), the customer will be able to fulfill their NERC CIP compliance obligations regarding their use of the SaaS product.

This is true, but with one big exception: when the SaaS implementation is a managed security service, either for physical or electronic security. A few examples of this are:

1.      A cloud-based service that monitors Electronic Security Perimeter(s) for intrusions and other unauthorized access attempts. This service meets the definition of Electronic Access Control or Monitoring System (EACMS): “Cyber Assets that perform electronic access control or electronic access monitoring of the Electronic Security Perimeter(s) or BES Cyber Systems.”

2.      A cloud-based physical access control system which meets the definition of Physical Access Control System (PACS): “Cyber Assets that control, alert, or log access to the Physical Security Perimeter(s)…”

3.      A multi-factor authentication service based in the cloud, which controls access to on-premises BES Cyber Systems. This is also an EACMS.

Can a NERC entity that uses one of these services today still be compliant with all applicable CIP requirements? Probably, but the security service provider will need to provide the NERC entity with evidence of compliance with not just seven Requirements and Requirement Parts (as in the case of non-security SaaS providers), but over 120 Requirements and Requirement Parts, in the case of EACMS in the cloud. PACS in the cloud will not require that much evidence, but there are special problems with PACS in the cloud (having to do with the Physical Security Perimeter, of course) that may not be easily solved.

You might wonder why a NERC entity that was evaluating services like this would even choose a cloud-based service; in fact, they probably wouldn’t do that. However, the big problem today is the number of on-premises security services (especially SIEM) that are announcing a move to being entirely cloud-based or are at least deprecating the on-premises service. That is, they will still offer an on-premises service, but all new features and upgrades will be aimed at the cloud service.

Of course, the big question is whether a SaaS provider will be willing to provide the compliance evidence for 120 Requirements and Requirement Parts, especially when they look at the current Version 8.1 of the NERC CIP Evidence Request Tool and learn what they will have to provide. Providing this evidence isn’t a promise that the SaaS provider should make lightly, but it’s also not impossible.

I’ve been making the point that the “full solution” to all aspects of the CIP/cloud problem is unlikely to be in effect much before 2031. I’m sure some NERC entities will be content to wait that long, but I’m also sure that many NERC entities (such as the 600 attendees at NERC’s recent Cloud Services Technical Conference on November 1) are chomping at the bit now to start using the cloud for their OT systems – if they can maintain CIP compliance when they do that.

The good news is that, at least for SaaS services that use BCSI and for cloud-based security services that meet the definition of EACMS, it’s possible for a NERC entity (whether they have low, medium or high impact BES Cyber Systems) to utilize them in the cloud today - if the provider of the service is willing to make the extra effort needed to gather compliance evidence.

Moreover, it is now clear (at least to me) that, at least for SaaS and cloud-based managed security services, there is no need for the service provider to lock the systems that provide the service in a single room with a defined PSP and ESP, as some people (including me) were saying less than a year ago. That is certainly a way to ensure compliance, but it also breaks the cloud model. At that point, the service effectively becomes an offsite data center.

While that model may still be required if a NERC entity wants to literally outsource their BES Cyber Systems to the cloud, it shouldn’t be necessary for SaaS or cloud-based MSSPs - if the SaaS provider is willing to work with their customers to provide the compliance evidence they need.

If you work for a SaaS or Managed Security Service Provider and would like to discuss this post, please drop me an email.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, December 9, 2024

It’s even worse than I thought

Before our biweekly OWASP SBOM Forum meeting on Friday, I asked Andrey Lukashenkov of Vulners for an update on where the National Vulnerability Database's (NVD) backlog of “unenriched” CVE records stands[i]. Andrey said the backlog is now over 21,300; this is at least 2,000 more than it was not much more than a month ago, and of course it’s a record high number for this year. Since there have been a total of 37,000 new CVE records added to the NVD this year, this means only about 43% contain a CPE name.

In other words, on average a simple search of the NVD using a known CPE name will only discover 43% of vulnerabilities identified since February 12, the day the NVD's problems started. Even though CVE records for the other 57% of vulnerabilities are present in the NVD, they don’t contain CPE names and therefore are invisible to searches. If you want to learn whether any of those vulnerabilities apply to your product, you need to do a text search of the 21,300 “unenriched” (i.e., CPE-less) CVE records. Of course, you would need to do that for every product of concern to you, and you would have to do it as often as you want to learn about newly reported vulnerabilities, which ideally is daily. Of course, nobody is going to do this.

Andrey also pointed out something even more startling: During the first four days of the week of December 2 (and presumably also on the day we were meeting, December 6), the NVD added CPE names to exactly 0% of new CVE records. Since their problems started on February 12th, the NVD has always enriched at least a few CVE records every day (other than a single day in May).

Of course, I assume the NVD will resume adding CPE names to CVE records sooner or later. But the idea that the NVD can eliminate their backlog in 2025 (or perhaps ever) looks more and more like fantasy. CISA has added about 2,000 CPEs for exploited vulnerabilities, but the backlog figure of 21,300 presumably takes those into account. In addition, a few firms like Vulners (Andrey’s employer) and VulnCheck have taken it upon themselves to add their own CPEs to some of the unenriched CVE records; unfortunately, neither of these firms has official “Alternate Data Provider” (ADP) status, so it isn’t clear what will happen to the CPE names they created, when and if the NVD returns.

In other words, today automated searches of the NVD, and presumably vulnerability scanner output as well, will normally identify no more than 50% of the vulnerabilities reported since February 12. If you went to a doctor to diagnose your illness and they told you up front that they could recognize fewer than half of the new diseases discovered this year, would you keep going to them? That is essentially the problem the software security community faces now.

What’s the solution to this problem? Some people have pointed to the CVE Numbering Authorities (CNAs) as the solution. These are the organizations, including a number of large software developers (e.g., Oracle, Microsoft, and Schneider Electric) and organizations like GitHub, MITRE and the Japanese JP-CERT, that create the CVE records in the first place. They report vulnerabilities in products they have developed themselves, as well as products from developers, including open source projects, that are not themselves CNAs.

The question is why the CNAs aren’t adding CPE names to the CVE records they create. Since the SBOM Forum includes several large CNAs, we have discussed this question a lot. I have heard two main answers:

1.      In the past, the NVD has usually rejected CPE names that were created by anyone other than the NVD, presumably on the grounds that only the NVD knows how to create them. Unfortunately, if the NVD has some sort of secret process they follow to create CPE names, they have never revealed it. Moreover, that process seems to include at least a few purely random elements, since nobody has ever come up with a way to predict a CPE name with certainty. For a discussion of some of the problems with CPE, as well as how they might be addressed, see this 2022 paper by the SBOM Forum (the discussion of problems with CPE is found on pages 4-6).

2.      To be honest, there seems to be little if any enthusiasm among the CNAs to start creating CPEs, precisely because so much of the process seems to be arbitrary. Nobody can be expected to invest a lot of time creating a CPE name when it has all the durability of a Jell-O sculpture.
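To see why CPE names are so hard to predict, consider the CPE 2.3 format itself. The syntax is rigid, but the values of the vendor and product fields are not. In this sketch, the product and both naming choices are hypothetical; the point is that two equally plausible names for the same product will never match each other in an exact search:

```python
def make_cpe(vendor, product, version):
    """Builds a CPE 2.3 formatted string (only part 'a', application, shown)."""
    return f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

# Two equally plausible names for the same (hypothetical) product.
# Nothing in the specification dictates which one the NVD would have chosen:
cpe1 = make_cpe("acme_software", "acme_widget_pro", "4.1")
cpe2 = make_cpe("acme", "widget_pro", "4.1.0")
print(cpe1)           # cpe:2.3:a:acme_software:acme_widget_pro:4.1:*:*:*:*:*:*:*
print(cpe2)           # cpe:2.3:a:acme:widget_pro:4.1.0:*:*:*:*:*:*:*
print(cpe1 != cpe2)   # True: an exact-match search on one misses the other
```

A CNA that guesses one form while the NVD would have chosen the other has effectively created an orphan identifier, which is exactly the disincentive described above.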

Fortunately, there is an alternative to CPE called purl, which stands for “package URL”. In less than a decade, purl has gone from nowhere to conquering the open source software world; it is used as the software identifier in almost every open source vulnerability database worldwide. The notable exception is the NVD and the databases derived from it, which of course use CPE.

Why has purl been so successful in the open source world? This post discusses several reasons, but the most important is determinism: any user who wants to know the purl for an open source product they downloaded from a package manager will create exactly the same purl as any other user, as long as both are naming the same version of the same product from the same package manager.

Moreover, the CNA reporting a vulnerability in that product in a CVE record will create the purl from the same information, meaning a purl used to search a vulnerability database should always (barring human error) match the purl in the CVE record. Unlike the NVD today, where a search for CVEs applicable to a product will probably miss more than half of the vulnerabilities identified in that product this year, a search in a purl-based open source vulnerability database like OSS Index should always yield every vulnerability ever reported for the product.
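Here is what that determinism looks like in a simplified sketch. The helper below ignores details like case normalization, qualifiers, and percent-encoding, which the full purl specification does address; the packages shown are real, but the helper itself is just an illustration:

```python
def make_purl(pkg_type, name, version, namespace=None):
    """Builds a package URL (purl): pkg:type/namespace/name@version."""
    ns = f"{namespace}/" if namespace else ""
    return f"pkg:{pkg_type}/{ns}{name}@{version}"

# Anyone constructing a purl for the same version of the same package,
# from the same package manager, arrives at the same string:
print(make_purl("pypi", "requests", "2.31.0"))   # pkg:pypi/requests@2.31.0
print(make_purl("npm", "lodash", "4.17.21"))     # pkg:npm/lodash@4.17.21
print(make_purl("maven", "log4j-core", "2.17.1",
                namespace="org.apache.logging.log4j"))
# pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1
```

Because every field comes straight from the package manager, there is nothing arbitrary for the CNA or the user to guess at, which is precisely what CPE lacks.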

However, there are two important tasks (each with sub-tasks) that need to be accomplished, before purl can be placed on an equal footing with CPE in CVE records.[ii] They are:

First task: CVE Numbering Authorities need to start including purls in CVE records when the product being referenced is an open source product in a package manager. While that has been technically possible since the CVE 5.1 record format came into effect this past spring, it turns out that virtually no CNAs are actually doing this. The biggest reason is undoubtedly that neither of the two major US government-run databases, the NVD and CVE.org, currently accepts any software identifier other than CPE. So a CVE record with a purl identifier is all dressed up with nowhere to go.

How can this situation be changed? Some group needs to conduct extensive outreach to the CNAs and to CVE.org (which runs the CVE Program, including recruiting and managing the CNAs). That outreach will include “evangelizing” about the advantages of including purls in CVE records, as well as training on the details of doing so. Just as importantly, the group needs to work with the CNAs and CVE.org to identify the policies and procedures that must be in place for purls to be successfully used in the CVE context.

One important part of this effort will be conducting an end-to-end proof of concept, in which:

1.      CNAs will include a purl whenever they create a CVE record to report a new vulnerability in an open source product found in a package manager. The purl will be based on the package manager name, as well as the product name and version string in that package manager.

2.      A purl-based vulnerability database will ingest the CVE record, just as the NVD does for CVE records now.

3.      A user who has downloaded an open source product from a package manager will easily create a purl using the package manager name, as well as the product name and version string as registered in the package manager. Since the user’s purl should always match the purl that the CNA included in the CVE record, the search should always return every CVE that has been reported for that product.

The results of this proof of concept should help convince CVE.org and the CNAs that purl is a much better identifier for open source software than CPE.

Second task: Purl needs to be able to identify commercial software, not just open source software found in package managers. A scheme for doing this was suggested in 2022 by Steve Springett, leader of the OWASP Dependency-Track and CycloneDX projects and a founding member of the OWASP SBOM Forum, in the above-referenced white paper on CPE naming in the NVD. Steve’s idea is that commercial software suppliers will create standardized short documents called “SWID tags”, which provide authoritative metadata for a software product, including the supplier name, product name and version string.

Whenever the supplier wishes to report a new vulnerability in their product, they will provide the SWID tag to the CNA who creates the new CVE record. The CNA will create the product’s purl using the information in the SWID tag; they will include the purl in the CVE record. Later, when an end user wants to learn about new vulnerabilities that have been identified in a commercial product they use, they will be able to locate and download[iii] the same SWID tag as the CNA used when they created the purl in the CVE record. The fact that both the CNA and the end user will base their purls on the same SWID tag means the purls should be identical barring human error, just as in the above case of purls for open source software distributed in package managers.

The three primary goals of the project are:

1.      To work with commercial software developers, vulnerability management service providers, and end users to identify policies and procedures for creation and use of purls based on SWID tags.

2.      To evangelize and train CVE.org staff members and CNAs on creation and use of the new SWID-based purls. Of course, this effort will build on the evangelization and training in the first task.

3.      To conduct an end-to-end proof of concept that essentially mirrors the one described in the first task, except that the purl name will always be based on the contents of the SWID tag prepared by a commercial software supplier, not the name and version string for an open source product distributed through a package manager.[iv]

Tom Alrich and Tony Turner of the OWASP SBOM Forum have developed a white paper that proposes a project to accomplish both of the above tasks, along with a project plan[v] for doing so. The project is called “Purl Expansion Design and Proof of Concept”. Because this project will almost certainly take more than a year, and because neither of us can donate that amount of time, we are requesting donations to fund at least part of the effort. While we believe the whole project will require over $100,000 in funding, we are willing to start with a much more modest donation or donations.

If you or your organization are able to donate any amount over $1,000, you can donate to OWASP (a 501(c)(3) nonprofit organization) and have your donation “directed” to the SBOM Forum; this can be done either online or directly. Donations are often tax deductible.

If you would like to discuss this, please email Tom Alrich at tom@tomalrich.com

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] CVE records – i.e., records of newly-discovered software vulnerabilities – are supposed to include one or more machine-readable software identifiers called CPE names. The CPE name identifies a software product that is affected by the vulnerability identified by the CVE number. Before February 12, 2024, the NVD always created a CPE name for every product named (in a text field) in the CVE record. However, on that day the NVD’s production of CPE names dropped precipitously; it has not recovered since that day.

[ii] There should be no problem with having both a CPE name and a purl in a single CVE record, since there is no intention of purl “replacing” CPE. As long as somebody - perhaps the NVD staff, or perhaps some CNAs who prefer CPE – is willing to keep creating new CPE names, they will continue to be used. Moreover, the huge set of CPE names already created (at least 250,000, and probably more than that) will not disappear, since there is no good way to replace them with purls in existing CVE records.

[iii] End users will be able to locate and download a SWID tag, as well as other types of software supply chain artifacts like SBOMs and VEX documents, by utilizing the upcoming Transparency Exchange API. It will be fully available in 2025.

[iv] Package managers almost never distribute commercial software.

[v] The project plan primarily focuses on the second task, since the need for the first task was not apparent until very recently.

Wednesday, December 4, 2024

Daddy, when will we get to the cloud?... With luck, 2031…Daddy, can’t we get there faster?


I recently realized that, except for two low impact Control Centers that are deployed in the cloud, NERC entities are not using the cloud for any systems or information subject to CIP compliance.

I have known for a long time that there isn’t much use of the cloud for BES systems. However, I also believed that, because storage and use of BCSI (BES Cyber System Information) in the cloud became “legal” for medium and high impact environments on January 1, 2024 (when the revised standards CIP-004-7 and CIP-011-3 came into effect), at least some NERC entities had jumped at the opportunity this provided. Specifically, they could now freely utilize SaaS applications that require BCSI access. These applications are already heavily used on the IT sides of their organizations: for example, multi-factor authentication and configuration management.

In fact, I believed there was already one SaaS application that utilizes BCSI that many NERC entities were using freely in medium and high impact environments. It turns out I was mistaken. Even though many NERC entities use this application heavily in their IT environments, for their OT environments the SaaS provider has been supporting an on-premises version of their software. I’m sure neither the SaaS provider nor their NERC customers are completely happy with that arrangement, but it seems the lack of clear guidance on how to comply with the two revised BCSI standards has discouraged NERC entities from even taking a chance on using SaaS with BCSI.

So how are NERC entities using the cloud for systems subject to CIP compliance today? At NERC’s GridSecCon conference in October, it was stated (by someone in a position to know) that there are a grand total of two low impact Control Centers located in the cloud today (I assume they’re renewables generation Control Centers, since those are probably the easiest to locate in the cloud).

As I discussed in this post, it’s possible to maintain a low impact Control Center (which consists of BES Cyber Systems, of course) in the cloud while remaining completely CIP compliant. This is because for low impact BCS, there should not be any need for the SaaS provider to deliver compliance evidence to the operator of the Control Center. The minimal amount of CIP compliance evidence that is required for lows (mainly, evidence for CIP-003 R2 Attachment 1 compliance) can be generated entirely by the NERC entity itself.

Of course, that would definitely not be the case for a high or medium impact Control Center. Implementing one of those in the cloud today would require the platform Cloud Service Provider to furnish a mountain of compliance evidence; moreover, some of the required evidence would be literally impossible for the CSP to provide, even if they were inclined to do so.

When you think of all the advantages that the cloud provides (for example, a much more robust way of backing up systems or even entire facilities), it’s a shame that most NERC entities are not able to take advantage of them for their OT systems.

What’s even worse is that, by my estimate, it will be at least 2031 before this situation is fixed. That means new or revised CIP standards (and perhaps changes to the NERC Rules of Procedure) will have to be drafted and balloted by NERC entities at least three or four times, approved by the NERC Board of Trustees, approved by FERC (which by itself will almost certainly take more than a year), and taken through an implementation period of at least two years. If you think I’m exaggerating, please read this post and let me know where I went wrong.

Now, let’s address the last part of the title of this post, where the child asks their father whether they can reach the destination in less than seven years. It would be nice if there were a magic wand answer like, “We just need to draft the changes we want in the CIP standards and get Mr./Ms. Jones to approve them.” However, I’ve never seen a magic wand in the NERC CIP world, nor do I expect ever to see one. So what other options do we have? Since we can’t modify the CIP standards in much less than seven years, what else can we do to enable at least partial use of the cloud by NERC entities within, say, a couple of years?

People I’ve talked with, both NERC ERO staff members and staff members with NERC entities, seem to agree that the best alternative to waiting for the standards themselves to be changed is CMEP Practice Guides (CMEP stands for “Compliance Monitoring and Enforcement Program”). These are documents prepared by auditors from the NERC Regional Entities, who agree on how they will interpret certain unclear wording in a standard or a NERC Glossary definition during audits. The Practice Guides are not binding on auditors, but an auditor who ignores the guidance in a CMEP Practice Guide will need to explain why they did that.

Unlike changes to the NERC standards, which can originate with any party including a NERC entity, these documents need to originate with the auditors. However, there is nothing to prevent a NERC entity from suggesting a CMEP Practice Guide to the auditors. Two that I know of have been suggested:

1.      A Practice Guide to clarify how the word “monitoring” in the definition of Electronic Access Control or Monitoring Systems (EACMS) is interpreted. The person who suggested this feels that “monitoring” may be interpreted too broadly by CIP auditors. Narrowing the meaning down (there is no NERC definition of monitoring, nor is one being proposed now) would mean that some security monitoring services delivered through the cloud now, or in the future, would not be interpreted as EACMS. This is important, since an EACMS in the cloud is subject to all 29 CIP requirements (comprising 92 Requirement Parts) that an on-premises EACMS must comply with. Needless to say, no CSP is ever going to provide that amount of evidence to a NERC entity, even disregarding the fact that requirements like CIP-007 R2 (patch management) and CIP-010 R1 (configuration management) are literally impossible for a CSP to comply with.

2.      A Practice Guide to clarify how a SaaS provider can maintain their customers’ compliance with CIP-004-7 R6.1’s requirement for the NERC entity to “authorize… Provisioned electronic access to electronic BCSI”, without having to get the permission of every NERC entity customer whenever the provider grants even temporary provisioned access to BCSI to a person who does not currently have provisioned access. This might require allowing the CSP to sign delegation agreements with their NERC CIP customers. Hopefully, this Practice Guide will finally clarify the BCSI compliance requirements enough to make both NERC entities and SaaS providers comfortable with utilizing BCSI in SaaS products.

Here is my idea for a Practice Guide that I believe would at least partially address the problems inhibiting cloud use by NERC entities: map particular NERC CIP requirements to requirements of ISO 27001/2, based on a determination by a committee of auditors that the ISO requirement hews closely enough to the CIP requirement.

Of course, there are many differences between CIP requirements and requirements in ISO 27001/2, so the correspondence will never be perfect. However, the team that approves these mappings should be encouraged not to let the perfect be the enemy of the good, especially since in many cases the ISO 27001 requirement will be stronger than the CIP requirement.

I believe that probably 90% of the current NERC CIP requirements can be mapped to ISO 27001 requirements in this way. This is because, believe it or not, most of the NERC CIP requirements are objectives-based. This means they require the NERC entity to achieve a particular objective, not perform a prescribed set of tasks, as does a prescriptive requirement. Of course, how a cloud service provider achieves an objective will always be very different from how a NERC entity achieves the same objective in their on-premises environment. But they can certainly both achieve the objective of any cybersecurity requirement if it’s reasonable (i.e., not something like “Secure your systems so that no attacker will ever be able to penetrate them”).

However, there are two NERC CIP requirements that are highly prescriptive: CIP-007 R2 for patch management and CIP-010 R1 for configuration management. These two requirements are easily the biggest source of headaches for CIP compliance professionals. In fact, one NERC entity with high impact Control Centers told me a number of years ago that, of all the documentation they compiled in their Control Centers for all the NERC requirements (not just the CIP requirements), half of that documentation was due to just CIP-007 R2 and CIP-010 R1.

However, there’s a much bigger problem with these requirements than just the fact that they’re prescriptive: Both requirements must be complied with on the level of individual cyber assets (i.e., individual devices like servers). This means, for example, that the CSP would need to track every device on which any part of a system resided, even for a second, during the three year audit period. No CSP will be willing to track systems on that level, even if they happen to have that data available.

What do we do about these two requirements? Their problems can’t be fixed with just a CMEP Practice Guide; they need to be completely rewritten. Will that take seven years, like with the “cloud CIP” requirements? No, but we need to assume it will require 3-4 years, since just fixing these two requirements will be no easy task.

For one thing, both requirements need to be rewritten as objectives-based requirements. However, for CIP-007 R2, doing that requires changing it from a patch management requirement to a vulnerability management requirement. There are a number of issues that need to be considered before we do that, which I will address in a new post soon.

In other words, trying to fix CIP-007 R2 and CIP-010 R1 to allow them to work in the cloud will be a real slog (and it should be done by a different Standards Drafting Team, since I don’t want to add another four years to the current “Risk Management for Third-Party Cloud Services” SDT’s agenda; they’ll probably kill me if I do that, and frankly I wouldn’t blame them). But this has to be done. Ever since CIP version 5 (which introduced both of these requirements) came into effect in 2017, I’ve heard constant complaints about these two requirements. Cloud or no cloud, they finally need to be fixed.  

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, November 27, 2024

The fundamental problem preventing CIP compliance in the cloud today


I now believe there are five main problems that make it hard, if not impossible, for NERC entities to maintain CIP compliance while deploying certain CIP-related workloads in the cloud. Each of these problems is unique and requires a unique solution. I will discuss each of them, as well as their possible solutions, in separate posts soon. They are (in approximate order of importance):

1.      The “EACMS problem”.

2.      The medium impact renewables Control Centers problem.

3.      The SaaS/BCSI problem.

4.      The high and medium impact utility Control Centers problem.

5.      The low impact Control Centers problem.

Contrary to what many people think, it isn’t true that the NERC CIP requirements in any way “forbid” use of the cloud by assets that fall under the purview of CIP. The current requirements had their genesis in the years after 2008, when FERC approved CIP version 1. At that time, the cloud was very new. The idea that assets that control the power grid might at some point be deployed in the cloud was almost unthinkable. Therefore, the original CIP requirements said nothing about the cloud, because nobody thought it was likely this would ever become an issue.

To this day, there is no mention of the cloud in any CIP requirement or definition. This means that any NERC entity that wishes to outsource their entire OT environment to the cloud can do so without fear of being in direct violation of any CIP requirement – as long as they don’t mind receiving a boatload of Notices of Potential Violation anyway. How can this happen?

It can happen because, as any NERC compliance person well knows, remaining in compliance with any NERC requirement means being able to provide appropriate evidence of compliance. Non-binding suggestions for that evidence are usually found in the “Measures” column of the Requirement. The general rule of NERC compliance is: “If you didn’t document it, you didn’t do it.”

Of course, proving compliance with any cybersecurity standard always requires some sort of evidence. However, NERC CIP differs from other standards in that the NERC entity needs to be prepared to provide evidence that they were compliant with a CIP requirement in every instance in which compliance was required; it doesn’t matter whether the systems in question are deployed on premises, in the cloud, or both.

The problem with this is that the Measures were all developed for the on-premises use case only (except for the Measures shown in Requirement CIP-004-7 Part R6.1 and Requirement CIP-011-3 Part R1.2, which were developed with both cloud and on-premises systems in mind).

For many CIP Requirements and Requirement Parts, evidence for compliance in the cloud does not pose a problem, since the Requirement merely states the objective to be achieved; usually, the objective is implementing a policy or procedure. For example, CIP-005-7 Requirement R1 Part R1.5 requires the entity to “Have one or more methods for detecting known or suspected malicious communications for both inbound and outbound communications.”

The Measures section of that Requirement Part reads, “…documentation that malicious communications detection methods (e.g. intrusion detection system, application layer firewall, etc.) are implemented.” In other words, the NERC entity needs to document how they complied with this Requirement Part, but they are allowed to choose the technology or technologies they implement to achieve the objective. For on-premises systems, the evidence might be output produced by an IDS or an application layer firewall. For the cloud, it might be an audit report for ISO 27001 certification or FedRAMP authorization, which describes how the CSP has complied with a requirement to detect malicious inbound communications. A CIP auditor might consider both of these to be evidence of compliance with CIP-005-7 Requirement R1 Part R1.5.[i]

However, some NERC CIP Requirements are not objectives-based, but instead mandate that particular actions be performed without regard to achieving a particular objective. For example, CIP-007-6 Requirement R2 Part R2.2 requires the NERC entity to “At least once every 35 calendar days, evaluate security patches for applicability that have been released since the last evaluation from the source or sources identified in Part 2.1.”

Among other things, this tightly packed Requirement Part mandates that the NERC entity check with the vendor of every software product installed on any system within their high or medium impact Electronic Security Perimeter every 35 days, to determine whether a new security patch is available for their product. The entity then needs to determine whether the patch is applicable to their product or environment. If it is applicable, they need to either apply the patch or develop a mitigation plan.
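The 35-day clock itself is simple arithmetic, though tracking it across hundreds of software products is not. A minimal sketch of the per-product check (the dates are illustrative, and the function name is my own):

```python
from datetime import date

def evaluation_overdue(last_evaluation: date, today: date,
                       window_days: int = 35) -> bool:
    """True if more than 35 calendar days have passed since the last
    check of the patch source(s) identified per CIP-007-6 Part 2.1."""
    return (today - last_evaluation).days > window_days

print(evaluation_overdue(date(2024, 11, 1), date(2024, 12, 17)))   # True (46 days)
print(evaluation_overdue(date(2024, 11, 20), date(2024, 12, 17)))  # False (27 days)
```

The compliance burden comes not from this calculation but from running it, and documenting the result, for every software product on every in-scope system, every cycle.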

A security patch almost always fixes one or more software vulnerabilities (often identified using a CVE number). However, according to the well-respected vulnerability intelligence firm VulnCheck, only 1.1% of publicly known vulnerabilities are observed being exploited by attackers.

Does this mean that an organization not subject to CIP compliance could safely deploy just 1.1% of the security patches made available for their systems? Since observed exploitation is always an ex post facto measure, waiting for your system to be exploited before applying the patch is probably not a great strategy. However, there are resources that indicate or predict exploitation, such as CISA’s Known Exploited Vulnerabilities (KEV) catalog and the EPSS score (which estimates the likelihood that a vulnerability will be exploited), that all organizations can use to prioritize their patching efforts.

Since virtually all large organizations today have a big backlog of patches to apply but nowhere near enough bandwidth to apply them all, they must triage them. They need to accept the fact that they won’t be able to apply all patches, and instead divide them into three groups: patches they definitely will apply, others they definitely won’t apply, and still others they will apply only if time permits.
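Here is a rough sketch of what such a triage might look like in code. The thresholds are purely illustrative assumptions, not recommendations; a real program would tune them to its own risk appetite, and would pull real EPSS scores and KEV membership from the FIRST.org and CISA feeds rather than hard-coding them:

```python
# Hypothetical vulnerability data; in practice, EPSS scores come from the
# FIRST.org EPSS feed and KEV membership from CISA's KEV catalog.
patches = [
    {"cve": "CVE-2024-1111", "epss": 0.92, "in_kev": True,  "cvss": 9.8},
    {"cve": "CVE-2024-2222", "epss": 0.02, "in_kev": False, "cvss": 7.5},
    {"cve": "CVE-2024-3333", "epss": 0.01, "in_kev": False, "cvss": 3.1},
]

def triage(patch, epss_threshold=0.1, cvss_floor=4.0):
    """Sorts a patch into one of the three buckets described above."""
    if patch["in_kev"] or patch["epss"] >= epss_threshold:
        return "apply"            # known or likely exploitation: patch now
    if patch["cvss"] >= cvss_floor:
        return "if_time_permits"  # severe on paper, but no exploitation signal
    return "skip"                 # low severity, no exploitation signal

for p in patches:
    print(p["cve"], "->", triage(p))
# CVE-2024-1111 -> apply
# CVE-2024-2222 -> if_time_permits
# CVE-2024-3333 -> skip
```

This is exactly the kind of risk-based sorting that CIP-007 R2, as written, does not permit.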

However, a NERC entity subject to compliance with CIP-007 is not allowed to consider information about active exploitation (or anything else) in deciding whether to apply a security patch. Requirement CIP-007-6 Part 2.2 does not allow NERC entities with high or medium impact BES Cyber Systems to ignore any patch because it has very low risk of exploitation. It doesn’t matter whether the patch mitigates any significant security risk or not; if it is available and it applies to the NERC entity’s configuration, it must be applied.[ii]

I don’t honestly know whether CSPs follow the NERC CIP approach and try to apply every available security patch regardless of whether it mitigates any risk, or whether they triage patches based on the risk of exploitation of the vulnerability(ies) that are mitigated by the patch. However, if they take the latter approach, they are not complying with the letter of CIP-007 R2, even though I believe a risk-based approach is best for almost any cybersecurity problem.

But there is a much bigger problem that prevents platform CSPs from producing compliance evidence for prescriptive CIP requirements like CIP-007-6 R2, CIP-010 R1, and CIP-005 R1: evidence for these three requirements must be produced on an individual device basis, because the requirements can only be complied with at the device level. And since cloud workloads migrate from system to system and data center to data center all the time, a single BCS might reside on hundreds or even thousands of individual devices during a three-year audit period. There is simply no way a platform CSP could ever produce the full set of evidence for any of the prescriptive CIP requirements, even if they were inclined to do so.

However, the platform CSPs could potentially comply with CIP requirements that just mandate policies or procedures. If they do comply, it will probably be with three non-negotiable positions[iii]:

1.      They will not provide CIP compliance evidence to individual NERC entities, but only to NERC itself (or perhaps a third party designee). It will be up to NERC to share that evidence with any NERC entity that can demonstrate a need for it.

2.      The evidence the platform CSP provides will include selections from audit reports for ISO 27001 certification, FedRAMP authorization, and SOC 2 Type 2 audits. It will be up to the individual NERC entities to decide what to make of that information; NERC will never “certify” CSPs for use by NERC entities (indeed, any attempt to do so might be considered an antitrust violation).

3.      While platform CSPs are open to answering other questions besides the ones included in frameworks like the NIST CSF, ISO 27001 and FedRAMP, the questions will need to be agreed on beforehand by NERC entities. NERC will administer the questions and evaluate any evidence provided by the CSPs before distributing it to the NERC entities. However, this will not be a “compliance audit”, since neither NERC nor FERC has any jurisdiction over the CSPs.

While I think this NERC “audit” of platform CSPs needs to be part of whatever final set of solutions comes out of the current CIP standards drafting effort, I don’t see any way it can be part of the CIP standards themselves. What I’m describing here will require changes to the NERC Rules of Procedure. Drafting those changes will probably require a NERC team separate from the current Project 2023-09 Risk Management for Third-Party Cloud Services team, as well as one or two additional years of effort, before all the pieces of the full solution are in place. That’s why the NERC CIP community needs to think now about partial solutions that will allow NERC entities that wish to do so to make as much use of the cloud as possible, without requiring a complete rewrite of the CIP standards.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Currently, a NERC CIP auditor is unlikely to accept an audit report as compliance evidence, since there is nothing in the Rules of Procedure that allows for acceptance of audit results - other than CIP audit results - as evidence of compliance with a CIP Requirement. A permanent fix to this problem will probably require changing the Rules of Procedure, although as a temporary measure a “CMEP Practice Guide”, which is created by the NERC auditors to address an area of ambiguity in the current requirements, would probably be sufficient.

[ii] Many tools now ease the burden of compliance with this Part, although there is always a large amount of care and feeding involved with CIP-007 R2 compliance, regardless of the degree of automation.

[iii] This section just applies to the major platform CSPs, not to SaaS providers. I think the latter should be subject to a NERC “audit” as well, but it should be very different from that of the platform CSPs, since their situation is very different.