Saturday, July 5, 2025

What will the “Cloud CIP” standards protect?

While it might seem like the NERC CIP Reliability Standards are constantly going through major changes, in fact there has only been one major change since the CIP version 1 standards came into effect starting in 2008. That was the implementation of CIP versions 5 and 6 on a single day: July 1, 2016. In other words, there have so far been just two major “phases” of NERC CIP: versions 1 through 3 in 2008 through 2016, and versions 5[i] and later, starting in 2016 and continuing through today.[ii]

What distinguishes the first two phases of CIP, and how will we know when we’ve entered a new phase? It’s simple: You need to look at the “Purpose” statement at the beginning of each CIP standard and see what the standard is there to protect. You’ll know we’re in a new phase if the statement has changed since the previous version of the standard.

For example, the Purpose statement of CIP-002-1 (which came into effect in 2009) reads, “NERC Standards CIP-002 through CIP-009 provide a cyber security framework for the identification and protection of Critical Cyber Assets to support reliable operation of the Bulk Electric System (my emphasis).” Versions 1-4 of the NERC CIP standards were all designed to protect Critical Cyber Assets; these were defined as cyber assets that are “essential to the operation of” a Critical Asset (Critical Assets were certain Control Centers, generating stations and transmission substations that played a special role in the BES).

Starting with the CIP version 5 standards in 2016 and continuing today, the Purpose statements have referred to BES Cyber Systems[iii]. For example, the Purpose statement of CIP-004-7 (the current version of CIP-004) reads, “To minimize the risk against compromise that could lead to misoperation or instability in the Bulk Electric System (BES) from individuals accessing BES Cyber Systems by requiring an appropriate level of personnel risk assessment, training, security awareness, and access management in support of protecting BES Cyber Systems.”

However, when what I call the “Cloud CIP” standards, which are being developed by the current Project 2023-09 Risk Management for Third-Party Cloud Services Standards Drafting Team, come into effect, that will mark the beginning of a third phase; the new or revised standards will undoubtedly require new Purpose statements, as well as new applicable asset types.

For example, BES Cyber Systems are defined as groupings of BES Cyber Assets[iv], which are themselves a type of Cyber Asset. Because both BES Cyber Asset and Cyber Asset refer to physical devices, and because it’s not possible to track usage of physical devices in the cloud for compliance purposes, this means that BES Cyber Systems, as currently defined, cannot be the basis for CIP compliance in the cloud.

However, this doesn’t mean the term BES Cyber System will disappear when the Cloud CIP standards are implemented. Instead, it’s likely that the Cloud CIP standards will have two “tracks”: one for “on premises” BCS and the other for BCS implemented in the cloud. It’s safe to say that all NERC entities that operate on premises BCS today will continue to have them after the Cloud CIP standards come into effect, since there is no way that any electric utility (or generator, for that matter) could outsource their entire physical operations to the cloud.

In fact, it’s possible that, once the Cloud CIP standards come into effect, the Purpose statements of standards CIP-003 through CIP-011 and CIP-013 (the standards that are in place today and refer to BES Cyber Systems) will refer to protection of “On Premises BES Cyber Systems”, while the Purpose statements of one or more new standards (starting with CIP-016) will refer to protection of “Cloud BCS”.

But there’s something else that the current CIP standards protect, even though it’s not mentioned in any of the Purpose statements: BES Cyber System Information (BCSI). There are only three CIP Requirements and seven Requirement Parts that refer to BCSI: CIP-004-7 Requirement R6 Parts 6.1, 6.2 and 6.3; CIP-011-3 Requirement R1 Parts 1.1 and 1.2, and CIP-011-3 Requirement R2 Parts 2.1 and 2.2.

The BCSI concept has been around since CIP version 5 came into effect in 2016 (in fact, the definition is unchanged since then), but it was only when the current versions of CIP-004 and CIP-011 came into effect on January 1, 2024 that it became possible to store or utilize BCSI in the cloud, without fear of violating one or more of the BCSI requirements. Unfortunately, very few NERC entities have taken advantage of this development yet. 

Will the BCSI concept still be around when the Cloud CIP standards come into effect – i.e., when the third NERC CIP phase begins (which will likely be 3-6 years from now)? Yes. This is because, as far as CIP compliance is concerned, there are two types of services that need to be enabled in the cloud:

1.      “Systems in the Cloud”. These include cloud based BES Cyber Systems (“Cloud BCS”), cloud based Electronic Access Control or Monitoring Systems (“Cloud EACMS”), and cloud based Physical Access Control Systems (“Cloud PACS”).

2.      SaaS that uses BCSI.[v] I don’t believe that, when the Cloud CIP standards are implemented, any change will be needed, either to the definition of BCSI or to the three requirements that deal with BCSI today: CIP-004-7 R6, CIP-011-3 R1 and CIP-011-3 R2.

In discussions of the cloud and CIP, including discussions of the Standards Drafting Team now considering how Cloud CIP will work, one question comes up often: what is the difference between a cloud-based BES Cyber System and a SaaS application that performs the same function as a BCS? The best example of this question is a SCADA system. In other words, what’s the difference between transferring the software running on an on premises SCADA system to the cloud and utilizing the same SCADA software in the cloud as SaaS?

In my opinion, this is an easy question to answer. The defining characteristic of a BES Cyber System is that “if rendered unavailable, degraded, or misused”, it “would, within 15 minutes of its required operation, misoperation, or non-operation, adversely impact one or more Facilities, systems, or equipment, which, if destroyed, degraded, or otherwise rendered unavailable when needed, would affect the reliable operation of the Bulk Electric System.”[vi]

This means that a BCS installed in the cloud would need to be directly connected to some device in the “real world”, like a relay; that is the only way it could have a 15-minute impact. However, if the same SCADA software were installed in the cloud but didn’t have any direct connection to a device that can affect the BES (for example, if the software in the cloud doesn’t send a command to the relay, but instead advises the operator to send the command), it would be SaaS.

As I mentioned above, the two revised CIP standards that deal with BCSI, CIP-004-7 and CIP-011-3, were designed to make storage and use of BCSI in the cloud completely “legal”. While there are certainly other reasons to store and use BCSI in the cloud, the most important use is in SaaS applications that require use of BCSI, including configuration management, identity and access management, etc.

Even though the two revised standards came into effect on January 1, 2024, there has been a lot of confusion about how to comply with them. Probably because of this confusion, NERC entities are currently only using a small number of SaaS applications that require BCSI access.

Since it is now “legal” to store and utilize BCSI in the cloud, what’s the status of cloud based BCS? The compliance obligations for the two types of services are very different. There are only ten CIP Requirements and Requirement Parts that apply to SaaS use of BCSI, whereas there are over 100 Requirements and Requirement Parts that apply to cloud-based BCS (there are fewer than 100 Requirements and Parts that apply to EACMS or PACS in the cloud, but in both of these cases, there are many more Requirements and Parts than there are for SaaS that uses BCSI).

In other words, use of BCSI in SaaS is quite possible today, but use of cloud-based BCS, EACMS and PACS is still close to impossible – not because implementing these systems in the cloud directly violates any CIP requirement, but because it is highly unlikely that any CSP could ever produce the evidence required for a NERC entity to prove compliance with even a few of the 100+ currently applicable CIP Requirements and Requirement Parts[vii].

This is why NERC entities wishing to utilize cloud-based BCS, EACMS and PACS will need to wait for the “Cloud CIP” requirements to be implemented. However, that isn’t all that’s required. Because enforcing cloud-based CIP standards will likely require changes to the NERC Rules of Procedure, there might need to be a separate process to draft and ballot those RoP changes.

In other words, unless NERC takes some extraordinary measures soon (which is possible but not likely), implementation of medium and/or high impact BCS, EACMS or PACS in the cloud is still many years away; however, use of SaaS applications that require BCSI access is “legal” now. I plan to discuss how that can be done very soon.

My blog is more popular than ever, but I need more than popularity to keep it going. I’ve often been told that I should either accept advertising or put up a paywall and charge a subscription fee, or both. However, I really don’t want to do either of these things. It would be great if everyone who appreciates my posts could donate a $20-$25 (or more) “subscription fee” once a year. Will you do that today?

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.



[i] You may notice I didn’t mention CIP version 4. CIP v4 was approved by FERC in 2012, but never came into effect. This was because FERC surprised many people (including me) by approving CIP version 5 in April 2013, and announcing that v4 would never come into effect.

[ii] Until CIP versions 5 and 6 were implemented in 2016, all the CIP standards were revised at the same time, even when only one or two of them had changed. For example, when FERC approved the CIP version 2 standards, they ordered a change that just applied to CIP-006-2: adding a Requirement R2, for escorted access of visitors inside a Physical Security Perimeter.

However, instead of just developing and balloting a new version of CIP-006 (CIP-006-3), the Standards Drafting Team had to revise each of the other standards, just to change the version number of each standard from 2 to 3. Not only did the SDT have to do that, but they had to submit all the standards to each of the three or four ballots that were required for NERC entities to approve the new version of CIP-006 (few if any new or revised CIP standards have required fewer than four ballots before they were finally approved by the NERC ballot body). Thus, instead of having to submit one standard for each of four ballots, the drafting team had to submit eight standards (CIP-002 through CIP-009; CIP-010 and CIP-011 were only added to the CIP standards with CIP version 5) for each of four ballots.

After that experience, the decision was made not to try to “rev” all the standards at once. Unless the wording of a standard has changed, its version number does not need to change. Of course, at that point the concept of a version number for the CIP standards as a whole (e.g., “CIP version XX”) no longer made sense. Instead, the individual CIP standards now carry version numbers from 2 through (soon) 9; they rev independently of each other. This doesn’t seem to be causing confusion, if it’s even noticed at all today.

[iii] I’ve always said that no statement can ever be made about NERC CIP that doesn’t have at least one exception. That’s certainly the case with this sentence. There are now three CIP standards that furnish exceptions to the rule that all new or revised CIP standards since 2016 are designed to protect BES Cyber Systems (BCS). First, CIP-012’s Purpose statement says that it protects “data transmitted between Control Centers”; it doesn’t mention BES Cyber Systems at all. Second, CIP-014’s purpose is to protect “Transmission stations and Transmission substations, and their associated primary control centers…”; again, there’s no mention of BCS. Finally, the recently approved CIP-015-1’s Purpose is “To improve the probability of detecting anomalous or unauthorized network activity in order to facilitate improved response and recovery from an attack.” Once again, BCS aren’t mentioned.

The reason these three standards don’t refer to BES Cyber Systems is so they can continue to be enforced even when BCS are replaced with another type of system (in other words, the requirements in these standards don’t apply to individual systems, so there’s no need to mention systems in the Purpose statements). On the other hand, standards CIP-002 through CIP-011 (as well as CIP-013) all apply to BCS; therefore, the Purpose statements will need to be rewritten when the concept of BCS is replaced with a new concept, or at least when the meaning of BCS is changed. As discussed later in this post, the meaning of “BES Cyber System” might change when the “Cloud CIP” standards are implemented.

[iv] The NERC Glossary definition is “One or more BES Cyber Assets logically grouped by a responsible entity to perform one or more reliability tasks for a functional entity.”

[v] Note that low impact cloud BCS have always been “legal” under CIP, although only a small number of low impact BCS have been implemented in the cloud, mainly in cloud-based low impact Control Centers.

[vi] Of course, this language is from the definition of BES Cyber Asset, not BES Cyber System. Since BCS is defined as a group of BCAs, a BCS “inherits” this property from its component BCAs.

[vii] For example, how could a CSP ever prove to a NERC entity that it had complied with CIP-007 Requirement R2 (patch management), since that requires tracking every device on which any part of a BES Cyber System has been installed at any moment throughout the three-year audit period - and since at any time, a single BCS might be split among 100 different servers in multiple data centers and might jump from server to server or data center to data center every minute of every day?

Note that, when it comes to SaaS providers, the story may be quite different, since no device-level information is required for compliance with the three BCSI requirements. Moreover, SaaS applications are often focused on particular industries, and their providers are usually accustomed to working with individual customers to meet their special needs. They are much more likely to bend over backwards to accommodate an electric utility customer than a platform CSP is.

Tuesday, July 1, 2025

Whose side is the NVD on, anyway?


Some of the most interesting things I learn while writing this blog are stories I hear that aren’t the subject of the post I was working on when I heard them – but which prove to be every bit as interesting as the post itself. This happened with my last post, in which I paraphrased an email from Bruce Lowenthal, Senior Director of Product Security for Oracle.

Before I go into detail on what Bruce said in his email, I want to make some background points for people who haven’t been following the NVD controversy as much as some of us have:

1.      CVE records are created by CVE Numbering Authorities (CNAs); these are organizations (over 400 today) that voluntarily report new vulnerabilities (CVEs) to the CVE Program. The most prolific CNAs are commercial software developers like Oracle, Microsoft, Schneider Electric, ServiceNow and HPE; these organizations report newly identified vulnerabilities in their own products. Other important CNAs include organizations like GitHub, Red Hat and the Linux Kernel; they mostly report vulnerabilities in open source projects.

2.      CPE names are machine-readable identifiers for software products. The NVD identifies vulnerable software products using CPE names. A CPE name includes fields like the name of the product, its version number, the vendor name, and others. Note that a software product cannot be identified through an automated search in the NVD (or another vulnerability database that is based on CPE) unless it has a CPE name. Given that there are now more than 300,000 CVE records in the NVD, it is no longer practical to rely on text searches to identify vulnerable products.

3.      Since software products and software vendors are referred to by many different names, by different parties and at different lifecycle phases, the NVD has always tried to maintain tight control of CPE names. Moreover, it has discouraged other parties (including the software developers themselves) from creating their own CPE names.

4.      When a CNA creates a new CVE record and submits it to the CVE.org database, they include a textual description of one or more products that are vulnerable to the new CVE, for example, “Product ABC version 2.7 from XYZ Corporation”. However, the CNA doesn’t normally create a new CPE name for the product, in deference to the NVD’s wishes.

5.      The NVD regularly downloads new CVE records from CVE.org and incorporates them into the NVD database. Of course, at that point the new records do not usually include CPE names.

6.      Until February 2024, an NVD staff member (usually a contractor to NIST) almost always created a CPE name for a vulnerable product described in a new CVE record within a few days of the record’s first appearance in the NVD. As already mentioned, that NVD staff member might use one of many product names, e.g., “Microsoft Word”, “Microsoft Office Word”, “Word”, “Office 365 Word”, “Microsoft Word Swahili Edition”, as well as one of many vendor names, such as “Microsoft”, “Microsoft, Inc.”, “Microsoft Inc”, “Microsoft EMEA”, “Microsoft APAC”, etc. The NVD has never published a formal methodology for choosing the product or vendor name to be included in a new CPE name. Thus, it is close to impossible to predict the product or vendor name that was included in the CPE name for a product, meaning it is very hard to predict the CPE name itself.

7.      For reasons that have never been adequately explained, the NVD stopped regularly creating CPE names for all new CVE records in February 2024. The result of this problem is that fewer than half of the new CVE records that have appeared in the NVD since February 2024 have CPE names. This means those records are essentially invisible to automated searches in the NVD (for example, searches using the command line), since the user searching for newly identified vulnerabilities in a software product will have to guess which product and vendor names were included in the CPE name for the product. This means almost any vulnerability search in the NVD will require some “manual” steps; it can never be fully automated.

8.      As my previous post pointed out, this is one of the main reasons why software end users are forced to devote so many resources to vulnerability management today.
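
To make the CPE discussion above concrete, here is a minimal Python sketch of the CPE 2.3 “formatted string” layout. It ignores the escaping rules in the full NIST specification, and the example names are purely illustrative, but it shows why two analysts who pick different vendor or product strings produce two different (and mutually unsearchable) CPE names:

```python
# Minimal sketch of the CPE 2.3 formatted-string layout (illustrative only;
# the full grammar, including character escaping, is defined by NIST).
CPE_FIELDS = [
    "cpe", "cpe_version", "part", "vendor", "product", "version",
    "update", "edition", "language", "sw_edition", "target_sw",
    "target_hw", "other",
]

def parse_cpe(cpe_string: str) -> dict:
    """Split a CPE 2.3 formatted string into its 13 named fields."""
    parts = cpe_string.split(":")
    if len(parts) != len(CPE_FIELDS) or parts[:2] != ["cpe", "2.3"]:
        raise ValueError(f"Not a CPE 2.3 formatted string: {cpe_string}")
    return dict(zip(CPE_FIELDS, parts))

# Two hypothetical CPE names an analyst might plausibly create for the same
# product. An automated search that guesses one will never match the other.
a = parse_cpe("cpe:2.3:a:microsoft:word:16.0:*:*:*:*:*:*:*")
b = parse_cpe("cpe:2.3:a:microsoft_inc:office_word:16.0:*:*:*:*:*:*:*")
print(a["vendor"], a["product"])   # microsoft word
print(a["vendor"] == b["vendor"])  # False -- the prediction problem
```

The point of the last line is the whole problem in miniature: CPE matching is exact-string matching, so any inconsistency in naming makes a record invisible to automation.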

In his email to me last week (which I had solicited), Bruce wrote that since last November, the NVD has not made it a priority to assign CPEs to newly identified CVE records within a few days, as they almost always did in the past. Instead, they now usually assign CPEs to CVE records that are already over a month old.  This makes no sense; CPE assignment should focus on CVEs published less than a month ago, not more than a month ago. And it should not focus at all on CVE records published more than three months ago.

And yet, the NVD seems to be focusing mostly on records that are more than three months old (for example, the NVD is still adding CPEs to records first published in 2024), while not focusing much at all on records that appeared in the current month. In fact, since last November, Bruce says there hasn't been a single month in which the NVD has assigned a CPE to even 50% of the CVE records published in that month.

For example, in June 2025 the NVD assigned a CPE to 194 CVE records that were originally published more than three months previously, but only to 356 CVEs that were published in May or June. Backfilling CPEs for the 194 old records was simply a waste of time, when at least 5,400 new CVEs published in May or June still didn’t have a CPE name.

Why is this important? In my previous post, I went on to say (with some rephrasing), “Bruce points out that this is an almost totally useless exercise, since software suppliers are much more likely to have patched older vulnerabilities than recently identified ones. If the software supplier releases new patches within a few weeks of the vulnerability being discovered and the user applies those patches quickly, most CVE records that are more than a month old are likely to be irrelevant. If a user organization has already patched a vulnerability, it obviously no longer poses a risk to the organization.”

In other words, the NVD now seems to be waiting to attach a CPE to a CVE record when the record is so old (3 months or older) that doing this won’t be very helpful at all. They do this instead of making the extra effort required to create the new CPE shortly after the CVE record is introduced, as they almost always did before February 2024. How does this make sense?

In his email to me, Bruce went beyond the statements I paraphrased above. Here are the other statements he made (with some paraphrasing). I’ve added my interpretations in italics:

  1. “NVD has not assigned CPEs to CVEs published in any month after November of last year.” Starting last November, the NVD seems to have stopped even trying to add CPE names to CVE records in the first month after the new record appears in the NVD. In other words, if they add a CPE name at all, they now do so more than a month after the new CVE record has appeared in the NVD.
  2. “NVD continues to assign CPEs to old CVEs…No one cares if CPEs are assigned to CVEs published in 2024.  Assigning a CPE name to a CVE published in 2024, in preference to a CVE published in the most recent month or two, is a waste of time and resources.”  The reason why Bruce says this is that even very slow vendors try to patch severe vulnerabilities within 2-3 months. Thus, if their customers apply the patches soon after they receive them, older vulnerabilities no longer pose a significant threat.
  3. However, newly identified vulnerabilities are sometimes not patched for a few months, either because the vendor hasn’t issued the patch or the customer hasn’t applied it. This means that customers need to be reminded about recently identified vulnerabilities in the products they use. Yet, it is precisely these vulnerabilities that customers aren’t being reminded about, since the lack of CPE names in recent CVE records means that customer searches won’t identify those CVEs as posing a problem to them.
  4. “Many people expect vendors to upgrade products with third party security fixes within a couple of weeks or a month.  However, because the NVD is now delaying assignment of CPE names to new CVE reports, vendors are struggling to issue some patches within three months of when the new report appeared.”
  5. Bruce’s point here is that a lot of people aren’t aware that software vendors often release patches created by third parties for a library that the vendor included in one of their products. If that library is affected by a recent CVE that has been reported, but that record doesn’t include a CPE name, the vendor may not learn that the library is affected by the new CVE; therefore, they won’t release a patch until after the NVD has created the CPE name for the vulnerable version of the library and added it to the CVE record. And that may be several months after the vulnerability was reported.

When the NVD started having their problems in February 2024, a lot of people in the vulnerability management community reflexively defended the NVD, because of their general good experiences with it; I was one of those people at first. However, as time dragged on and the NVD never provided a good explanation for their problems – beyond the fact that, like all of us, they would like to have more money – I began to lose patience with them. I especially lost patience when they once or twice announced that they had received more money and would have the problem licked by a ridiculously optimistic date, such as September 30, 2024. As already stated, their problem has only gotten worse, not better.

However, I must admit that I find the NVD’s current actions to be inexplicable. They have finally begun to add more CPE names to CVE records than they did earlier this year, but they seem to be restricting themselves to records that are too old to matter anymore. Meanwhile, they are neglecting the CVE records that do matter: the ones that are less than 30 days old.

NIST recently announced they are “auditing” the NVD. I hope they’ll point out in their audit findings that just focusing on old records that don’t matter anymore, rather than new records that do matter, doesn’t help anybody.


Friday, June 27, 2025

How can we get out of the CVE rat race?


Chris Hughes’ June 24 post pointed out what a lot of people involved in vulnerability management already knew: the number of new CVEs is growing rapidly, along with the cost of tracking and patching those CVEs. Those costs include money and staff time, but also the huge cost of diverting management attention from activities that grow the business to tedious vulnerability management tasks that grow nothing but aggravation.

The numbers behind Chris’s argument all came from a paper that was recently put out by Chainguard, a company that has quickly become one of the two or three thought leaders in the field of software security. I paid close attention to the post, primarily because I think highly of Chainguard. I consider anything they feel strongly about to be worthy of an investment of my time – and they certainly feel this is an important topic. I read both Chris’s summary of Chainguard’s paper and the paper itself.

The primary takeaway from both Chris’s post and the Chainguard paper was that organizations that decide to undertake vulnerability management on their own are making a big mistake. Instead, they should seriously consider outsourcing vulnerability management to a services vendor that specializes in it, because such a vendor can manage their vulnerabilities much more efficiently than any single software end user or software developer organization can. Of course, the only vendor mentioned in the paper is Chainguard itself.

I have no objection to a company putting out a paper that describes a problem that is likely to affect many of the people that read the paper, and then using that description as a lead-in to a sales pitch for their services. In fact, I doubt there are many technology companies that haven’t done exactly this. However, I think Chris could have mentioned that there are other ways to beat the vulnerability management rat race, other than just suggesting that readers outsource the whole problem to Chainguard or any other services vendor.

I say this because I realized almost three years ago that vulnerability management can never really be successful unless it can be fully automated. At the same time, I realized - when the SBOM Forum (now the OWASP SBOM Forum) published a white paper titled “A Proposal to Operationalize Component Identification for Vulnerability Management” - that various problems, like those described in pages 4-6 of the paper, have made a fully automated (or “operationalized”) vulnerability management process all but impossible today. Until those problems are addressed, vulnerability management will always be a slow, “manual” process – and outsourcing it may well be the only good solution to the problem.

The problems pointed out in the paper were all related to the fact that the only software identifier utilized in the National Vulnerability Database (NVD), “Common Platform Enumeration” or CPE, is inefficient in many ways. In the paper, we recommended that both the CVE Program (which is run by the MITRE Corporation, overseen by the Department of Homeland Security) and the NVD (which is run by NIST, part of the Commerce Department) start moving toward implementing use of the purl (package URL) identifier, along with CPE.

I’m pleased to report that the CVE Program is now laying the groundwork for implementing purl along with CPE, although the process may not be finished until the end of 2025 or even later. However, I’m not pleased to report something I’ve written about a lot: starting in February 2024, the NVD greatly curtailed a vital task it had performed for more than a decade – and has insisted that only it can perform – namely, identifying vulnerable products in CVE records by creating new CPE names for them.

Since CVE records usually just include a textual description of the product(s) affected by the vulnerability, if the record does not also contain a machine-readable CPE name to identify the product, this means that an automated search of the NVD – e.g., using the NVD’s search bar – will not identify the product(s) affected by a CVE. The user (or a service provider working for them) may need to conduct “manual” text searches to learn if a product they use is affected by any recently identified CVEs. Of course, given that over 50,000 new CVEs are being identified each year and that number is itself growing rapidly, any non-automated vulnerability management process will be unsustainable, even if it’s outsourced.
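
To make “automated search” concrete: the NVD exposes a public REST API (version 2.0) that can return the CVEs affecting a product, but only if the caller can supply the product’s exact CPE name. The sketch below builds such a query using only the standard library; the CPE name shown is just an example, and an actual call requires network access (and should respect the NVD’s rate limits).

```python
import json
import urllib.parse
import urllib.request

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def build_cve_query(cpe_name: str, results_per_page: int = 50) -> str:
    """Build an NVD API 2.0 URL that asks for CVEs matching one CPE name."""
    params = urllib.parse.urlencode(
        {"cpeName": cpe_name, "resultsPerPage": results_per_page}
    )
    return f"{NVD_API}?{params}"

def fetch_cve_ids(cpe_name: str) -> list:
    """Return the CVE IDs the NVD reports for the given CPE name.

    Note: this only finds vulnerabilities in records that actually contain
    the CPE name -- which is exactly what most recent records lack.
    """
    with urllib.request.urlopen(build_cve_query(cpe_name)) as resp:
        data = json.load(resp)
    return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]

# Example query URL (the CPE name here is illustrative):
print(build_cve_query("cpe:2.3:a:openbsd:openssh:9.6:*:*:*:*:*:*:*"))
```

The catch is in the docstring: a record without a CPE name can never be returned by a cpeName query, no matter how the search is tuned.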

How bad is this problem? According to Andrey Lukashenkov of Vulners and Bruce Lowenthal of Oracle, in 2024 fewer than half of CVE records had CPEs assigned, while in 2025 those numbers have gotten even worse. In fact, Bruce says that since last November, the NVD has completely stopped assigning CPEs to newly identified CVE records within a few days, as they almost always did in the past. Instead, they are now assigning CPEs exclusively to records that are more than a month old.

Bruce points out that this is an almost totally useless exercise, since software suppliers are much more likely to have patched older vulnerabilities; if the software supplier releases new patches within a few weeks (which the best ones do, whenever possible) and the user applies those patches quickly, most CVE records that are more than a month old are likely to be irrelevant. In other words, the NVD now seems to be waiting until attaching a CPE to a CVE record won’t be very helpful at all, rather than making the extra effort required to create the new CPE shortly after the CVE record was introduced, as they almost always did before February 2024.

Bruce also points out that in June 2025, CVE.org (where CVE records originate) has included the vendor name in about 94% of CVE records, whereas the NVD has only included vendors in about 27% of those records; since the beginning of 2025, the NVD’s record has been similar in every month. It’s no wonder that NIST recently announced an audit of the NVD; perhaps they can figure out what is going on, since I certainly can’t (and don’t tell me the NVD’s problems are due to a lack of funds. They announced last year that they finally had enough funds, but the problems have literally gotten worse since then, not better).

The bottom line is that, through almost all of 2024 and so far in 2025, any automated search of the NVD for vulnerabilities that apply to a software product has likely yielded fewer than half of the vulnerabilities that have been recently identified in the product. This means that users of those products may never know all the vulnerabilities to which their product is exposed, until it is too late. Of course, this is not a sustainable situation.

Introducing purl identifiers into CVE records (which will need to be done by the CVE Numbering Authorities – CNAs – that create the records) will help address this situation for open source software (as described in the 2022 SBOM Forum white paper linked earlier). However, since currently purl doesn’t address software not delivered through package managers (which includes almost all commercial software), a new procedure (or purl “type”) will need to be developed to identify commercial software products.
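For readers unfamiliar with purl, here is a hedged sketch of what one looks like. Per the purl specification, an identifier has the form pkg:type/namespace/name@version; the toy function below assembles one with basic percent-encoding. (The real canonicalization rules – e.g., per-ecosystem name normalization – are more involved than this, and the package names shown are examples only.)

```python
from urllib.parse import quote

def make_purl(ptype: str, name: str, version: str = None, namespace: str = None) -> str:
    """Assemble a simplified package URL string: pkg:type/namespace/name@version."""
    purl = "pkg:" + ptype.lower() + "/"
    if namespace:
        # Namespace segments (e.g., npm scopes) must be percent-encoded
        purl += quote(namespace, safe="") + "/"
    purl += quote(name, safe="")
    if version:
        purl += "@" + quote(version, safe="")
    return purl

print(make_purl("pypi", "django", "4.2.1"))            # pkg:pypi/django@4.2.1
print(make_purl("npm", "core", "13.0.0", "@angular"))  # pkg:npm/%40angular/core@13.0.0
```

Because the type, namespace, name and version come straight from the package manager’s own coordinates, a purl can be constructed (and matched) automatically – which is exactly the property CPE names lack when nobody assigns them.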

The OWASP SBOM Forum would like to start working on this problem soon, in conjunction with commercial software developers, as well as any other concerned parties. If your organization would consider donating to OWASP to support this effort (OWASP is a 501(c)(3) nonprofit organization), please email me.

Since CPE is currently the only identifier for commercial software products, and since most newer CVE records do not contain any CPE at all, “vulnerability management for commercial software” has become an oxymoron – it is literally becoming impossible. At a minimum, a fully implemented solution to this problem is two years away, and perhaps longer. However, the clock won’t even start running on a solution until we start developing one; it’s that simple.

Therefore, while I think all organizations that are concerned about vulnerabilities in the software they utilize or develop should consider outsourcing their vulnerability management work to a company like Chainguard as a short term solution, in no way should this be considered a final solution to the CVE rat race. The final solution is to replace the NVD with a truly robust Global Vulnerability Database that will support multiple types of software identifiers (especially CPE and purl), as well as multiple vulnerability identifiers (including CVE, OSV, GitHub Security Advisories, etc.).

In other words, vulnerability management needs to become a rigorous, fully automated process. It shouldn’t require an expert third party to fill in the automation gaps. However, since there is always more that can be done, even when today’s vulnerability management is fully automated, there will still be a need for third parties like Chainguard to push the boundaries beyond what is possible through automation today. I hope they continue to do that.

My blog is more popular than ever, but I need more than popularity to keep it going. I’ve often been told that I should either accept advertising or charge a subscription fee or both. However, neither of those options appeals to me. It would be great if everyone who appreciates my posts could donate a $20-$25 “subscription fee” once a year (of course, I welcome larger amounts as well!). Will you do that today?

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, June 19, 2025

I changed yesterday’s post on NERC CIP


Kevin Perry is the retired Chief CIP Auditor for the SPP Regional Entity and was co-chair of the NERC Standards Drafting Team that drafted CIP versions 2 and 3, as well as a member of the team that drafted NERC Urgent Action 1200, the voluntary predecessor to the NERC CIP standards (and still very much the foundation of those standards). I’ve known Kevin well since he introduced himself to me at an SPP meeting on CIP in, I believe, 2011.

Kevin and I had huge email discussions – in which our replies were each in a different color; we ran through all the primary colors and most of the secondary ones as well – about the many big issues that came up as the CIP version 5 standards were being drafted and implemented from 2011 to 2015. (CIP version 5 is essentially the version we still follow today; it’s where terms like BCS, ESP, PACS, EACMS, ERC and IRA were introduced into CIP.) He often ruined my day by telling me that the post I’d just taken almost a day to write was flawed and needed to be corrected. You’ll be pleased to know that we’re still having some of the same arguments we had then – of course, he continues to be very unreasonable in not accepting my positions (😊). The nerve of that guy!

True to form, he ruined my day today by telling me that the post I put up yesterday (which took at least eight hours to write on Tuesday and Wednesday, and which turned out to be the 1,200th post I’ve written since I started this blog in 2013) had a serious flaw. However, in this case I can’t be blamed for it – it turns out the NERC auditors made a decision I didn’t know about until I received Kevin’s email. Of course, I wouldn’t expect the auditors to tell me about this, but Kevin knows most of them very well (he mentored a number of them when they worked for him at SPP RE).

You can learn all the gory details (or most of them, anyway) in the italicized text I’ve inserted into yesterday’s post. The main takeaway is this: the NERC Regional auditors decided there is no need for a “CMEP Practice Guide” to remove what many of us believed might become a “showstopper” impediment to NERC entities using SaaS with BES Cyber System Information (BCSI). But they say this because they think the problem was already adequately dealt with – specifically, by a document that NERC endorsed in December 2023 as Implementation Guidance for CIP-004-7 and CIP-011-3, the two revised standards that came into effect on 1/1/2024 and were expected – prematurely, as it turns out – to lead to NERC entities feeling comfortable using SaaS with BCSI.

Thus, the moral of yesterday’s story is unchanged: SaaS providers (and software developers who want to start delivering their software as a service) shouldn’t be afraid of using BCSI with their products, and NERC entities with high and/or medium impact BES Cyber Systems shouldn’t be afraid of giving SaaS providers access to their BCSI. However, both the SaaS provider and the NERC CIP customer need to keep in mind that they will still have to provide the required compliance evidence for CIP-004-7 R6, CIP-011-3 R1 and CIP-011-3 R2.[i]



[i] I didn’t emphasize this point in my post yesterday. I probably will in the future, although perhaps just at a high level. If you would like to discuss this topic with me, let me know.

Wednesday, June 18, 2025

NERC CIP in the Cloud: Is it time to start using SaaS?

I’ve written at least a few times about the difference between SaaS (software-as-a-service, although “software that runs in the cloud” is a more accurate description) and BCS (BES Cyber Systems) that are deployed in the cloud. For those unfamiliar with NERC CIP: BCS are the systems that the 13 current NERC CIP standards, CIP-002 through CIP-014, are there to protect. These are the systems whose loss or compromise would impact the Bulk Electric System (BES) within 15 minutes, although the impact will usually be instantaneous.

Note from Tom 6/19: Kevin Perry, former Chief CIP Auditor for the SPP Regional Entity and co-chair of the team that drafted CIP versions 2-4, pointed out a couple of problems with this post as I wrote it yesterday. I've discussed them in italics below, although I decided not to change what I wrote yesterday - just point out where I was wrong.

Both SaaS and cloud BCS consist of software running in the cloud. They both provide advice and/or monitoring data to their operators. However, since the loss of a BCS, whether deployed in the cloud or not, will by definition impact the BES within 15 minutes, this means a BCS can provide more than advice; it can provide control or real-time monitoring. In other words, a BCS always has some sort of connection with a device (like an electronic relay that controls a circuit breaker) that impacts or monitors the power grid. Most importantly, the BCS needs to directly control, or monitor the output of, the grid-connected device. If a system’s impact on the grid is dependent on a human being taking some action first, then it’s not a BCS. 

Note: Kevin Perry pointed out that the converse of this last statement isn't true. That is, if a system's (negative) impact on the grid is dependent on a human being not taking a particular action (and that negative impact occurs within 15 minutes), then it's a BCS. He used the example of a SCADA system that notifies an operator when there's a problem that needs attention, so the operator can take actions to fix the problem. If that system is compromised and doesn't notify the operator of a problem - and that lack of notification leads to a negative impact on the BES within 15 minutes - then clearly the system can have a 15-minute BES impact and is therefore a BES Cyber Asset and also part of a BES Cyber System. 

This means that the same software product could be deployed in the cloud in two quite different ways. Let’s use the example of software that monitors power flows and can detect a dangerous anomaly in real time. If the software simply sets off an alarm and warns the operator of the anomaly – on the assumption that the operator will perform the steps necessary to protect the BES - then the software is SaaS. However, if the software is directly connected to an electronic relay in a substation, which itself directly controls the circuit breaker, then it may be a BCS. This is because it affects the BES within 15 minutes, without requiring human intervention.

When I mentioned earlier that the purpose of the CIP standards is to protect BES Cyber Systems, I omitted the fact that the CIP standards also protect BES Cyber System Information (BCSI)[i]. Since there are far more than ten times as many requirements (and “requirement parts”, NERC’s term for what are usually called subrequirements) that apply to BCS as there are for BCSI, the main emphasis is almost always on protecting BCS. A BCS consists of one or more hardware devices and the software that runs on those devices.

However, when it comes to the cloud, hardware essentially disappears. Of course, everyone knows that the software in the cloud runs on hardware, but the big advantage of using the cloud is that the end user doesn’t need to ensure protection of the hardware – just the software. There are two basic ways to utilize software installed in the cloud.

1. One way is to install and manage the software yourself (although in many cases the OS and other supporting software products are managed by the CSP). This is how BES Cyber Systems would be deployed in the cloud. Currently, only a small number of low impact BCS are installed in the cloud, since the CIP standards don’t pose any impediment to installing low impact BCS there.

However, it is close to impossible to “legally” install medium and high impact BCS in the cloud. This isn’t because the current CIP requirements directly forbid it, but because it would be literally impossible for the CSP (by which I mean a “platform” CSP) to provide the required evidence of compliance with requirements like CIP-007 R2 patch management and CIP-010 R1 configuration management. Any NERC entity that utilizes cloud-based BCS but can’t provide the required evidence of compliance with all current CIP requirements that apply to BCS is likely to be hit with a lot of violations. This is almost certainly why I have never heard of a NERC entity that has knowingly installed medium or high impact BCS in the cloud.

2. On the other hand, since SaaS consists of just software and is completely abstracted from the hardware it runs on, the only CIP requirements that apply to a NERC entity’s use of SaaS are the three requirements (along with a total of seven requirement parts) that apply to BCSI: CIP-004-7 R6, CIP-011-3 R1 and CIP-011-3 R2. NERC entities with high or medium impact BCS are free to use SaaS all they want, but if the information they store and/or utilize meets the BCSI definition, they must comply with those requirements.

In the previous versions of CIP-004, several requirement parts were worded so that they effectively prohibited usage in the cloud. This wasn’t intentional, of course. However, until recently the NERC community and FERC considered the cloud to be too untrustworthy to be considered as a home for anything having to do with the power grid. Therefore, it was never even considered to be an option for systems subject to CIP compliance. CIP-004-6 (the version of CIP-004 that was in effect until January 1, 2024) didn’t make any provision for encrypting BCSI at rest, since all storage locations for physical or electronic BCSI were assumed to be under the direct control of the NERC entity. For this reason, anyone with physical or logical access to the server(s) where BCSI was stored was considered to have access to the BCSI itself, whether encrypted or not.

Fortunately, this attitude was changing by 2018, when a new NERC Standards Drafting Team was constituted to fix the wording problems in CIP-004. The team was tasked with making it clear that encryption of BCSI makes it inaccessible to anyone without access to the decryption key, even if they have physical and/or electronic access to the server(s) where the BCSI is stored.

Besides making changes to CIP-004, the drafting team also changed CIP-011 R1, which requires that NERC entities with medium and/or high impact BCS develop an Information Protection Program for all BCSI, no matter where it resides or is used. The changes (to the Methods section of R1.2) made it clear that encryption – along with some other methods of data obfuscation – renders BCSI unusable to anyone that does not have access to the decryption key. Encryption wasn’t considered to be a protection in previous CIP versions; in fact, encryption was never even mentioned in the CIP standards until CIP-011-3 came into effect (along with CIP-004-7) on January 1, 2024.

In 2018, I thought that the changes needed to fix the BCSI-in-the-cloud problem would be extremely complicated. However, the SDT came up with an ingenious solution that involved minimal changes, including:

1.      Modifying the existing CIP-004 requirements to remove the language that effectively prohibited cloud storage of BCSI;

2.      Adding a new requirement, CIP-004-7 R6, that introduced a concept called “provisioned access”. R6 states that provisioned access occurs when “…an individual has both the ability to obtain and use BCSI. Provisioned access is to be considered the result of the specific actions taken to provide an individual(s) the means to access BCSI (e.g., may include physical keys or access cards, user accounts and associated rights and privileges, encryption keys).” (my emphasis)

3.      In other words, someone with physical or electronic access to a server that contains BCSI, who does not also have access to the decryption key, does not have provisioned access to the BCSI. By the same token, someone with access to both the server that stores the BCSI and the key does have provisioned access even if the BCSI is still encrypted. This is because the person could decrypt the data if they wanted to.

4.      Modifying the wording of CIP-011-3 R1.1 and R1.2 to separate identification of BCSI (in R1.1) from protection of BCSI (in R1.2).

5.      Modifying the Methods section of R1.2 to add a new category of BCSI called “off-premise BCSI”, and to make it clear that encryption is an option for protecting that new category (this was also the first time that encryption was mentioned in the CIP standards, as well as the first time that use of the cloud for any purpose was mentioned).  
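The “provisioned access” test described in items 2 and 3 boils down to a simple conjunction, which can be sketched as follows (a toy illustration of the logic, not an official interpretation of CIP-004-7 R6):

```python
# Toy sketch: provisioned access requires BOTH the ability to obtain BCSI
# (e.g., access to the server where it is stored) AND the ability to use it
# (e.g., access to the decryption key). Encryption status alone is irrelevant.

def has_provisioned_access(can_reach_storage: bool, has_decryption_key: bool) -> bool:
    """True only when the individual can both obtain and use the BCSI."""
    return can_reach_storage and has_decryption_key

print(has_provisioned_access(True, False))  # False: server access, but no key
print(has_provisioned_access(False, True))  # False: key, but no way to obtain the data
print(has_provisioned_access(True, True))   # True: could decrypt if they wanted to
```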

The two revised standards, CIP-004-7 and CIP-011-3, including the changes described above, came into effect on January 1, 2024. Since the reason for making these changes was to enable storage and use of BCSI in the cloud, and since use by SaaS applications was the primary intended use of BCSI in the cloud, I and some others in the NERC community thought that NERC entities would be happy that they could finally move computationally intensive data analysis tasks that required some use of BCSI to the cloud.

Even more importantly, I thought that vendors of the software that enables those tasks would be even happier. After all, instead of having to individually support each of their customers using their software with on-premises hardware, they could become a SaaS provider. Thus, they would just need to maintain one big instance of the software[ii] for all their customers.

However, what happened was quite different from what I expected: After being told for years that storing or using BCSI in the cloud was verboten, few NERC entities were ready to take the leap to using BCSI in a SaaS application – absent strong encouragement from NERC and/or their own Region(s). But that encouragement, strong or otherwise, was as far as I know totally absent. For example, even though some previous changes to the CIP standards have been accompanied by multiple NERC webinars and presentations at Regional Entity meetings, I have yet to hear of one of these happening that dealt with the two revised standards that came into effect on 1/1/2024.

But there is another explanation for why NERC entities have been reluctant to start using SaaS that requires use of BCSI: Late in 2023, some NERC Regional Entity staff members realized that there was a potential “showstopper” problem regarding the wording of CIP-004-7 R6. I described that problem in detail in this post in January 2024, but here is a quick summary:

1.      BCSI must be encrypted from the moment it is transmitted to the cloud. It needs to remain encrypted through when it is stored in the cloud and utilized in a SaaS application.

2.      Few SaaS applications can make use of encrypted data. Therefore, some person who is an employee or contractor of the SaaS provider will need to decrypt the BCSI and “feed it in” to the application. That person will need to have access to both the BCSI and the decryption key. At first glance, that appears to meet the “definition” of provisioned access included in the first section of R6: “…an individual has both the ability to obtain and use BCSI.”

3.      Requirement Part CIP-004-7 R6.1 makes it clear that provisioned access must be authorized by the NERC entity. Therefore, if Staff Member Y of the SaaS provider needs to feed BCSI from Electric Utility ABC into the application, ABC will have to authorize the provider to provision Y with access to their BCSI. Similarly, if Utility 123 needs to have their BCSI fed into the same SaaS application, they will also need to authorize provisioning of the staff member that does this, even if that staff member is also Y.

4.      Since SaaS is used day and night and since staff members get sick, take vacations and are transferred, there will always need to be multiple people with provisioned access to the BCSI of each NERC entity customer. If the provider has 100 NERC entity customers and at any time there are six staff members who need access to each customer’s BCSI (corresponding to three 8-hour weekday shifts and three 8-hour weekend shifts), that means there will need to be up to 600 individuals with provisioned access to BCSI at any one time.

5.      For each of these individuals, the supplier will need to provide evidence to each NERC entity customer that they are in continual compliance with CIP-004-7 Requirement R6 Parts R6.1, R6.2 and R6.3. Needless to say, this will require a huge amount of paperwork on the part of the SaaS provider; many (if not most) SaaS providers will be unwilling to undertake this responsibility. Therefore, it isn’t surprising that no NERC entity I know of decided to take the leap to using SaaS with BCSI after 1/1/2024.[iii]
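The arithmetic behind the paperwork burden in items 4 and 5 can be sketched as follows (the customer and staffing numbers are the hypothetical ones used above):

```python
# Back-of-envelope arithmetic for the provisioned-access paperwork described above.
customers = 100          # hypothetical NERC entity customers of the SaaS provider
staff_per_customer = 6   # three 8-hour weekday shifts + three 8-hour weekend shifts
requirement_parts = 3    # CIP-004-7 Parts R6.1, R6.2 and R6.3

provisioned_grants = customers * staff_per_customer
evidence_items = provisioned_grants * requirement_parts

print(provisioned_grants)  # 600 provisioned-access grants at any one time
print(evidence_items)      # 1800 separate streams of compliance evidence
```

Even with these modest assumptions, the provider is maintaining continual compliance evidence on 1,800 individual-by-customer-by-requirement combinations, which makes the providers’ reluctance easy to understand.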

Fortunately, help is on the way with this problem. A group I am a (small) part of, the informal NERC Cloud Technology Advisory Group (CTAG), discussed the above problem at length and realized that the BCSI access required for a SaaS provider staff member does not need to be explicitly provisioned; therefore, Requirement CIP-004-7 R6 does not apply in the use case described (which is fundamental to almost all SaaS use, of course).

However, knowing this might not in itself be helpful; it might simply be filed away in the “nice to know, but useless” category if the only way to change the situation were to change one of the NERC CIP requirements (presumably CIP-004-7 R6). I say this because changing an existing CIP requirement can easily take three years or even longer, starting from the day the change is first proposed to the NERC Standards Committee (in a Standards Authorization Request, or SAR).

However, no change to a NERC CIP requirement (or definition) is needed. The CIP auditors (who are all staff members of one of the six NERC Regional Entities) occasionally produce “CMEP[iv] Practice Guides” that provide direction to audit staff on “approaches to carry out compliance monitoring and enforcement activities.” Our group turned over our findings – which are based in part on collateral NERC documents – to the committee in charge of drafting new CMEP Practice Guides. While that outcome is not guaranteed, we are optimistic they will develop a new Guide for BCSI that makes a recommendation on this issue (the last Guide for BCSI was published in 2019; it is based on the previous version of CIP-004, so it is now obsolete).

Of course, it may be six months (or even longer) before a new CMEP Practice Guide is published. Does this mean that software vendors with current CIP customers should wait six months before they start offering SaaS services to those customers? Or that current SaaS providers need to wait six months before they start approaching NERC entities that are subject to CIP compliance about using their applications? Or that NERC entities that would like to move certain data intensive applications to the cloud should wait six months before they even start talking to SaaS providers about doing that?

In all these cases, the answer is no. The important thing to remember is that the concerns that were raised in late 2023 about provisioned access being necessary for SaaS application provider staff members were in retrospect overblown. The “default position” should always have been that provisioned access was not necessary.

Tom 6/19: Kevin also pointed out a recent development that I didn't know about. Without going into a lot of detail, it seems the NERC Regional auditors, who would need to prepare a CMEP Practice Guide, don't think a new one is needed in this case. In effect, they say that statements in a document that NERC endorsed in December 2023 as Implementation Guidance for CIP-004-7 Requirement R6 and CIP-011-3 Requirement R1 sufficiently undercut the concerns that were being raised about the wording regarding "provisioned access" in the first part of CIP-004-7 Requirement R6.

Moreover, they said the question is more properly dealt with by the NERC Responsible Entity in the Information Protection Program that is mandated by CIP-011. In other words, they agree that the problem I described above shouldn't be a concern in most cases, but they also don't think a CMEP Practice Guide is required to explain this. Auditors don't like to waste time, especially when it's their own!

Thus, the new CMEP Practice Guide will do nothing more than restore the status quo before the concerns arose. It does not in any way amount to a change in how CIP requirements are normally interpreted. After all, had the concerns been correct, it would effectively have meant that use of BCSI with SaaS was still completely off limits for NERC entities with high and/or medium impact BES Cyber Systems, and the work of the drafting team from 2019 to 2023 was all for naught. 



[i] The NERC definition of BCSI is “Information about the BES Cyber System that could be used to gain unauthorized access or pose a security threat to the BES Cyber System. BES Cyber System Information does not include individual pieces of information that by themselves do not pose a threat or could not be used to allow unauthorized access to BES Cyber Systems, such as, but not limited to, device names, individual IP addresses without context, ESP names, or policy statements. Examples of BES Cyber System Information may include, but are not limited to, security procedures or security information about BES Cyber Systems, Physical Access Control Systems, and Electronic Access Control or Monitoring Systems that is not publicly available and could be used to allow unauthorized access or unauthorized distribution; collections of network addresses; and network topology of the BES Cyber System.”

[ii] Of course, in practice I’m sure that SaaS providers maintain multiple instances of their software in the cloud, for various reasons. But that is still a big improvement over maintaining a single instance for each customer, as they did before they moved to SaaS delivery of their product.

However, I’m deliberately overlooking the “multi-tenant problem”. This arises when a standalone enterprise software product that includes its own database is moved without modification to the cloud – with the result that users from different organizations and even different countries might end up sharing a single database. Even though there are protections between different users in the database, they are not likely to be equivalent to the protections that exist when each organization in each country operates its own database instance. I hope to address this topic soon.

[iii] While this sentence is accurate, it’s misleading. The fact is, there are at least one or two SaaS applications that NERC entities have been using to document CIP-010 R1 (configuration management) compliance since 2016 or 2017; of course, configuration data on BES Cyber Systems will almost certainly include BCSI. It is likely those NERC entities are still using those SaaS applications today.

[iv] CMEP stands for Compliance Monitoring and Enforcement Program.

Saturday, June 14, 2025

Will NERC have to audit the CSPs?

 

The NERC Standards Drafting Team that is working on what I call the “cloud CIP” problem seems to be making progress on CIP-016. This is a new standard that will include requirements that “apply” to the cloud service providers (CSPs) – although compliance responsibility will fall on the NERC entity, of course. When CIP-016 comes into effect (which I continue to believe will be 2031, give or take a year), most of the existing CIP standards will be changed in some way as well.

I believe we’re at least 4-5 years away from having all the required changes to the CIP standards (and also to the NERC Rules of Procedure – see below) drafted, balloted (multiple times) and approved by NERC and FERC. Thus, it’s too early to be overly concerned about the details of the new CIP requirements that are being discussed today. However, I’m pleased to see that the SDT is at least starting to debate draft requirements, since doing that will lead them into confronting the big issues they will need to settle before they can even think of drafting the final requirements.

One of the biggest of those issues is that of requirements that apply to the CSPs. In this regard, the SDT is facing a situation almost exactly like the one faced by the SDT that drafted what became CIP-013-1. That SDT knew that the new standard should require good cyber behavior on the part of third-party suppliers of BES Cyber Systems, but at the same time they knew that neither NERC nor FERC has any jurisdiction over those suppliers; any new requirements would have to apply to the NERC entities themselves.

However, they also knew that FERC had made clear in their order that the new standard couldn’t dictate contract terms to NERC entities. So, how were they going to require the entities to ensure their suppliers put in place adequate cybersecurity protections?

FERC had said they didn’t want NERC to develop “one size fits all” requirements that take no account of the individual situation of either the NERC entity or the supplier. While FERC didn’t explicitly use the word “risk-based”, they were clearly asking NERC to develop risk-based[i] requirements.

This is the course the CIP-013 SDT took; in fact, they took it to a fault. CIP-013-1 R1 Part R1.1 required the NERC entity to develop a “supply chain cyber security risk management plan” (SCCSRMP, although that acronym never caught on) that included “process(es) for the procurement of BES Cyber Systems to identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services resulting from…procuring and installing vendor equipment and software…”

In other words, CIP-013-1 left it to the NERC entity to “identify and assess” risks posed by each supplier of BCS; left unsaid, but certainly intended, was the implicit requirement to work with the supplier to remediate any risks revealed by the entity’s assessment (e.g., in the supplier’s answers to the questions in a questionnaire).

However, one of the big problems with CIP-013 was that R1.1 didn’t provide any suggestion of what areas of supplier risk might be addressed in the SCCSRMP - the risk of compromise due to improper identity and access management controls, the risk of compromise due to inadequate remote access security, etc. As a result, I hear that a large percentage of NERC entities (with high or medium impact BES environments) simply considered the six items in CIP-013-1 Requirement R1 Part R1.2 to be indicators of the only risks that needed to be addressed in CIP-013; while those six items certainly address real risks, they were never intended to be the only ones the entity should be concerned about.

By contrast, the Cloud CIP SDT seems to be writing requirements for the CSPs directly. Of course, they understand that compliance with the requirements needs to be the responsibility of the NERC entity; however, it won’t be hard to rewrite the requirements so that the entity is responsible for making sure their CSP follows them. It seems that the requirements they’re developing now aren’t risk-based, but they are objective-based (which is NERC’s preferred term). Since you can’t achieve an objective without taking account of risk, I consider the two terms to be roughly equivalent.

The requirements seem to be written under the assumption that each NERC entity will need to negotiate with its CSP (by which I mean their platform CSP – i.e., one of the big boys) regarding what evidence they will provide to the entity come audit time. However, it’s highly unlikely that the platform CSPs will be willing to negotiate with individual NERC entities. After all, their business model is based on offering the same hamburger to every customer, not having a discussion with each one about what they want on it.

On the other hand, if the CSPs are going to have to “comply” with the requirements in CIP-016, there will need to be some compliance assessment process for them. It probably won’t be a true audit, but it will be a review of evidence of compliance with each requirement. However, it’s very likely that each platform CSP will demand to be audited by just one organization, not 100.

This is why I’ve already said that I see no alternative to having NERC (or a third party engaged by NERC) conduct an audit of each CSP on behalf of all NERC entities. Of course, the audit will cover only whatever CIP standard(s) specifically target CSPs (the current CIP standards will hopefully survive virtually unchanged, but for on-premises systems only). NERC will gather all the evidence from each CSP and make sure it’s complete and relevant, but it won’t pass judgment on whether the CSP is in compliance with each requirement.

Instead, NERC will pass the evidence from each CSP to the entities that use that CSP for their medium or high impact BES systems. It will be up to each NERC entity to determine whether their CSP has complied with each of the requirements; if they determine their CSP has failed to comply with more than a few requirements, it will be up to them to decide whether, and under what conditions, they will continue to use that CSP.

For example, if a CSP has multiple deficiencies, the entity will need to decide whether to a) switch to another CSP, b) stay with this CSP but work with them to mitigate the deficiencies, or c) ignore the deficiencies entirely and keep using the CSP. All three of these are acceptable courses of action for a NERC entity, but the entity will need to justify its decision to the CIP auditors.

Most importantly, NERC will not issue (or deny) a certification for a CSP based on its audit results; if NERC did that, it would most likely be a violation of antitrust law. Beyond that, the decision whether to use, or to continue using, a CSP will always depend on many factors that are specific to the NERC entity. There is no way that NERC or any other organization could make the decision for a NERC entity whether to contract with a particular CSP.

Therefore, I think it is very likely that the SDT will conclude at some point (although it might take them 1-2 years to get there) that NERC will have to conduct the audits of the platform CSPs. However, there’s one huge fly in this ointment: This whole process is almost certainly not allowed (either explicitly or implicitly) by the current NERC Rules of Procedure. This means the RoP will need to be revised before the new or revised Cloud CIP standards come into effect.

What’s the process for changing the Rules of Procedure? If there is a defined process, it must be in the Rules of Procedure now. If there isn’t, one will likely have to be drafted, approved (by both NERC and FERC), and inserted in the RoP; only then will it be possible to follow the new process and make whatever changes are required to permit NERC to audit the CSPs. Of course, those changes will also have to be approved by both NERC and FERC.

All of this is to say that granting NERC the authority to audit the CSPs will most likely require multiple years (two at a minimum, but perhaps three to four); this is why I think that 2031 is, if anything, an overly optimistic estimate of when the Cloud CIP standards will be enforced.[ii]

Since the Rules of Procedure changes will likely need to be handled by a separate group within NERC (an RoP drafting team, perhaps?), it would speed up the whole process if the RoP changes were pursued at the same time as the changes to the CIP standards. However, I don’t believe anyone is even discussing RoP changes now, so we can’t count on that happening. This is why I continue to believe that, barring a special intervention by the NERC Board, it will be at least 5-6 years before the “Cloud CIP” standards are implemented.
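For readers who want to see where an estimate like this comes from, here is a rough back-of-the-envelope sketch of the timeline arithmetic. The component durations are taken from the discussion above and from endnote [ii] (drafting time, ballot cycles, FERC approval, implementation period, RoP revision); the specific numbers, especially the 18-month implementation period and the two-year RoP effort, are my own illustrative assumptions, not anything NERC has published.

```python
# Back-of-the-envelope estimate of the Cloud CIP timeline.
# All durations are illustrative assumptions, in months.

MONTHS_PER_YEAR = 12

sdt_initial_drafting = 24     # at least two years for the SDT to draft and first-ballot
ballot_cycles = 3             # comment/revise/re-ballot cycles after the first ballot
months_per_cycle = 3          # bare-minimum length of each cycle
ferc_approval = 12            # at least one year for FERC approval
implementation_period = 18    # "more than one year"; 18 months assumed here
rop_revision = 24             # assumed two-year Rules of Procedure effort

drafting_total = sdt_initial_drafting + ballot_cycles * months_per_cycle

# If the RoP work waits until the standards are drafted, it adds to the
# critical path; if it runs in parallel with drafting, it may not.
serial_total = drafting_total + rop_revision + ferc_approval + implementation_period
parallel_total = max(drafting_total, rop_revision) + ferc_approval + implementation_period

print(f"RoP work done serially:    ~{serial_total / MONTHS_PER_YEAR:.2f} years")
print(f"RoP work done in parallel: ~{parallel_total / MONTHS_PER_YEAR:.2f} years")
```

Under these assumptions, the serial path comes to about 7.25 years and even the parallel path to about 5.25 years, which is why pursuing the RoP changes alongside the standards work matters, and why a 5-6 year horizon looks like the floor rather than the ceiling.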

My blog is more popular than ever, but I need more than popularity to keep it going. I’ve been told I should either accept advertising or charge a subscription fee, or both. However, neither of those options appeals to me. It would be great if everyone who appreciates my posts could donate a $20-$25 “subscription fee” once a year (of course, I welcome larger amounts as well!). Will you do that today?

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] NERC’s term for risk-based is “objectives based”. I think they’re effectively the same thing, since it’s impossible to achieve any objective without taking risk into account.

[ii] This includes an estimate of at least one year for FERC to approve the new or revised standards, plus an implementation period of more than one year. These are in addition to at least two years for the SDT to draft, ballot, respond to comments, and revise the standards; that whole cycle will most likely need to be repeated three times after the first ballot, as it has with all major changes in CIP in the past. At a bare minimum, each cycle will take three months.

I will point out that there is some likelihood that pressure will build on NERC to exercise an “in case of emergency, break glass” provision now included in the Rules of Procedure. This allows the Board of Trustees, in an emergency, to order an expedited process to develop new standard(s) that will bypass the normal process. Since there’s currently not even any discussion about doing this, it’s safe to say that even this scenario will result in multiple years passing before full cloud use by NERC entities for their OT environments is permitted by the CIP Reliability Standards.