Tuesday, April 23, 2024

NERC CIP: My podcast on CIP and the cloud


Industrial Defender recently contacted me about doing a second podcast (the first was a couple of years ago) on a NERC CIP topic of my choosing. I jumped at the chance, since I consider the fact that NERC entities with medium and high impact CIP environments are in essence “forbidden” to utilize the cloud for some of their most important reliability and security workloads to be the biggest NERC CIP-related problem facing the power industry today.

Moreover, I have heard from multiple knowledgeable people in the NERC Regions that this problem is rapidly getting worse and that, if nothing is done about it in the next 2-3 years, there will likely be negative impacts to the security and reliability of the Bulk Electric System – due to the increasing number of software and services vendors that have announced they will soon only support cloud users.

The podcast was just posted, including a complete (and accurate) transcription of my conversation with Ray Lapena of ID. I’d like to hear any comments you have about the podcast.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Friday, April 19, 2024

Would you like to help figure out the best path(s) forward on vulnerability databases?

Many organizations in the software supply chain security community have assumed for years that the National Vulnerability Database (NVD), despite its various problems, is the de facto international standard for vulnerability databases. They also believe it can be relied on going forward to be the “bread and butter” database that meets most of the needs for most of the organizations involved with the community. However, the seeming inability of the NVD to fulfill that role since mid-February 2024, and the fact that there hasn’t even been an attempt to explain what the problem is, have made it clear this is no longer a good assumption.

In the wake of this event, there are three questions that need to be answered. The Vulnerability Database Project of the OWASP SBOM Forum proposes to develop answers to these three questions, based on discussions that will be open to all parties concerned with software supply chain security in general and vulnerability management in particular. If you would like to participate in weekly discussions and help create a document addressing the first two of these questions, and/or if your organization can support this effort through a donation to OWASP (a 501(c)(3) non-profit corporation), please drop me an email at tom@tomalrich.com.

The first question is, “What options are available for NVD users, both to replace services they have been counting on from the NVD and to go beyond what the NVD has traditionally offered?” There are many other vulnerability databases available, both free and paid. These provide one or more of the services the NVD has provided, but also go beyond the NVD in various ways. Questions include:

1.        What are these other databases?

2.        How do their offerings map to what the NVD has been offering?

3.        What are their offerings that go beyond the NVD?

4.        In what ways do they differ from the NVD, for example in vulnerability identifiers supported (CVE, OSV, GitHub Security Advisories, etc.), software identifiers supported (CPE, purl, or other), and types of products supported (open source software projects, proprietary products, intelligent devices – as well as sub-categories of these)?

5.        Given that most of these alternative databases do not cover the entire range of what the NVD covers, how can NVD users “mix and match” the different offerings so that, depending on their individual needs, they end up with at least the same level of functionality they previously received from the NVD and hopefully a lot more? And without ending up with a hopeless mishmash of incompatible vulnerability data?

6.        Given that the CVE.org database is the original source of most of the data in the NVD and that its infrastructure is much more robust and modern than the NVD’s, how hard would it be for current NVD users to switch over to using CVE.org as their primary vulnerability database – as one major NVD user has recently done? What could be added to CVE.org to facilitate this switch, such as a more end-user-friendly front end? What would be the advantages of using CVE.org over the NVD, including much-sooner support for purl and the fact that the originators of all CVE data – the 300+ CVE Numbering Authorities (CNAs) – are part of CVE.org? 

The second question is, “What steps should the US government take with respect to this problem?” These might include:

1.        Doing nothing and hoping the NVD has a miraculous recovery.

2.        Actively investing in the NVD’s infrastructure, which will probably require a complete rebuild from scratch.

3.        Reverse the current arrangement, in which the NVD is the primary vulnerability database and CVE.org is just an “alternate data provider (ADP)” to the NVD, so that CVE.org becomes the primary database and the NVD an ADP.

4.        Get out of the vulnerability database business and leave that to the private sector, while maintaining CVE.org as by far the leading provider of vulnerability data – including investing heavily in the CNAs, given their irreplaceable role in the vulnerability identification process worldwide. 

The third question is, “What is the best long-term solution to the vulnerability database problem worldwide?" While there can be many views, Tom believes the following are “self-evident truths” (with apologies to Thomas Jefferson):

1.        Requiring a single uber vulnerability database (“One Database to Rule Them All”) that will somehow gather, harmonize and synchronize data from all other databases is a concept whose time has come and gone. There are many vulnerability databases operated in different ways by different organizations. Let them all continue to operate as they always have. Instead, there needs to be an AI-powered central “switching hub”, which might be called the Global Vulnerability Database (GVD). Queries to the GVD could use any major type of software and vulnerability identifier; the hub would route each query to the most appropriate database or databases and route the response(s) back to the end user. It would also harmonize the responses when needed.

2.        Of course, the GVD needs to be a truly global effort. It cannot be under the control of any single government or private sector organization, although all governments and organizations will be welcome to contribute to it (Tom believes that raising funds to create and maintain this “database” won’t be hard at all, given that nobody but US taxpayers is currently allowed to contribute to the NVD. It isn’t at all surprising that the NVD is chronically underfunded, despite being used worldwide).

3.        Developing the GVD will require a nonprofit organization to manage the process. When (and if) the GVD is running smoothly, operation of the database might be turned over to an organization like the Internet Assigned Numbers Authority (IANA), which manages IP addresses and DNS. Otherwise, the nonprofit organization would continue to operate the GVD in perpetuity.
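The “switching hub” idea above can be illustrated with a toy sketch. Everything here is a hypothetical assumption for illustration: the routing table, the database names, and the idea that identifier syntax alone determines the route are not part of any real GVD design.

```python
# Toy sketch of the proposed GVD "switching hub": inspect the identifier
# in a query and forward it to the database(s) most likely to answer it.
# The routing table is a hypothetical illustration, not a real design.
import re

ROUTES = [
    (re.compile(r"^CVE-\d{4}-\d{4,}$"), ["CVE.org", "NVD"]),
    (re.compile(r"^GHSA-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}$"), ["GitHub Advisories"]),
    (re.compile(r"^pkg:"), ["OSV"]),        # purl -> open source databases
    (re.compile(r"^cpe:2\.3:"), ["NVD"]),   # CPE -> NVD-style databases
]

def route_query(identifier: str) -> list[str]:
    """Return the databases a query should be forwarded to."""
    for pattern, databases in ROUTES:
        if pattern.match(identifier):
            return databases
    return []  # unrecognized identifier type
```

A real hub would also have to merge and harmonize the responses coming back from each database, which is where the hard (and AI-assisted) work would be; the routing step itself is the easy part.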

The SBOM Forum Vulnerability Database Project will initially focus on the first two questions. The group will collaborate on one or more documents to answer these questions. Rather than wait until the documents are complete, the group will publish a current draft every two months, to maintain interest in the project among the software security community and to invite feedback on the work so far.

When the first two questions are answered and the results have been published, the group can start work on the third question. Since the end result of that effort might be a workable design for the GVD, that effort could easily take multiple years.



Wednesday, April 17, 2024

Everything you always wanted to know about VEX (and TEA), but were afraid to ask


Two weeks ago, Steve Springett (leader of the OWASP CycloneDX and Dependency Track projects, and recently elected OWASP board member) and I recorded a podcast with Deb Radcliff, whose podcasts are widely followed in the software development community and are sponsored by CodeSecure. The podcast is called “VEXing SBOMs”, and you can find it here. Briefly, here are the main topics that we covered:

1.      We discussed use cases for SBOM and VEX.

2.      Steve discussed how SBOMs have become a natural part of the build pipeline.

3.      I pointed out that IMHO the number one reason why SBOMs are not being distributed to and used by software end users (i.e., the 99.9% - or so - of public and private organizations worldwide whose primary business is not software development) is the fact that there are currently no strict specifications for VEX on the two original VEX “platforms”: Common Security Advisory Framework (CSAF) and CycloneDX.

4.      I also noted that Anthony Harrison of the OWASP SBOM Forum has recently remedied that problem. This is a key step toward the goal that the SBOM Forum hopes to achieve before the end of 2024: starting a proof of concept in which end users benefit from the “full stack” of software component vulnerability management, namely utilization of SBOM and VEX to allow end users to learn about exploitable component vulnerabilities in their software, and ultimately to be able to quickly answer the question, “Where on our network are we vulnerable to (insert name of “celebrity vulnerability” du jour)?” You can read more about the proof of concept in Part 3 of my book (see below).

5.      Steve described the OWASP Transparency Exchange API project, which is described in this draft document. In my opinion, this will be the key enabler of distribution and use of SBOMs and VEX documents.

Thanks for inviting us, Deb!


Monday, April 15, 2024

Two months and counting

I’ve written a number of posts lately on the problems with the National Vulnerability Database (NVD); this one was the first. Briefly speaking, around the middle of February, the NVD greatly slowed the rate at which it incorporated new CVEs into the database (CVEs originate in the CVE.org database, which is run by the Department of Homeland Security. The NVD is run by NIST, which is part of the Department of Commerce).

In addition, the small number of new CVEs that have appeared in the NVD since mid-February don’t have CPE names with them (CPE is the only software identifier supported by the NVD). A CVE report without a CPE name is about as useful as a car without a steering wheel, since the whole point of a CVE report is to identify the product(s) affected by the vulnerability (i.e., the CVE). While CPE has a fixed specification and CPE names could in theory be generated automatically, the NIST staff members who run the NVD feel compelled to create each CPE name manually.
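To make the “could in theory be generated automatically” point concrete: the CPE 2.3 formatted-string layout is fixed, so given vendor/product/version metadata a name can be bound mechanically. This is a simplified sketch of that binding step, not the full algorithm in NISTIR 7695 (which has considerably more escaping rules); the hard part in practice is choosing the right vendor and product strings, not assembling them.

```python
# Simplified sketch of binding a CPE 2.3 formatted string from known
# vendor/product/version metadata. Real CPE binding (NISTIR 7695) has
# more escaping rules; this only lowercases, maps spaces to underscores,
# and escapes a few special characters.
def to_cpe23(vendor: str, product: str, version: str) -> str:
    def bind(value: str) -> str:
        value = value.strip().lower().replace(" ", "_")
        for ch in ("\\", ":", "/"):  # escape backslash first
            value = value.replace(ch, "\\" + ch)
        return value
    # cpe:2.3:part:vendor:product:version:update:edition:language:
    #   sw_edition:target_sw:target_hw:other  ("a" = application)
    return "cpe:2.3:a:%s:%s:%s:*:*:*:*:*:*:*" % (
        bind(vendor), bind(product), bind(version))
```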

However, it seems they’re not doing that very well, either. See the graph below, which was created last week by Patrick Garrity of VulnCheck. The X axis labels are very small, but each day of 2024 is a datapoint. On February 12, the “(CVEs) Analyzed” line (in green) flatlined. It has remained at an almost constant value since then, meaning almost no new CVEs have been analyzed in two months; since the NVD staff members only create a CPE name to go with a CVE when they “analyze” the CVE, this means that virtually no useful CVE reports (i.e., reports that link a CVE with one or more CPE names) have been added to the NVD since February 12.


[Graph by Patrick Garrity of VulnCheck: NVD CVEs “Analyzed” vs. “Awaiting Analysis”, by day, 2024]

Of course, this has not been due to a lack of new CVE reports coming from CVE.org. The red “(CVEs) Awaiting Analysis” line has steadily climbed since February 12. In other words, since February 12, new CVEs have appeared at their normal pace, but almost no new CVE reports have been analyzed by the NVD staff, meaning they still do not have CPE names.
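The “Analyzed” vs. “Awaiting Analysis” split in the graph corresponds to the vulnStatus field that the NVD’s API 2.0 attaches to each CVE record. As a sketch of how one might tally those statuses (the sample records below are illustrative, not real API output):

```python
# Tally NVD records by analysis status. NVD API 2.0 responses contain
# records shaped like {"cve": {"id": ..., "vulnStatus": ...}}; the
# sample below is illustrative data, not fetched from the live API.
from collections import Counter

def status_counts(vulnerabilities: list[dict]) -> Counter:
    return Counter(v["cve"].get("vulnStatus", "Unknown")
                   for v in vulnerabilities)

sample = [
    {"cve": {"id": "CVE-2024-0001", "vulnStatus": "Awaiting Analysis"}},
    {"cve": {"id": "CVE-2024-0002", "vulnStatus": "Awaiting Analysis"}},
    {"cve": {"id": "CVE-2024-0003", "vulnStatus": "Analyzed"}},
]
counts = status_counts(sample)
```

Fetching real records would involve paging through the NVD’s public cves/2.0 endpoint with date-range parameters; since February 12, a tally like this shows the “Awaiting Analysis” bucket swelling while “Analyzed” barely moves.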

What happened to cause this problem? NIST has put up about four or five notices since late February, the latest of which is this one. It has no explanation, of course, even though that’s been promised a couple of times. However, sometimes actions (or non-actions, in this case) speak much louder than words. Here is what I think NIST is really telling us:

1.      We still don’t fully understand what happened on Feb. 12. However, it wasn’t any sudden increase in new CVEs to analyze, any sudden decrease in staff, any sudden loss of funding, etc. The NVD has always been understaffed and underfunded, and new CVEs have increased most years.

2.      No matter what the cause of the problem (other than a direct nuclear strike), we would have been up and running within minutes of the event – if our infrastructure weren’t two decades old. Any important modern database is fully redundant, but we have always had single points of failure. Clearly one or more of these failed.

3.      Ironically, all of the data in the NVD is also in CVE.org, which utilizes a modern, fully-redundant database infrastructure. Why don’t we switch all queries to CVE.org, you ask? We refer you to Tom’s earlier statement: CVE.org is part of DHS, while we are part of the Department of Commerce. Maybe the two Secretaries will meet to work this out. And maybe Israel will sit down and have a good talk with Iran. But don’t count on either of these happening anytime soon.

4.      We would like to tell you that we’re working on the problem, but how can we do that, since we still don’t understand it? Instead, we’re going to tell you about an idea we discussed with the OWASP SBOM Forum a year ago, but never followed up on: a “consortium” of private companies that will help us fix our problems. That will take 9-12 months at a minimum to put into place, and even then, it’s not clear what this group could do to fix our ancient infrastructure. But we have to point to something that we’re going to do, rather than just say we’ll continue to run from crisis to crisis. But that’s the most likely outcome.

5.      Have a nice day!

To sum up, we’re two months into the NVD’s problem, and we still don’t have even a partial explanation of the problem, let alone a full one. And we definitely don’t have a solution!

What’s the next step, both for your organization and the US government? The next step is to figure out what the options are for the next step. The OWASP SBOM Forum is assembling a group to do exactly that, and expects the group to start meeting soon. Let me know if you’d like to participate in that, by contributing your time, your organization’s money, or both (participation does not require a contribution).


Thursday, April 11, 2024

It’s time to figure out this whole vulnerability database problem

Tom’s note: I sent out the notice below to the members of the OWASP SBOM Forum and it’s generated a lot of interest. It seems people agree with me (there’s a first for everything, I suppose!) that there are just too many threads to be pulled for one person, or just a small group of people, to figure out the best strategy (or strategies, more likely) for moving forward on this issue. If you’re interested in this, please let me know.

OWASP Foundation Vulnerability Database Project

Since mid-February, the amount of usable vulnerability data added to the National Vulnerability Database (NVD) has significantly declined compared to its previous average levels. This occurred without prior warning and has not yet been explained. While the level of production seems to be gradually increasing, NIST (which operates the NVD) has not estimated when it will return to normal levels.

Moreover, NIST has not announced any measures to prevent whatever caused the problem from recurring, other than describing their desire to form a “Consortium” of private sector organizations willing to help NIST fix the NVD’s problems. Since the Consortium will take a minimum of 6-9 months to put in place (and someone with more government experience than I have told me this week that, given what has to be done, three years might be a better estimate), and since it is unclear how the Consortium might be able to assist the NVD once it is in place, the Consortium is unlikely to have much of an impact on the NVD’s problems in the short or intermediate terms.

While everyone in the software security community hopes the NVD will be able to fix its problems, it is evident the community cannot count on the NVD being a dependable source of new vulnerability data going forward - although there is no danger that the data currently in the NVD will ever become unavailable. It is time to explore all options for providing dependable and comprehensive vulnerability data to users in the US and worldwide in the short, intermediate, and long terms.   

Fortunately, today there are multiple good vulnerability database options available in both the private and public sectors. These include CVE.org, the database operated by the Department of Homeland Security. This is the source of all CVE data (used in the NVD and other databases), and is based on a modern, fully redundant infrastructure, unlike the NVD’s – although it currently lacks the user interface of the NVD. At least one large software developer has switched to using CVE.org for most of the data it previously retrieved from the NVD.
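As a sketch of what “using CVE.org as a primary source” involves: CVE.org serves records in the CVE Record Format 5.x, where the affected products reported by the CNA live under containers.cna.affected. The record below is a trimmed, hypothetical example for illustration, not a real CVE.org response:

```python
# Sketch of pulling affected products out of a CVE Record Format 5.x
# document, the format served by CVE.org. The record below is a trimmed,
# illustrative example, not a real CVE.org response.
def affected_products(record: dict) -> list[tuple[str, str]]:
    cna = record.get("containers", {}).get("cna", {})
    return [(a.get("vendor", "n/a"), a.get("product", "n/a"))
            for a in cna.get("affected", [])]

record = {
    "cveMetadata": {"cveId": "CVE-2024-0000"},
    "containers": {"cna": {"affected": [
        {"vendor": "ExampleCo", "product": "ExampleServer"},
    ]}},
}
products = affected_products(record)
```

The missing piece, compared to the NVD, is the user-friendly search front end; the underlying data is already there, straight from the CNAs.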

The options also include multiple privately-run databases of open source software vulnerabilities, as well as databases that include all NVD data, along with enhancements not found in the NVD.

However, the wealth of vulnerability database options also poses a challenge, since it is hard to compare the databases. This is because they address different types of software (and devices), use different identifiers for that software, list different identifiers for vulnerabilities, are at different levels of maturity, have different relationships with data sources, etc.

Even more importantly, the characteristics of the different databases are not set in stone, and some are more adaptable than others. For example, both the NVD and CVE.org databases currently identify all software and intelligent devices using “CPE names”. In 2022, the OWASP SBOM Forum described the many problems with CPE names and the superiority of purl (Package URL) identifiers for open source software, in this white paper. CVE.org currently accepts (on a trial basis) the “CVE JSON 5.1 specification”. That spec (thanks to a pull request submitted in early 2022 by Tony Turner of the SBOM Forum) makes it possible for CVE.org to utilize purl identifiers, once those are added to CVE reports by the CVE Numbering Authorities (CNAs). However, it will be at least 2-3 years before the NVD supports the 5.1 spec and is thus able to accept purl.
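The practical difference between the two identifier types can be seen side by side. A purl is assembled from facts the supplier already knows (package type, namespace, name, version), so no lookup in a central dictionary is needed; a CPE name, by contrast, must match whatever strings the NVD happened to assign. A minimal purl parser (simplified: it ignores qualifiers and subpaths from the full spec) makes the point:

```python
# Minimal purl parser (simplified: ignores the qualifiers and subpath
# components of the full purl spec). A purl is built from facts the
# developer already knows, so no central dictionary lookup is needed.
def parse_purl(purl: str) -> dict:
    assert purl.startswith("pkg:"), "not a purl"
    body = purl[len("pkg:"):]
    body, _, version = body.partition("@")
    ptype, _, name = body.partition("/")   # name may itself contain "/"
    return {"type": ptype, "name": name, "version": version}

# The same open source component under both identifier schemes:
cpe = "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"
purl = "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
parts = parse_purl(purl)
```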

The OWASP SBOM Forum believes it is important now to examine the near- and intermediate-term vulnerability database options that are available, both to end user organizations (anywhere in the world) and to government agencies. There are two main reasons for doing this:

1.      Users of software vulnerability data need to determine what are their best options for obtaining the current vulnerability data they need, and the advantages and disadvantages of each option. Some users may decide to utilize multiple vulnerability databases, not just one, and thus will want to know what each option provides them.

2.      The US government needs to decide its best options for allocating its investments in vulnerability reporting - if there will even be more investment. Investing heavily in a database with an out-of-date infrastructure would not be a good idea, assuming more up-to-date options exist.

Rather than simply having a small number of people write a white paper, the SBOM Forum wishes to establish a working group that is open to all interested parties in any country. It can include end user organizations, software developers, operators of public and private vulnerability databases, individuals who work for government agencies, vendors of vulnerability management tools, and more. The group will probably hold bi-weekly meetings in the morning US time, to allow as much European participation as possible. However, since the document(s) will be developed cooperatively, anyone will be able to participate in drafting them, regardless of their time zone.

Because OWASP SBOM Forum members will need to devote a significant amount of time to this project, they will need to receive some compensation. Since the OWASP Foundation is a 501(c)(3) nonprofit corporation (a type of nonprofit to which donations are often tax deductible), and since donations to the OWASP Foundation that are over $1,000 can be directed to an OWASP project such as the SBOM Forum, we are requesting that organizations or individuals that are concerned about having a robust software vulnerability management ecosystem contribute to this effort.[i]

You are free to donate any amount you would like, or not to donate at all. Any donation of $5,000 or more will be acknowledged with your logo on our website, assuming you would like to do that. Note that participation in this project does not require any donation.

If we receive sufficient donations and there is interest, the SBOM Forum will extend the project to consider the longer term. In this extension, the question will change from “What are the options in the near and intermediate terms?” to “What is the optimal global vulnerability database structure long term?”

It is close to certain that the optimal long term vulnerability database option is an international one, funded by both public and private organizations but not operated by a single government or for-profit organization. One model (or even a final home) for that option might be the Internet Assigned Numbers Authority (IANA), which operates DNS and performs other functions that support the global internet.

IANA (now part of ICANN) was originally operated by the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce, but is now internationally governed. The global vulnerability database would need to be “incubated” initially by a consortium of private- and public-sector organizations, just as DNS was incubated by the NTIA.

Should the long term project move forward, it is likely the group will consider the idea of not having a single database at all. Instead, there could simply be a federation of existing vulnerability databases, linked by an AI-powered “switching hub”. That hub would route user inquiries to the appropriate database or databases and return the results to the user. Using this approach would of course eliminate the substantial costs required to integrate multiple databases into one, and to maintain that structure. It would also probably eliminate any need to “choose” between different vulnerability identifiers (e.g., CVE vs. OSV vs. GHSA, etc.) or different software identifiers (CPE vs. purl).

We hope your organization will decide to participate in this important project and will also consider donating to it. Please contact Tom Alrich at tom@tomalrich.com with any questions.



[i] Donations by credit card can be made online and directed to the OWASP SBOM Forum, by going to our OWASP site. While the process is straightforward, we request that you email Tom Alrich before donating. For non-credit card donations, please email Tom. Note that OWASP retains 10% of the donation for administrative purposes (although any tax deduction will apply to the entire donation). Given the amount of work that SBOM Forum members would have to do if we were running our own nonprofit organization, we consider this to be quite acceptable.

Wednesday, April 10, 2024

Are you sure this is “critical” infrastructure?

 

My friend Mike Barlow put up a great post on LinkedIn this week, which points out a huge irony regarding critical infrastructure (including most devices that run power substations, gas pipelines, oil refineries, etc.): While CISA and others are constantly advocating for use of “memory safe” programming languages for new software and firmware, most legacy devices (whether or not they’re for critical infrastructure) operate on definitely-non-memory-safe languages like C and C++.

Mike summarizes this situation quite succinctly: "…your exercise app is probably more secure than the code running at your local electric power station." Does that make you feel safe?

What’s there to be done about this? I dunno. Replacing all that equipment will be tremendously expensive, although obviously any replacement efforts should start with the most critical equipment. Perhaps baby monitors can be left ‘til the end, although I imagine that, being much newer than for example some electronic relays deployed in power substations, the baby monitors have much safer code than the relays.

This is a good example of “technical debt”. We – and probably the rest of the world, except countries with much newer infrastructure, perhaps due to having just come through a war – have a lot of such debt to pay. Of course, I doubt there’s a line anywhere in the federal budget about paying technical debt. As often happens, we’ll wait ‘til things start breaking down. 


 

Saturday, April 6, 2024

NERC CIP: Is there a shortcut to the cloud?


As I pointed out in this post in January (and have many times in previous years), NERC entities with medium and/or high impact BES Cyber Systems, Electronic Access Control or Monitoring Systems (EACMS), and Physical Access Control Systems (PACS) can’t currently make full use of the cloud, unless they want to risk violating a number of CIP requirements literally every day of the year.

As more and more software and service providers (including security service providers) announce their software or services will only be delivered in the cloud in 1-2 years, there is real concern (including among NERC and Regional Entity staff members) that there could soon be significant impacts on both grid reliability and grid security. One major ISO has said they will need to lower their security rating in two years, due to their security service providers ceasing to offer an on-premises option.

While there will soon be a process underway to make all the changes to the CIP standards (and the NERC Rules of Procedure) that are needed to make use of the cloud fully “legal”, that process will take probably six years, and maybe longer than that. That process needs to continue, but it clearly won’t finish before the reliability and security impacts begin to be felt.

Why do we have this problem? It isn’t because the language of the current CIP requirements prohibits use of the cloud. Those requirements say nothing at all about the cloud. This is because they were originally drafted starting in 2008, when use of the cloud was considered quite risky by most NERC entities (and certainly by FERC). Of course, even today it’s doubtful that many organizations of any type think of the cloud as risk-free. However, the huge number of successful attacks against on-premises systems shows on-prem isn’t risk-free, either.

Instead, the reason why we say the current CIP standards prevent use of the cloud for medium and high impact systems is that the cloud service provider would never be able to provide the evidence the NERC entity needs to prove compliance with CIP-005 R1, CIP-007 R2, CIP-010 R1 and other current requirements. As everyone who is involved with NERC compliance knows, “If you don’t have evidence that you did it, you didn’t do it.”

Why is this the case? I’m going to let you in on a dirty little secret about CIP: There are more “implicit requirements” than “explicit requirements”. Explicit requirements are the ones listed in the standards, while implicit requirements are unwritten, but implied by explicit requirements. In other words, while a NERC entity can’t be cited for violating an implicit requirement, often performing an implicit requirement is a prerequisite for complying with an explicit requirement. If you don’t “comply” with the implicit requirement, you’ll be in violation of the explicit one.

One of the most important implicit requirements in CIP has to do with the fact that BES Cyber Systems (BCS) are defined simply as collections of one or more BES Cyber Assets (BCA). BCAs are physical devices you can point to, while BCS are just a function performed cooperatively by multiple devices. You can’t point to a BCS.

The problem arises because none of the CIP requirements today mentions BCAs, only BCS. For some of the requirements, like the ones for training, policies, etc., this is fine. However, think about CIP-007 R2 for patching. It applies to BCS, but do you apply a patch to a system or to a device? Of course, you patch the device. So, complying with CIP-007 R2 in fact requires that you first comply with the implicit requirement that says something like, “Everything required by each of the parts of CIP-007 R2 needs to be repeated for every device included in the BCS.”

Now, think of what the CSP would have to do to provide evidence of compliance with just CIP-007-6 R2.2, if you put a medium or high impact BCS in the cloud today. Every 35 days, they would need to:

1.      Inventory all the physical devices on which any portion of that BCS has resided during the last 35 days (of course, systems in the cloud are always spread across many servers and data centers, and they are moved around all the time. That’s how the cloud works). Each of those physical devices needs to be included in the inventory, even if just a small part of the BCS resided on it for just a few seconds.

2.      Inventory every piece of software that is installed on any of those devices (no matter which cloud customer the software belongs to) and inquire with the developer of the software whether they have released any patches in the past 35 days.

3.      For each of those patches, evaluate it for applicability to the system the software is part of. This will be impossible, since those systems are unlikely to be ones your organization has anything to do with; they may be owned by entities in completely different industries, by foreign nationals, etc.

And remember, the CSP will need to perform all these actions and document that they did so, despite the fact that their normal patching procedures (or perhaps those of their customers, since most CSPs follow a “shared responsibility” model) may have already patched all of the software products identified in step 2. For prescriptive CIP requirements like CIP-007 R2, CIP-005 R1 and CIP-010 R1, the only evidence of compliance is evidence that the exact steps mandated by the requirement (and by any implicit requirements, as described above) were followed.

Do I need to go on? I didn’t think so. Remember, these three steps (which might need to be performed thousands or even tens of thousands of times) cover just one part of one requirement. Think of what the CSP would need to do to provide evidence to a NERC entity proving compliance with every part of every requirement for, say, 200 BCS over a three-year audit period. The CSP would have to provide many millions of pieces of evidence to the entity.
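To get a feel for the scale, here is a rough back-of-the-envelope calculation in Python for CIP-007 R2.2 alone. Every per-item count below is an illustrative assumption of mine, not a figure from NERC or from any CSP; the point is only that the multiplication gets into the millions very quickly.

```python
# Back-of-the-envelope estimate of evidence volume for CIP-007 R2.2 alone.
# All counts are illustrative assumptions, not real NERC or CSP figures.

bcs_count = 200              # BCS in scope (the example above)
audit_years = 3              # length of the audit period
cycles_per_year = 365 / 35   # 35-day patch evaluation cycles per year
devices_per_cycle = 500      # assumed physical hosts a cloud BCS touches per cycle
patches_per_device = 3       # assumed new patches to evaluate per device per cycle

evidence_items = (bcs_count * audit_years * cycles_per_year
                  * devices_per_cycle * patches_per_device)

print(f"{evidence_items:,.0f} evidence items")  # prints "9,385,714 evidence items"
```

Even with these modest assumptions, a single requirement part generates roughly nine million evidence items; the real totals across all prescriptive requirements would be far higher.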

Thus, if you signed a contract to put just one BCS in the cloud and then got on a call with the CSP to explain what they need to do to gather the evidence you need, you would probably lose them as soon as you described the first step: I strongly doubt any CSP maintains a log of every device on which a piece of a single BCS might have resided over one week, let alone three years. The CSP would apologize for not properly understanding what you needed when you first signed the contract with them, but they would have to tell you that neither they nor any other CSP could ever provide you with compliance evidence for even a single BCS for one week. There’s simply no way they could do that, without completely breaking their business model.

However, recently it occurred to me that perhaps CIP-013-2 might offer a way to include cloud services within the scope of compliance, even for NERC entities with medium or high impact systems. Consider this:

1.      A NERC entity decides to implement a single medium impact BCS in the cloud. They do this by signing a contract with the CSP.

2.      CIP-013-2 R1.1 says the scope of R1 is “the procurement of BES Cyber Systems and their associated EACMS and PACS to identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services.” (my emphasis) In signing the contract, the entity isn’t procuring any hardware or software products from the CSP, but they’re certainly procuring services.

3.      Therefore, the relationship between the entity and the CSP falls into the scope of CIP-013. The entity should treat the CSP the same as they would treat any other service provider in scope for CIP-013.

Because the cloud service is one of the products and services that needs to be addressed in the NERC entity’s supply chain cybersecurity risk management plan, the entity will need to include it as one of the procured items in the plan. Just as they must do for all other procured products and services in scope, the entity needs to describe in the plan how they will “identify and assess cyber security risk(s) to the Bulk Electric System” arising from the cloud service.

What are these risks? At a minimum, they need to include the six risks described in R1.2.1 through R1.2.6.[i] But of course, there are lots of other risks that apply to cloud service providers. Rather than leave it up to each NERC entity to decide what those risks are, NERC will need to provide a list of types of cloud risks that must be addressed in the plan.

Since the CSPs will never permit every NERC entity to audit them, NERC could conduct an “audit” themselves. An important component of the audit might be reviewing the CSP’s FedRAMP authorization documentation to determine whether it meets a set of criteria established in advance. It would be up to the entity to decide whether to accept NERC’s audit in whole, in part, or not at all. Of course, there might be other steps in the CIP-013 cloud compliance process as well.

By following CIP-013, the NERC entity no longer needs evidence that the CSP has complied with requirements like CIP-007 R2, any more than they need evidence that other vendors addressed in CIP-013 (e.g. the vendor of relays used in substations) have complied with CIP-007 R2. However, the utility will need to provide evidence of the CSP’s compliance with CIP-004-7 Requirement Parts R3.4, R6.1, R6.2, and R6.3 (and perhaps one or two other Requirement Parts in CIP-004-7). Some CSPs may balk at providing this documentation, but given that the alternative (being required to provide reams of documentation that is literally impossible to produce) is much worse, they will hopefully agree this isn’t such a terrible fate.

Where’s the catch? For one thing, I’ll admit I’ve taken some liberties with the term “services”. There’s not much doubt that the CIP-013 drafting team never intended “services” to include cloud services – but they never defined the term, so there’s no way to know what they intended (moreover, the Rules of Procedure don’t provide any mechanism for considering the drafting team’s intentions as part of a CIP audit). I’ll also admit that the NERC “audit” of the CSPs, which I described above, would require changes to the Rules of Procedure, or perhaps some sort of temporary waiver. There will need to be some sort of intervention by someone at NERC (most likely the Board of Trustees) to smooth the path for this change.

But it’s important to keep the big picture in mind: like it or not, if no change is made in the next two or three years, the choice will be between accepting a lower level of security and (perhaps) reliability for the power grid, and allowing NERC entities with high and/or medium impact environments to utilize cloud-based software and services while being in technical violation of a host of CIP requirements. Moreover, the latter option will likely be rushed into place during a grid emergency, as opposed to taking advantage of the time available now to carefully plan and implement the CIP-013 option.

You pays your money and you takes your choice.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] These six items are included in the standard because FERC mentioned each of them at various places (and in varying contexts) in Order 829 in 2016. They’re not there because the Standards Drafting Team (or FERC itself) considered them to be the most serious supply chain cybersecurity risks. The Responsible Entity is still required to examine risks and determine for themselves which ones are important enough to mitigate and which ones are not.