Wednesday, October 30, 2024

Sorry for the late notice…

I just learned that a NERC webinar that was originally scheduled for tomorrow, but which I thought had been postponed, will in fact take place. It is the fourth in a series sponsored by the NERC Cloud Technical Advisory Group (CTAG) and SANS. It will be at 1 PM Eastern Time on Thursday, October 31 (yes, Halloween).

The webinar should be good. It will feature Maggy Powell, formerly with Exelon and now with AWS, and Mikhail Falkovich, formerly with ConEd and now with Microsoft. You can register for the webcast, as well as access the recordings of the previous three CTAG/SANS webinars, here. If you forget to register in advance, I believe you will still be let in, but it’s better to register!

Note this is just the first NERC webcast regarding the cloud this week. The all-day webcast I wrote about last week will still take place on Friday – in fact, both Thursday speakers will participate on Friday as well, as will I.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Tuesday, October 29, 2024

The NVD’s problems deepen

This morning, one of the members of the OWASP SBOM Forum sent around an update to the group on how the National Vulnerability Database (NVD) is doing in their quest to reduce their current huge backlog of “unenriched” vulnerability records – namely, new CVE Records that don’t have any CPE identifier attached to them. Not having an attached CPE means that searching the NVD for a particular product will never identify any of those CVEs, even though the product might be vulnerable to one or more of them.

The only way to know for sure whether any of those CVEs affect that product is to manually search through the text of every unenriched CVE report. How many are there? I pointed out at the beginning of October that currently there’s a backlog of over 18,000 unenriched CVE records, which is over 2/3 of the new CVEs identified this year. Moreover, that backlog continues to grow.

Did the SBOM Forum member have progress to report? That depends on what you call “progress”. He had been hoping that on October 1, the first day of the federal government’s fiscal year, the NVD would begin a concerted effort actually to reduce their backlog. Alas, that was not to be. He reported:

Starting October 1st…CPE assignments have fallen off significantly as a percentage of new CVE assignments. Essentially, the backlog has increased by around 1,000 since the week of September 23rd…I was hoping NVD’s CPE assignment was going to essentially catch up. I was optimistic in September, but that is no longer the case. 

Thus, the NVD now faces a backlog of over 19,000 unenriched CVE records (after promising last May to reduce the backlog to zero by September 30). Will they ever turn this situation around? I have no idea, but I do know it's foolish to pin our hopes on their doing so. They have disappointed us at every step of the way since their problems (still never officially explained) started on February 12.

Given that no vulnerability search on the NVD will yield accurate results unless you only care about vulnerabilities that were identified before 2024, what alternative vulnerability databases are there? There are several databases that are based on the NVD, which have conducted their own enrichment of some of the unenriched CVEs – i.e., they have created their own CPEs and added them to the CVE records in their database. However, it’s important to keep in mind that the CPE identifier (which is only used by the NVD and its derivatives) has a lot of problems; there’s no such thing as a “definitive” CPE, so a CPE created for one database won’t necessarily match one created for another (of course, this in itself shouldn’t be a big problem if you confine all of your searches to one of those databases).

If your primary concern is vulnerabilities in open source software, you’re in luck, since there are multiple good vulnerability databases to choose from for open source (including OSV, OSS Index, GitHub Security Advisories, and others).

What makes the open source vulnerability databases so good? They are literally all based on the purl identifier. Purl is highly reliable and, most importantly, doesn't require a lookup against a centralized list of identifiers, as CPE does. In other words, every open source product that is available in a package manager (which is the majority of those products) has an intrinsic purl that the user can create on their own, as long as they know the name of the package manager they downloaded the product from, along with the product's name and version string in that package manager. Since vulnerabilities are reported using the same purl, the user should always (barring error, of course) be able to learn of all vulnerabilities that apply to the product in a database search.
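To make that last point concrete, here is a minimal sketch (in Python, with the npm package lodash chosen purely as an illustration) of a user assembling a purl on their own from information they already have. Real purl construction also involves percent-encoding and namespace rules that this simplified sketch skips.

```python
# Minimal sketch: building a purl from information the user already has.
# No lookup against any central database is needed. The package chosen here
# (lodash from the npm package manager) is just an illustration.
package_manager = "npm"     # the purl "type" corresponding to the package manager
name = "lodash"             # the product's name in that package manager
version = "4.17.21"         # the version string the user downloaded

purl = f"pkg:{package_manager}/{name}@{version}"
print(purl)  # pkg:npm/lodash@4.17.21

# A vulnerability database that indexes records by purl can be queried with this
# exact string; if the supplier reported a vulnerability using the same purl,
# the search will find it.
```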

Just as importantly, purls don't need to be created by any central authority, so the NVD's current problem of being unable to create CPEs as fast as they are needed could never have arisen if the NVD were based on purl.

However, there is currently a big downside to purl: it can only be used to identify open source software, not proprietary software; CPE can be used to identify both. Moreover, there is currently no alternative identifier for proprietary software other than CPE. Thus, given that the NVD has fallen on hard times, it can truthfully be said that today there is no trustworthy way to conduct an automated search for vulnerabilities in proprietary software. Of course, for most organizations, automated searches are essential to an effective vulnerability management program.

This raises the question, “Can purl be somehow extended to cover proprietary software, and will it be almost as dependable in that domain as it is for open source software?” The answers to those two questions are yes and yes. Then the question arises, “What needs to be done to make this happen?”

I'm glad you asked. The OWASP SBOM Forum has identified two likely paths by which purl can be expanded to cover proprietary software. We want to flesh out the details of both paths and test them in a proof of concept. We have scoped out a project to do that and are looking both for volunteers to contribute to the project and for modest financial support to make it happen (support can take the form of donations to OWASP that are directed to the SBOM Forum; OWASP is a 501(c)(3) nonprofit organization).

You can read about the project here. Please email me if you would like to discuss this more.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, October 28, 2024

What’s the connection between AI growth and coal-fired power plants?


GridSecCon is the premier power grid security conference and exhibition, sponsored annually by NERC and the E-ISAC. I have only missed two of the onsite events since the first one in 2011, and I was quite pleased to attend the 2024 event in Minneapolis last week. As usual, it was a very informative conference and a great opportunity to interact with a lot of people who are involved in ensuring the cyber and physical security of the North American power grid. I want to thank the corporate sponsors of the event, as well as NERC and the E-ISAC.

Without a doubt, the most memorable presentation of the week was the one by Sunny Wescott of CISA, whose title there is ISD Chief Meteorologist. I doubt many people who saw her presentation – delivered to the entire conference on the morning of the first day – will disagree that it was one of the most powerful talks they have ever witnessed. She has given this presentation to multiple audiences (and will continue to, I'm sure). You can find several videos of her presentation from other venues on YouTube by searching on her name, but I also recommend you see her live if you ever get the opportunity.

Andy Bochman of Idaho National Laboratory wrote an excellent post on her presentation on LinkedIn, but my summary of what she said is, “We’re facing tremendous challenges due to climate change. They are coming at a faster pace, from a million different directions, than we ever imagined was possible. At this point, we can’t eliminate those challenges, but there’s a lot that we can do – especially on the local level – to prevent them from leading to unmitigated disaster.”

However, there was another very powerful talk during the conference; this one was by Andy. Since it was in a breakout session, it was only witnessed by a fraction of the number of people who saw Sunny’s presentation, but I know a lot of people considered his talk to be at least the second most powerful of the conference. I certainly did.

Andy summarized his talk (in the same LinkedIn post) as, “about the risk of suppliers putting generative AIs, prone to hallucinations and emergent behaviors in control centers, and I also extended the topic to address ultra-realistic AI-boosted disinformation including deepfakes that could spoof operators into taking harmful actions.” He didn’t say that AI should be banned from grid control systems altogether, but he did say we need to be very careful about deploying it on those systems.

A week ago, I would have said that a presentation on the impact of climate change and a talk on dangers posed by indiscriminate deployment of AI in grid control centers would both be interesting, but they wouldn’t have anything in common. However, I now realize the two topics are very closely linked.

The link between the two topics became clear when someone mentioned to me something I hadn't heard before: Coal Creek Station, a 1,000 MW coal-burning generating plant (which I visited 7 or 8 years ago) in the middle of wheat fields in North Dakota, was purchased by a data center provider in 2022 to power a new data center to be built nearby. Thus, the plant will most likely continue operating for decades to come.

Like a lot of people, I had heard of a couple of deals in which output from a nuclear plant (or at least one unit of the plant) was committed to a data center provider – most notably, Microsoft's signing of a 20-year power purchase agreement that will allow Constellation Energy to restart Unit 1 of the Three Mile Island nuclear plant in Pennsylvania (Unit 2 was shut down after the famous 1979 incident, but Unit 1 wasn't affected by it). Note that Unit 1 has 837 MW of capacity, which is less than Coal Creek's capacity.

However, I was startled when I searched for more information on the Coal Creek deal and I found this article from Power Magazine. It doesn’t even mention Coal Creek, but it makes clear that coal-fired plants all over the US are getting a new lease on life for one main reason: The huge power needs of AI can’t be satisfied just by the rapid increase in renewable energy production. Not only must renewable energy increase, but fossil fuel production – especially coal – can’t decrease for the foreseeable future.

In other words, if coal plants continue to close (or be scheduled for closure) at the rate they have over the past decade, the North American grid clearly won’t be able to satisfy both normal power demand (which wasn’t growing quickly before the AI boom) and AI demand. As Power pointed out, coal-fired generation has a new lease on life (and that will inevitably be the case worldwide, not just in North America, although the article doesn’t mention that). While this is good news for people whose jobs depend on coal plants (and who might have to move and take a pay cut, if they want to work in renewable energy), it isn’t good news for the fight against climate change.

Is somebody doing something wrong here? After all, workers in the coal plants, like most of us, would like to keep working in a job they understand and can perform well. The data center operators want to obtain the power they need to fulfill the orders flooding in from tech companies. Tech companies like Microsoft are trying to keep ahead of their competitors – and today, that means going all in on AI. The public is already benefiting from AI in many ways; people would be quite reluctant to see those benefits stop growing because AI's power use was somehow disfavored by the public and private organizations that operate and regulate the grid.

Nobody is doing anything wrong. Yet at the same time – as Sunny Wescott's presentation cogently demonstrated – we need to do everything we can to keep the rate of acceleration in climate change (i.e., the second derivative; we're beyond being able to control the first derivative) from increasing any more than it already has. Are we all simply SOL, and will our grandkids ultimately end up needing to find another planet to live on?

I don’t think so, because I think there’s one link in this seeming circle of doom that can be broken: AI needs to figure out how to use much less energy than it currently uses, while not cutting back on the substantial benefits it is currently providing and will provide for society. This might be achievable by examining the fundamental assumptions on which what is currently called artificial intelligence is based.

I will do exactly that in a post that’s coming soon to a blog near you. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, October 24, 2024

NERC Cloud Services Technical Conference Nov. 1

NERC has for a long time wanted to have a technical conference to address compliance and security issues with use of the cloud by NERC entities. I’m pleased to announce that it’s now scheduled for next Friday, November 1 (sorry for the late notice. NERC will put out an official announcement soon, hopefully today). It will run from 10 AM to 5 PM Eastern Time. Registration is available here.

The agenda is very well thought out. It consists of four panels; I will participate in the third. The fourth panel is an update from the new "Risk management for third-party cloud services" drafting team. I've seen the questions that will be asked of each panel, and I can assure you that all of the panels will be worth your time if you're available (of course, you can drop in and out of the webcast as your schedule permits).

I hope to see you there!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, October 20, 2024

How can we (really) automate software vulnerability identification?

I probably don’t need to tell you that vulnerability management is important for any organization, public or private, that uses software. If you’re not convinced of this, all you need to do is look at devastating ransomware attacks like WannaCry, NotPetya and Ryuk. All of these exploited known vulnerabilities for which patches were available.

I also probably don’t need to tell you it is impossible to manage vulnerabilities that affect software you use, if you can’t learn about them using frequent, fully automated searches – in which you enter an identifier for a software product and version and immediately discover all recently identified vulnerabilities that affect that product and version.

Yet, that is the situation today: The most widely used vulnerability database in the world is the US National Vulnerability Database (NVD). However, because of the NVD’s currently huge backlog of “unenriched” CVE (vulnerability) records dating from February of this year, any search for vulnerabilities that apply to a particular software product and version will yield on average fewer than one third of the vulnerabilities that have been identified this year for that product and version. Even worse, the NVD provides no warning about this situation.

This is analogous to a doctor who stopped studying new diseases eight months ago and can only diagnose diseases that were identified before then – yet never warns his patients that they might have contracted a disease he hasn't yet learned about. In both cases, the end user/patient is far more likely to be harmed by not knowing about a vulnerability/disease they face than they are to benefit from that ignorance. Ignorance is not bliss.

However, the NVD’s biggest problem isn’t their current backlog, but the fact that the CPE (“common platform enumeration”) software identifier that is required for all vulnerability lookups in the NVD has many problems - and there is no good solution for them. These problems cause many searches to fail, without any explanation for the failure. Even worse, the user will usually not even be informed that the search has failed.

In 2022, the OWASP SBOM Forum (which I co-lead) published a white paper on the CPE problem in the NVD. The central argument of that paper was that the purl (package URL) software identifier is far superior to CPE, and that CVE.org (the Department of Homeland Security-sponsored organization that oversees the CVE Program) and the NVD should move as quickly as possible toward supporting both purl and CPE. After writing that paper, we submitted a "pull request" to CVE.org to add purl support to CVE records. That change took effect when the CVE 5.1 specification was approved earlier this year.

However, the 5.1 specification alone didn't solve the problem. The CVE Numbering Authorities (CNAs) that create CVE records – that is, that report new vulnerabilities in software products, usually products developed by their own organizations; Microsoft, Oracle, Red Hat, Schneider Electric and HPE are all CNAs – need to start adding purls to those records, yet few if any have done so thus far. One reason for this is that, even if the CNAs started doing that, the purls would be "all dressed up with nowhere to go", since neither the NVD nor the CVE.org database currently supports searching by purl.

But there’s an even bigger problem: While purl has literally conquered the world of open source software, it can only be used to identify a tiny percentage of proprietary software products with vulnerabilities today. This means a user of a proprietary software product cannot look that product up in the NVD using purl; instead, they must use CPE. Purl can never be on an equal footing with CPE until it can be used to identify proprietary software products, not just open-source products.

The OWASP SBOM Forum has decided this is an unacceptable situation, especially since purl eliminates most of the problems that affect CPE. We are asking, “What will it take to give purl the capability to identify proprietary (closed-source) software, as well as open-source?”

Fortunately, two very smart individuals are members of the Forum. One is Steve Springett, creator and leader of two of OWASP’s major projects: Dependency-Track (which performs over 20 million automated vulnerability lookups every day - although few of these use the NVD. In fact, D-T mainly uses Sonatype’s OSS Index, an open source vulnerability database that is based on purl) and CycloneDX. The other is Tony Turner, the cybersecurity expert and SANS instructor who co-leads the SBOM Forum with me, along with Jeff Williams of Contrast Security.

Both Steve and Tony are quite familiar with purl, since they are both part of the project team. In fact, in the “early days” of purl (which were less than ten years ago, believe it or not), Steve worked closely on the design with Philippe Ombredanne, the creator of purl (who is also a member of the SBOM Forum). When the SBOM Forum developed our paper in 2022, Steve described two ideas for how to expand purl to identify proprietary software.

Before I explain Steve’s ideas (one of which Tony came up with separately), I need to point out the most important feature of purl: It isn’t based on a centralized “namespace” like CPE is. CPE names are created by contractors who work for the NVD (which is part of NIST). Unless one of those contractors creates the CPE name, it isn’t valid[i].

If a CNA or software user wants to learn the CPE name for a software product, they must use a variety of methods to find it – fuzzy logic, generative AI, prayer, etc. There is a centralized “CPE database”, but it is simply a list of all the CPEs that have ever been created, without any contextual information. As Bruce Lowenthal of Oracle has pointed out, this would be like listing all the words in the Bible in alphabetical order and calling that an English dictionary.

By contrast, purl creates a decentralized namespace. The purl specification defines a set of one-word types, which today mostly refer to package managers for open source software (e.g., the "maven" type refers to the Maven Central package manager). All you need to know about a package manager for now is that it is a single web location from which you can download software, as long as you know the name of the product and its version string. Since a single product/version pair can never be duplicated within a package manager, each pair is unique. Therefore, each package manager has a controlled namespace.

What’s more important is that the combination of three pieces of information – package manager (type), product name within the package manager, and version string - is guaranteed to be unique within the entire purl namespace (i.e. across all purl types). What’s even more important is that the user of the product doesn’t have to query a central database to find out the purl for their product. The user can create the purl on their own, using information they already have.

To create the unique purl, the user just needs to know the type (package manager), and the name and version string in that package manager. For example, the purl for version 1.11.1 of the Python package named “django” in the PyPI package manager is “pkg:pypi/django@1.11.1”.[ii]
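Here is a brief sketch of that same construction in code, assuming the open source packageurl-python library (one of several purl implementations) is installed; the library also parses an existing purl back into its components.

```python
# Sketch of building and parsing the purl from the example above, assuming the
# packageurl-python library is installed (pip install packageurl-python).
from packageurl import PackageURL

# The three pieces of information the user already knows: the purl type (i.e.,
# the package manager), the product name, and the version string.
purl = PackageURL(type="pypi", name="django", version="1.11.1")
print(purl.to_string())  # pkg:pypi/django@1.11.1

# The same string can be parsed back into its components.
parsed = PackageURL.from_string("pkg:pypi/django@1.11.1")
print(parsed.type, parsed.name, parsed.version)  # pypi django 1.11.1
```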

Of course, even though the user can always re-create the correct purl for the product, that will only help them identify a vulnerability if the supplier reports vulnerabilities in that product/version to CVE.org[iii] using the same purl; that way, the purl the user enters in the vulnerability database will match the purl on the CVE record. This is how CPE is supposed to work, but since it's impossible to know for certain what CPE the NVD contractor actually created, there can never be the same certainty with CPE.

For example, if the contractor used "Microsoft" as the vendor name, the resulting CPE will be different from one that uses "Microsoft, Inc." A user who is trying to learn about vulnerabilities in a Microsoft product and who creates a CPE according to the CPE specification will have to guess which vendor name the contractor used, since the two names produce different CPEs.

What is worse is that if they guess wrong and search on the wrong CPE, they will simply be informed that “There are 0 matching records”. This is the same message they would receive if they had guessed correctly, but there are no vulnerabilities listed in the NVD that apply to that product/version (which might be interpreted to mean the product/version has a “perfect record”). There is no way for the user to learn which is the case.
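A toy sketch (using entirely made-up data, not real NVD records) illustrates why this silent failure is so dangerous: the wrong vendor guess and a genuinely clean product return the very same answer.

```python
# Toy illustration of the silent-failure problem; the CPE strings and the CVE ID
# below are invented for this example and are not real NVD data.
cve_records = {
    # Keyed by whatever CPE the NVD contractor happened to create
    "cpe:2.3:a:microsoft:exampleproduct:1.0:*:*:*:*:*:*:*": ["CVE-2024-00001"],
}

def search(cpe: str) -> list[str]:
    """Return the CVEs recorded against an exact CPE string (or nothing)."""
    return cve_records.get(cpe, [])

guess_1 = "cpe:2.3:a:microsoft:exampleproduct:1.0:*:*:*:*:*:*:*"
guess_2 = "cpe:2.3:a:microsoft_inc:exampleproduct:1.0:*:*:*:*:*:*:*"

print(search(guess_1))  # ['CVE-2024-00001'] -- the user guessed the vendor name correctly
print(search(guess_2))  # [] -- indistinguishable from "this product has no vulnerabilities"
```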

With purl, as long as the user knows the package manager they downloaded the product from and the product’s name and version string in that package manager, they should always (barring a mistake) be able to create the same purl that the supplier used when they reported the vulnerability. This is why purl has literally conquered the open source software world. In that world, it would be difficult even to say there is a number two software identifier after purl.

Of course, the key to purl's success is the existence of package managers in the open source world; it would be much more difficult to create a distributed namespace without them. That raised the question in a few creative people's minds: Is there an analogue to package managers in the proprietary software world? At different times, both Steve and Tony realized that the answer to this question is yes: it's app stores.

Like package managers, app stores (these include the Apple Store - which is in fact five stores - as well as Google Play and the Microsoft Store, although there are many smaller stores as well) do the following:

1.      Provide a single location from which to download software;

2.      Control the product namespace within the store, so that each product has a unique name; and

3.      Ensure that each version string is unique for the product to which it applies. For example, the product named Foo won’t have two versions that have the same version string, say “4.11.6”.

In other words, app stores can probably be treated in purl like package managers are treated today. Each app store will have its own purl type, just like package managers do now. Perhaps the most impressive aspect of adding app stores to the purl ecosystem is that, as soon as a purl type is created for a new store, all the products in that store (for example, Google Play currently contains about 3.5 million products) will instantly have a purl. No NVD employee or contractor (or anyone else) needs to do anything to enable this to happen.
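To be clear, no app store purl types exist in the purl specification today; but if one were defined, a purl for an app could be assembled just as easily as a purl for a package in a package manager. In the sketch below, the type name, app identifier and version are all invented for illustration.

```python
# Hypothetical sketch only: the "googleplay" purl type does not exist today, and
# the app identifier and version below are invented. The point is that, once a
# type is defined for a store, every product in that store instantly has a purl.
store_type = "googleplay"            # hypothetical purl type for the app store
app_id = "com.example.scadaviewer"   # the store's unique product identifier
version = "3.2.1"                    # the version string in the store listing

purl = f"pkg:{store_type}/{app_id}@{version}"
print(purl)  # pkg:googleplay/com.example.scadaviewer@3.2.1
```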

What about proprietary products that aren’t in app stores?

The great majority of proprietary software products are not available in app stores; instead, they are downloaded from the website of the developer or a distributor. How can purl be expanded to include them?

In the SBOM Forum’s 2022 paper, we provided a two-paragraph high level description of the purl solution we were suggesting for proprietary software, based on an idea of Steve Springett’s:

1.      When a developer releases a new software product or a new version of an existing software product, they will create a short document (called a tag) that provides important information on the product, especially the name, supplier and version string.

2.      When a user downloads that product from the developer’s website (presumably after paying for it), the user will also receive the tag; they can use the information in the tag to create the purl for the product (perhaps like the purl described above)[iv].

Since the supplier created the tag in the first place, when they report a vulnerability for the product to CVE.org, they should use a purl that includes the information from the tag. Thus, the purl created by the user will match the one created by the supplier, since they are both based on the same tag. When the user searches a vulnerability database using that purl, they are sure to learn about any vulnerabilities the supplier has reported for the product.

Rather than create our own format for the product information tag, Steve suggested that we use the existing SWID ("software identification") format. SWID is a specification, codified in the ISO/IEC 19770-2 standard (first published in 2009), that NIST championed. It was originally intended to replace CPE in the NVD and to be distributed with the binaries of a software product. However, it never gained much traction for that purpose, and in recent years NIST has dropped the idea of replacing CPE with SWID tags.

Steve realized that, since SWID is an existing standard and a lot of software products already have SWID tags (for example, for about two years Microsoft distributed SWID tags with all their new products and product versions), it would be better to use that format than to create a new one; this was especially important because the SWID format includes all the information required to create a usable purl. Steve defined a new purl type called "swid" and got it added to the purl specification in 2022. He also developed a tool that creates a purl based on the information in a SWID tag.[v]
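Here is a rough sketch of the idea: a purl is derived from the fields of a (made-up) SWID tag. The exact layout of the swid purl type is summarized from memory here, so treat the output format as an assumption and check it against the purl specification before relying on it.

```python
# Rough sketch: derive a purl from a SWID tag. The tag below is fictitious, and
# the assumed purl shape (pkg:swid/<entity>/<name>@<version>?tag_id=<tagId>)
# should be verified against the purl specification's swid type definition.
import xml.etree.ElementTree as ET
from urllib.parse import quote

SWID_NS = "{http://standards.iso.org/iso/19770/-2/2015/schema.xsd}"

swid_tag = """<SoftwareIdentity
    xmlns="http://standards.iso.org/iso/19770/-2/2015/schema.xsd"
    name="Example Enterprise Server" version="1.0.0"
    tagId="75b8c285-fa7b-485b-b199-4745e3004d0d">
  <Entity name="Example Corp" regid="example.com" role="softwareCreator tagCreator"/>
</SoftwareIdentity>"""

root = ET.fromstring(swid_tag)
entity = root.find(f"{SWID_NS}Entity")

purl = (
    f"pkg:swid/{quote(entity.get('name'))}/{quote(root.get('name'))}"
    f"@{root.get('version')}?tag_id={root.get('tagId')}"
)
print(purl)
# pkg:swid/Example%20Corp/Example%20Enterprise%20Server@1.0.0?tag_id=75b8c285-fa7b-485b-b199-4745e3004d0d
```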

However, our 2022 document didn’t address two important questions:

1.      For legacy products, if the supplier didn’t create a SWID tag originally, who should create one now? Presumably, it will be the current supplier of the product, even if the product has been sold to a different supplier in the meantime.

2.      How will the user of a product, for which the supplier has created a SWID tag, locate and access the tag? While the supplier could develop a mechanism through which a customer can automatically locate and download the tag from their website, there will soon be a much more universal method for discovering and accessing software supply chain artifacts: the Transparency Exchange API. This is being developed by the CycloneDX project. It will be fully released by the end of 2025, when it will also be approved as an ECMA standard.

How will all of this happen?

The OWASP SBOM Forum believes that, once purl can represent proprietary software products (after the required new types are implemented in the purl specification), the following set of steps[vi] will be set in motion:

1.      A “purl expansion working group” – including members from many different types of organizations – will meet regularly to work out required details for expansion of purl to proprietary software products. The group will publish these details (most likely as OWASP documents). The group will also:

a.      Recruit operators of app stores to participate in the purl community, along with creating a new purl type for each store and submitting the pull request to add that type to the purl specification; and

b.      Conduct tabletop exercises with software suppliers to test the formats and procedures required to implement the purl SWID tag program. This will include testing the purl SWID type definition. This definition was created more than two years ago, but it has only been tested by a few software developers. It needs to be subjected to broader “tabletop” testing.

2.      Private and governmental security organizations (including CVE.org) conduct awareness and training efforts covering the activities described in this paper, especially the development, distribution and use of SWID tags to create purls for proprietary software products. These efforts will target CNAs, software suppliers, security tool vendors, vulnerability database operators and larger end user organizations, including government agencies.

3.      Suppliers create SWID tags for their products, starting with new products and product versions and continuing with legacy products that do not yet have SWID tags.

4.      Suppliers make their SWID tags available through one (or more) of three channels: a) directly to customers, b) in a machine-accessible format on their website, and c) using the Transparency Exchange API, when it is available.

5.      After being trained in purl and the new purl types for proprietary software, CNAs start including purls in CVE records. The purls are based on the suppliers’ SWID tags.

6.      Vulnerability databases based on CVE records (perhaps including the NVD) advertise the fact that users can now find vulnerabilities in proprietary software using purl. They offer training materials (webinars, videos, website content and hard-copy publications) for users.

7.      Users begin to see the advantage of using purl. The primary advantage is that they can deploy fully automated tools for vulnerability identification without having to intervene regularly in the identification process, as is the case with CPE.

8.      As suppliers realize their SWID tags are being accessed by their customers, they also see this is giving them a small but tangible marketing advantage over competitors.

9.      Purl-based open source vulnerability databases see increased traffic once they start accepting the new purl types, as users realize they now have a “one-stop-shop” for identifying vulnerabilities in both open source and proprietary software.

10.   Operators of CPE-based vulnerability databases (especially the NVD) notice that not having to create at least one CPE for every new CVE record saves their staff a lot of time. They also notice that users of those databases are expressing more satisfaction with their experience, since a much higher percentage of the purls they enter are finding their match in the CVE records, than was the case when CPE was the only software identifier available to them.

11.   As CNAs begin to realize that users are taking purl seriously, they add more purls, and fewer CPEs, to CVE records.

12.   The above set of steps cycles continually, until growth of the overall vulnerability database “market” results in continuous growth of both purl and CPE, with roughly constant “market shares”.

The OWASP SBOM Forum is under no illusion that the above set of steps will be accomplished quickly, given the current rudimentary state of awareness of purl and its advantages. On the other hand, the fact that truly automated vulnerability management is currently almost impossible using the NVD makes it even more important that we start implementing a real solution to these problems now, while still hoping that the NVD will eliminate its huge backlog of unenriched CVE records in the coming year or two.

There is good reason to believe that if we start now, within 3-4 years purl will be widely accepted and used to identify vulnerable proprietary software products in most vulnerability databases. We say this because this will be the second time that purl has been quickly accepted. Here is the story of the first time:

Steve Springett[vii] states that in 2017 and 2018, purl had little traction in the open source world because it was so new. Steve's Dependency-Track and CycloneDX projects, along with Sonatype's OSS Index vulnerability database, were among the early adopters of purl in 2018. Yet purl was in wide use in the open source community by 2022. Steve points out that today, purl has been adopted by "most SCA vendors, hundreds of open source and proprietary tools, and multiple sources of vulnerability intelligence." I would add that purl is used today by literally every major vulnerability database worldwide, other than the NVD and databases based on NVD data. Indeed, purl has "won the war" when it comes to identifiers for open source software.

Of course, the world of proprietary software is quite different from the open source world, since the participating organizations are sometimes true competitors; that is not often the case with open source software. However, once new purl types are developed to allow identification of proprietary software, it should not require a heavy lift for databases now based on purl to accommodate those new types. This means that, soon after the new purl types for proprietary software have been incorporated into the purl specification, big purl-based vulnerability databases like OSV and OSS Index, which today only support open source software, may quickly support vulnerabilities in proprietary software products as well.

Looking ahead

The OWASP SBOM Forum has recently published a white paper that discusses all the above topics in more detail. It is available for download here. We are actively discussing these topics in our meetings and welcome new participants. Our meetings are every other Tuesday at 11AM ET and every other Friday at 1PM ET. To receive the invitations for these meetings, email tom@tomalrich.com.

We currently expect this to be a two-phase project:

1.      Planning and Design. This will consist of just the first of the above steps. We believe this phase will require no more than 4-5 months of bi-weekly meetings (plus online asynchronous work between meetings, including soliciting participation by app stores and software suppliers and conducting the tabletop exercise to test adequacy of the SWID purl type). This phase will require a modest budget for coordination of those activities.

2.      Rollout. All steps listed above other than the first are included in this phase. This phase can be summed up as “training and awareness”. While training and awareness activities are not inherently difficult, they require large numbers of people to be involved, both on the “trainer” and “trainee” sides. We estimate that this phase will require five to ten times the amount of resources required for the first phase.

We estimate that the first phase will require approximately $50,000 to $100,000 in funding, although we are willing to start work on this phase with less than that amount committed. Since the resources required for the second phase will depend on the design developed in the first phase, we will wait until at least a high-level design is available during the first phase, before estimating the second phase and seeking funding.

We invite all interested parties – including software developers, software security service and tool providers, and end users of software of all types – both to participate and to donate to this effort. Donations (both online and direct) over $1,000 can be made to OWASP and "restricted" to the SBOM Forum[viii]. Any such donations are very welcome (OWASP is a 501(c)(3) nonprofit organization, meaning many donations will be tax-deductible; however, it is always important to confirm this with tax counsel). To discuss a donation of any size, please email tom@tomalrich.com and tony@locussecurity.com.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Due to the NVD's current problems in creating CPEs, CISA has been designated an "Alternate Data Provider", which can create authoritative CPEs that have the same status as those created by the NVD contractors. CISA's "Vulnrichment" program has created many CPEs since that designation, but these are just a fraction of the number required to reduce the backlog.

[ii] Every purl begins with the prefix “pkg”. This prefix is not needed today, but will be in the future.

[iii] Many open source vulnerabilities are not reported to CVE.org, but instead to a vulnerability database like GitHub Security Advisories (GHSA). Many of these databases share their vulnerabilities with the OSV database (managed by Google), where they are displayed using the OpenSSF Vulnerability Format. Most OpenSSF vulnerabilities can be mapped to the CVE format.

[iv] Of course, the user should not have to create the purl manually; the process can be completely automated within a vulnerability management tool.

[v] Steve’s tool requires the user to manually input data from the SWID tag, but the code can of course be adopted for automated use by a vulnerability management tool.

[vi] These steps aren’t a “chain”, since they will ideally happen simultaneously, at least after an initial “startup” period. In general, each step listed depends on the previous step being accomplished.

[vii] In an email on October 19.

[viii] OWASP reserves ten percent of each “restricted” donation to fund administration. That is, OWASP doesn’t simply pass the donation through to the project team – in this case, the SBOM Forum. Instead, as the project team performs work or incurs other expenses on the project, they submit invoices to OWASP, which determines whether they are appropriate before paying them.

Wednesday, October 16, 2024

NERC CIP: What’s the difference between SaaS and BES Cyber Systems in the cloud?

My most recent post concluded with this paragraph:

But that doesn’t mean you have to stay away from the cloud altogether for six years. You can’t deploy medium or high impact systems in the cloud, but you can certainly use SaaS to perform the functions of medium or high impact systems. More on that topic is coming soon to a blog near you.

The post had already made it clear there’s no good way to deploy or utilize medium and high impact BES Cyber Systems (BCS), Electronic Access Control or Monitoring Systems (EACMS) and Physical Access Control Systems (PACS) in the cloud today. Why did I say you can use SaaS to perform the functions of those systems? Isn’t SaaS just software that the vendor has implemented in the cloud for other organizations to access? Why is that different from BCS in the cloud?

The difference is this: If a SCADA vendor implements their software in the cloud with the intention of having multiple users, none of the normal I/O that handles communications with substations and generating facilities will be implemented with it; this is because the I/O is always customer specific. This means the cloud implementation will not have an impact on the BES in 15 minutes or otherwise, so it will clearly not be a BCS. It will be SaaS, which is now “allowed” in the cloud.[i]

However, if the same vendor implemented their software in the cloud for a particular customer and implemented all the customer’s required I/O with it, that would be a BCS in the cloud. This isn’t currently “legal” for medium or high impact systems. Moreover, it will never be permitted until there is a major revision to the CIP standards (fortunately, this long process has at least started).

As I discussed in the previous post, there will still be a compliance obligation for the EMS-as-SaaS, since some of the data it utilizes will be BCSI. This means that, while the obligation to comply will fall entirely on the NERC entity, the SaaS provider will need to provide appropriate compliance evidence, which I described in the previous post. The NERC entity must also take account of the SaaS provider’s use of their BCSI in their CIP-011-3 R1 Information Protection Plan.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] It isn’t likely that most (or even any) SCADA implementations for electric utilities would tolerate not having direct I/O to substations and/or generating stations. Those communications usually need to be as real-time as possible. On the other hand, a renewables Control Center (which manages multiple wind and/or solar installations) will not usually require real-time communications.

Tuesday, October 15, 2024

NERC CIP: Who is responsible for compliance in the cloud?


I have heard NERC entities ask the question in the title at least a few times regarding cloud service providers (note that I am using this term broadly to include not just “Platform CSPs” but providers of cloud-based services like SaaS and security monitoring services). My guess is they’re doing this just to show they have a sense of humor, since the answer is very clear: The entity that is responsible for compliance with any CIP requirement, whether the systems in scope are deployed onsite, in a third party’s cloud, or both, is the entity that is listed in Section 4.1 of each currently enforced CIP standard. That section is titled “Functional Entities”.

Of course, you’ll note there is no Functional Entity called “CSP”. The only entity responsible for CIP compliance is you, Mr./Ms. NERC entity. Even if NERC decided tomorrow that CSPs need to comply with the CIP Reliability Standards, NERC has no authority to enforce such a decision, since its regulatory authority comes from FERC – and FERC has no authority over CSPs, even if they happen to serve NERC entities (should the FDA have authority over CSPs, just because the CSPs provide services to pharmaceutical manufacturers?).

However, saying that the CSP isn’t responsible for CIP compliance is not the same as saying the CSP has no role to play in CIP compliance. If the NERC entity entrusts workloads subject to CIP compliance considerations to a CSP, often only the CSP will be able to provide the evidence required for the NERC entity to prove compliance. But the NERC entity should never assume the CSP knows what evidence they are on the hook to provide, or that they have implicitly agreed to provide it. For the time being, the NERC entity should assume it’s necessary to explain to the CSP exactly what evidence they will need and when they will need it. This would ideally be done during contract negotiations.

Recently, I wrote a post stating there are only two types of workloads subject to CIP compliance that can be safely entrusted to the cloud today (meaning no compliance problems are likely to arise from doing so): BCSI used by a SaaS application and low impact Control Centers. I described in nausea-inducing detail what evidence should be required for each, although I need to point out that your mileage may vary, since I certainly don't know what evidence your auditor will require.

I also pointed out that, unlike for medium or high impact BCS, EACMS or PACS implemented in the cloud, a CSP should be able to provide this evidence without a lot of trouble. But I didn’t point out that I sincerely wonder what kind of response you’ll get when you ask your CSP to take these special measures on your behalf.

Even though I combined both SaaS providers (those that require access to BCSI) and platform CSPs under the “CSP” moniker at the beginning of this post, I’ll break the two categories apart now:

First, I think SaaS providers (who are providing evidence of compliance with CIP-004-7 Requirement R6 Part 6.1) are likely to agree to provide evidence, for two reasons:

1.      They’re a lot smaller than the platform CSPs, and

2.      If they need to utilize BCSI, they're obviously focused on power industry customers; they at least know that entities subject to NERC CIP compliance can make some strange requests for evidence. Rather than waste time trying to convince the entity that it doesn't need that evidence (which is guaranteed to be a losing battle), they should just do what they're asked to do. Fortunately, if one entity asks for certain evidence, other entities will as well, so the SaaS provider won't have to produce different documentation for each customer. NERC entities won't make outlandish requests of their SaaS provider unless they think it's likely their auditors will ask for that evidence.

However, platform CSPs (which will presumably be required to provide evidence regarding low impact Control Centers deployed on their platform) are a quite different story:

1.      For one thing, they’re huge; it’s going to be very difficult to get them to agree to do anything that’s not part of their normal services.

2.      For another – how can I say this? – while I haven't surveyed the platform CSPs on this issue, my guess is they're not very inclined to bend over backwards for a small sliver – electric utilities and IPPs subject to NERC CIP compliance – of a small industry, namely the electric power industry. In other words, if you're a NERC entity, I don't advise stomping on the floor and screaming bloody murder if you don't succeed in getting the CSP to do what you're asking. And certainly don't threaten to take your business elsewhere – it's likely to be counterproductive at best.

All this is to say that the chances of convincing a platform CSP to provide compliance evidence for even a low impact Control Center (LICC) in the cloud (and not much evidence is required in that case. I detailed what’s required of an LICC in the post linked above) are very small. Which is another reason why deploying medium or high impact BCS, EACMS or PACS in the cloud now is the stuff of fantasy.

The day will likely come when such systems can be safely deployed in the cloud while maintaining CIP compliance, but that will be under a different set of CIP standards - one in which cloud-based systems (perhaps called “Cloud BCS”) are subject to their own requirements. That day is 5-6 years away, although it’s good there’s now a Standards Drafting Team that’s at least starting the process.

But that doesn’t mean you have to stay away from the cloud altogether for six years. You can’t deploy medium or high impact systems in the cloud, but you can certainly use SaaS to perform the functions of medium or high impact systems. More on that topic is coming soon to a blog near you.

“CIP in the cloud” is one of the most important issues facing the NERC CIP community; its importance is increasing every day. If your organization is a NERC entity or a provider/potential provider of software or cloud services to NERC entities, I would love to discuss this topic with you. Please email me to set up a time for this.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Monday, October 14, 2024

How can we truly automate software vulnerability identification?

Given the proliferation of serious software vulnerabilities like the log4shell vulnerabilities in the log4j library, software vulnerability management is an important component of any organization’s security program. Successful vulnerability management starts with successful vulnerability identification. This requires that:

1.      The supplier of the software reports vulnerabilities they find in their products. These reports are incorporated into vulnerability databases, especially the US National Vulnerability Database (NVD). Almost all software vulnerabilities are reported by the supplier of the software, not a third party.

2.      Later, users of the software can search the NVD for new vulnerabilities that apply to software products they use. Learning about these vulnerabilities enables the user to coordinate with the suppliers of those products – to learn when the vulnerabilities will be patched and to encourage the suppliers to prioritize patches for the most important ones.

However, one important assumption underlies these two requirements: that the user will always be able to learn about vulnerabilities that apply to a product they use when they search a vulnerability database like the NVD. The user will only be able to do this if they know how the supplier has identified the product in the database.

It might seem like the solution to this problem is obvious: the supplier reports the vulnerability using the name of the product, and the user searches for that name. The problem is that software products are notorious for having many names, due to being sold under different brands or in different sales venues, acquisition by a different supplier, etc. Even within a large software supplier, employees may know the company's own products by different names. Trying to create – and especially maintain – a database that lists all the names for a particular software product would be hugely expensive and would ultimately fail, due to the rapidly increasing volume of new software products.

Given there will never be a definitive database of all the names by which a single software product is known, how can a user be sure their search will find the correct product in a vulnerability database? There needs to be a single machine-readable identifier for the product, which the supplier includes in the vulnerability report and the user searches for in the vulnerability database. We have already ruled out the idea of a centralized database that lists all the possible names for a single software product. How can we accomplish this goal without a central database?

The solution is for the identifier to be based on something that the supplier will always know before they report a vulnerability for their product, and that the user will also know (or can easily learn) before they search for that product in a vulnerability database. A good analogy for this is the case of the formula for a chemical compound.

If a chemist has identified a compound whose molecules consist of two hydrogen atoms and one oxygen atom, the chemist will write it as “H2O” (of course, the “2” is normally written as a subscript). Every other chemist will recognize that as water. Similarly, a compound of one sodium and one chlorine atom is NaCl, which is table salt. Note that all chemists can create and interpret these identifiers, without having to look them up in a central database. A chemist who reads “NaCl” always knows which compound that refers to.

There is a software identifier that works in the same way. It’s called “purl”, which stands for “package URL”. It is in widespread use as an identifier in vulnerability databases for open source software that is made available for download through package managers (these are the primary locations through which open source software is made available for download, although not all open source software is available in a package manager).

To create a purl for an open source product, the supplier or user only needs to know the product name, the version number (usually called a “version string”) and the package manager name (such as PyPI). Because every product name/version string combination will always be unique within one package manager (although the same product/version might be available in a different package manager), the purl that includes those three pieces of information is guaranteed to be unique; it is also guaranteed always to point to the same product, since the combination of product name and version string will never change for that product/version.

For example, the purl for version 1.11.1 of the package named “django” in the PyPI package manager is “pkg:pypi/django@1.11.1”. If a user wants to learn about vulnerabilities for version 1.11.1 of django in the pypi package manager, they will always be able to find them using that purl. If they upgrade their instance of django to version 1.12.1, they will search for “pkg:pypi/django@1.12.1” (the “pkg” field is found in all purls). Since the supplier will always use the same purl to report vulnerabilities, the user can be sure their search will find all reported vulnerabilities for that product/version.
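Here is a minimal sketch of what such an automated search can look like in practice, using Python's standard library to query the OSV database (osv.dev); it assumes, per the OSV API documentation, that the v1 query endpoint accepts a versioned purl inside the "package" object.

```python
# Minimal sketch of an automated, purl-based vulnerability lookup against the
# OSV database. Assumes the OSV v1 query endpoint accepts a versioned purl in
# the "package" object, per the OSV API documentation.
import json
import urllib.request

purl = "pkg:pypi/django@1.11.1"  # the purl the user constructed on their own

query = json.dumps({"package": {"purl": purl}}).encode("utf-8")
request = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=query,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.load(response)

# Each entry in "vulns" is a vulnerability record (CVE, GHSA, PYSEC, etc.) that
# the database has matched to this exact package and version.
for vuln in result.get("vulns", []):
    print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```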

Besides purl, the only vulnerability identifier in widespread use is CPE, which stands for “Common Platform Enumeration”. Without going into a lot of detail, CPE is the identifier used in the National Vulnerability Database. It was developed more than 20 years ago by the National Institute of Standards and Technology (NIST), which operates the NVD.

A CPE is created by a NIST employee or contractor and added to a vulnerability (CVE) record in the NVD. Unfortunately, there is no way that anyone can predict with certainty the CPE that this person will create. Some of the reasons why this is the case are described on pages 4-6 of the OWASP SBOM Forum’s 2022 white paper titled “A proposal to operationalize component identification for vulnerability management”.

Currently (as of the fall of 2024), there is an even more serious problem with CPE: since February, the NVD staff has drastically reduced the number of CPEs it creates. The result is that over two thirds of the new CVE records created in 2024 do not have a CPE name attached to them, which makes those CVEs invisible to automated searches using a CPE name. A user who searches with a CPE name today may never learn about two thirds of the vulnerabilities that apply to their product/version.

The upshot of this situation is that, if truly automated software vulnerability management is going to be possible again, purl needs to be the default software identifier, both in CVE records and the National Vulnerability Database. While most of the groundwork for achieving this result has already been laid, there remains one big obstacle: Currently, there is no workable way for purl to identify proprietary software. Since the majority of private and public sector organizations in the world rely primarily on proprietary software to run their businesses, this obstacle needs to be removed, so that users of proprietary software products can easily learn about vulnerabilities present in those products.

The OWASP SBOM Forum has identified two methods by which the purl specification can be expanded to make vulnerabilities in proprietary software products as easily discoverable as are vulnerabilities in open source products today. We will soon be starting a working group to address this problem. If you would like to participate in that group and/or provide financial support through a donation to OWASP, please email me.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. 

Sunday, October 13, 2024

NERC CIP: What is “legal” in the cloud today?


If you have been following this blog for – say – the last eight years or so, you probably know that a big problem in the world of NERC CIP compliance is the fact that NERC entities are severely limited in the kinds of workloads they can implement or utilize in the cloud. While this has been the case for many years, the problem grows more acute all the time, as more software products and security services announce that they, or most of their upgrades and enhancements, will henceforth be available only in the cloud.

As you may also know, a new NERC Standards Drafting Team (SDT) is now meeting to consider what changes may be required to the CIP standards in order to fix this problem. However, they have a long road ahead of them, as I described in this post in January. I doubt the final set of new or revised CIP standards will become mandatory sooner than 5-6 years from today. This isn’t because NERC is dilatory, but because the NERC standards development process includes many steps designed to ensure that NERC members (as well as members of the public) can participate at every stage.

So, the good news is the new (or revised) “cloud CIP” standards are guaranteed to be well thought out. The bad news is this will take a long time. I’m sure many NERC entities want to make more use of the cloud now, but are being held back by uncertainty over what exactly is “legal” today - and especially how they will prove at their next audit that they are still compliant.

I must admit that I can only find two use cases in which I am sure that a NERC entity will be found compliant if they utilize the cloud today (although in both cases there’s a catch, which I’ll describe below).

The first of these is low impact BES Cyber Systems in the cloud, and especially low impact Control Centers.[i] This post describes how – after I was initially skeptical that it’s possible for a CSP to provide evidence of compliance with CIP-003-8 Requirement R2, especially Section 3 of Attachment 1 – a retired CIP auditor convinced me that in fact this is possible[ii]. However, just because it’s possible doesn’t mean that NERC entities with a low impact Control Center are going to rush to redeploy it in the cloud today. See below.

The second use case is BCSI (BES Cyber System Information) in the cloud. Since BCSI is only defined for information regarding medium and high impact BCS, EACMS (Electronic Access Control or Monitoring Systems) and PACS (Physical Access Control Systems), this isn’t a low impact problem. BCSI in the cloud was effectively verboten before January of this year, but the “BCSI-in-the-cloud” problem was in theory solved when CIP-004-7 and CIP-011-3 came into effect on January 1. Why do we need to discuss this now?

It’s because, unfortunately, the single new BCSI requirement, CIP-004-7 Requirement R6, was not written with the most important use case for BCSI in the cloud in mind: SaaS that needs access to BCSI. Instead, the requirement was written for simple storage of BCSI in the cloud. However, why would any NERC entity bother to store their BCSI in the cloud? BCSI is almost never voluminous, and on-premises BCSI can usually be easily (and inexpensively) enclosed within the NERC entity’s ESP (Electronic Security Perimeter) and PSP (Physical Security Perimeter), with zero compliance risk.

If a SaaS application for, say, configuration or vulnerability management requires access to BCSI, the wording of the new CIP-004-7 Requirement R6 Part 6.1.1 poses a problem. Here’s a little background:

The first sentence of Requirement R6 reads, “Each Responsible Entity shall implement one or more documented access management program(s) to authorize, verify, and revoke provisioned access to BCSI…”

The second and third sentences read, “To be considered access to BCSI in the context of this requirement, an individual has both the ability to obtain and use BCSI. Provisioned access is to be considered the result of the specific actions taken to provide an individual(s) the means to access BCSI (e.g., may include physical keys or access cards, user accounts and associated rights and privileges, encryption keys).”

In other words, an individual is considered to have “provisioned access” to BCSI when it is possible for them to view the unencrypted data, regardless of whether or not they actually do so. Therefore, if the person has access to encrypted BCSI but also has access, however briefly, to the decryption key(s), they have provisioned access to BCSI, even if they never view the unencrypted data.

Requirement R6 Part 6.1.1 requires that the NERC entity’s access management program must “Prior to provisioning, authorize…based on need, as determined by the Responsible Entity…Provisioned electronic access to electronic BCSI.” In other words, the entity’s BCSI access management program needs to specifically address how individuals will be granted provisioned access.

Note that, if the case for BCSI in the cloud were simply storage of the encrypted BCSI, there wouldn’t be any question regarding provisioned access. No CSP employee should ever need access to the decryption keys for data that is merely stored in the cloud; the NERC entity would retain full control over the keys the entire time that the BCSI was stored in the cloud.

However, if a SaaS application needs to process BCSI, it will normally require that the BCSI be decrypted first. There is a technology called “homomorphic encryption” that enables an application to work with encrypted data without decrypting it, but unless the application already supports it, that capability is unlikely to be available. Thus, an employee of the SaaS provider (or perhaps of the platform CSP on which the SaaS resides) will need provisioned access to BCSI, if only for a few seconds.
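To make the distinction concrete, here is a minimal sketch in Python (using the third-party “cryptography” package; the names and data are illustrative). For storage-only BCSI, the decryption key never leaves the NERC entity, so no CSP employee can both obtain and use the BCSI. For SaaS processing, the data must be decrypted somewhere on the provider’s side, which is where provisioned access comes in.

```python
# Minimal sketch: storage-only BCSI vs. SaaS processing of BCSI.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # stays on premises with the NERC entity
cipher = Fernet(key)

bcsi = b"hypothetical BCSI: ESP diagram, device inventory, ..."
ciphertext = cipher.encrypt(bcsi)

def upload_to_cloud_storage(blob: bytes) -> None:
    """Placeholder for an upload call; only ciphertext ever leaves the entity."""
    pass

# Storage-only case: the CSP holds ciphertext but never the key, so no CSP
# employee has the ability to obtain *and use* the BCSI (no provisioned access).
upload_to_cloud_storage(ciphertext)

# SaaS processing case: before the application can work with the data, it must
# be decrypted on the provider's side; someone or something there holds the
# key, however briefly, and that is provisioned access under CIP-004-7 R6.
plaintext = cipher.decrypt(ciphertext)
```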

If the NERC entity needs to authorize provisioned access for those cloud employees, that’s a problem, since it would probably require the SaaS provider to get the permission of every NERC CIP customer whenever they want a new or existing employee to receive provisioned access to BCSI. In fact, each customer would need to give authorization for each individual employee that receives provisioned access; it can’t be granted to, for example, every CSP employee who meets certain criteria.

Last winter, there was some panic over this issue among NERC Regional Entity (ERO) staff members, along with suggestions that this issue needs to be kicked back to the new “cloud” SDT – which would mean years before it is resolved. However, it now seems that, if a NERC entity has signed a delegation agreement with the SaaS provider (or the CSP), that might be considered sufficient evidence of compliance.

But how can the NERC entity be sure this is the case? Currently, they can’t, since even NERC ERO-endorsed “Implementation Guidance” isn’t binding on auditors (officially, they have to “give deference” to it, whatever that means). However, the closest thing to a document that commits the auditors to supporting a particular interpretation of a requirement is a “CMEP Practice Guide”. Such a guide must be developed by a committee of Regional auditors, although they are allowed to take input from the wider NERC CIP community.

If a CMEP Practice Guide is developed for BCSI in the cloud, it is likely (in my opinion) that it would recommend that a NERC entity that wishes to use a SaaS product requiring access to BCSI sign a delegation agreement with the SaaS provider. Of course, the entity would do this to demonstrate compliance with CIP-004-7 Requirement R6 Part 6.1.

I’ve just described the two use cases in which I think cloud use for NERC CIP workloads is “legal” for NERC entities today. However, being legal doesn’t mean the NERC entity’s work is done. To prove compliance in either of these cases, the entity will need to get the CSP to cooperate with them and provide certain evidence of actions they have taken regarding the CIP requirements in scope for that use case.

In the case of a low impact Control Center in the cloud, the CSP will need to provide evidence that:

1.      The CSP has documented security policies that cover the topics in CIP-003-8 Requirement R1 Part R1.2.

2.      The CSP has documented security plans for low impact BCS that include each of the five sections of CIP-003-8 Requirement R2 Attachment 1 (found on pages 23-25 of CIP-003-8). Since sections 1, 2, 4 and 5 all require policies or procedures, and since it is likely that most CSPs will already have these in place as part of their compliance with a standard like ISO 27001/2, proving compliance in those cases should not be difficult.[iii]

3.      The NERC entity's cloud environment permits “only necessary inbound and outbound electronic access as determined by the Responsible Entity for any communications that are…between a low impact BES Cyber System(s) and a Cyber Asset(s) outside the asset containing low impact BES Cyber System(s)”, as required by Section 3 of Attachment 1. This section is a little more difficult, since it is a technical requirement, not a policy or procedure. On the other hand, demonstrating compliance with it should be quite simple: Kevin Perry pointed out to me that the NERC entity will normally control electronic access in their own cloud environment, so they won't need the CSP to provide this evidence; they can gather it themselves, as the sketch below illustrates.
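For example (and this is purely an illustration, assuming an AWS deployment and the boto3 SDK; the group ID and output format are hypothetical, and nothing in CIP-003-8 prescribes a particular form of evidence), the entity could periodically export the inbound and outbound rules governing its low impact BCS environment and retain the snapshot as audit evidence.

```python
# Minimal sketch: export security-group rules as evidence of "only necessary
# inbound and outbound electronic access". Assumes AWS and boto3; the group ID
# and file name are hypothetical.
import json
import boto3

ec2 = boto3.client("ec2")
resp = ec2.describe_security_groups(GroupIds=["sg-0123456789abcdef0"])

evidence = []
for sg in resp["SecurityGroups"]:
    evidence.append({
        "group": sg["GroupId"],
        "inbound": sg["IpPermissions"],         # permitted inbound rules
        "outbound": sg["IpPermissionsEgress"],  # permitted outbound rules
    })

# Keep a dated snapshot for the next audit.
with open("attachment1-section3-evidence.json", "w") as f:
    json.dump(evidence, f, indent=2, default=str)
```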

In the use case of BCSI in the cloud, the NERC entity will need to provide evidence that they signed a delegation agreement for authorization of provisioned access to BCSI with the SaaS provider. The entity will also need to provide evidence that the SaaS provider complied with the terms of the agreement whenever they authorized provisioned access to the entity’s BCSI (which will hopefully not be a very frequent occurrence). I believe that evidence will need to include the name of each individual authorized, as well as when they were authorized.

In both use cases, it should not be difficult for the CSP or SaaS provider to furnish this evidence, although it will most likely require negotiating contract terms to ensure they do so. Will they agree to this? I hope so.

I’ve identified two cloud use cases that are “legal” today. Are there any others? I really don’t think so, although if anybody knows of one, I’d be pleased to hear about it. It seems to me that all other cloud use cases won’t work today, mainly because they require deploying or utilizing high or medium impact systems in the cloud.

If a CSP hosted such systems, they would need to provide the NERC entity with evidence of compliance with most of the CIP requirements and requirement parts that apply to high or medium impact systems. For example, they would need to implement a Physical Security Perimeter and an Electronic Security Perimeter in their cloud. Implementing either of those is impossible for a CSP, unless they’re willing to break the cloud model and constrain the entity’s data to reside on a single set of systems in a locked room, with access controlled and documented.

If they’re going to do that, most of the advantages of cloud use go away, which raises the question of why any NERC entity would pay the higher cost they would likely incur for putting these systems in the cloud. I don’t think any would, which is of course why I doubt there are any high or medium impact BCS, EACMS or PACS deployed in the cloud today.

However, there may be some hope regarding EACMS in the cloud, which may be the most serious of the CIP/cloud problems. Some well-known cloud-based security monitoring services are effectively off limits to NERC entities with high or medium BCS, because their service is considered to meet the definition of EACMS: “Cyber Assets that perform electronic access control or electronic access monitoring of the Electronic Security Perimeter(s) or BES Cyber Systems. This includes Intermediate Devices.” In other words, these services are considered cloud-based EACMS, making them subject to almost all of the medium and high impact CIP requirements.

Some current and former NERC CIP auditors are wondering whether the term “monitoring” in the EACMS definition is currently being interpreted too broadly by auditors. If the standard interpretation of that term (which doesn’t have a NERC Glossary definition) were narrowed, those auditors believe there would be no impact on on-premises security monitoring systems, while cloud-based monitoring systems would be less likely to be identified by auditors as EACMS.

If this could be accomplished (perhaps with another CMEP Practice Guide), it would be a significant achievement, since it would allow NERC entities to start using cloud-based security services they currently cannot use. That would increase the security of the Bulk Electric System without requiring NERC entities to wait 5-6 years for the full set of “cloud CIP” requirements to come into effect.

“CIP in the cloud” is one of the most important issues facing the NERC CIP community today, and its importance is increasing every day. If your organization is a NERC entity or a provider/potential provider of software or cloud services to NERC entities, I would love to discuss this topic with you. Please email me to set up a time for this.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I don’t think it is likely that NERC entities will deploy workloads from either substations or synchronous generating stations in the cloud, since both of those environments require very low latency.

[ii] Of course, low impact systems are subject to compliance with other CIP requirements and requirement parts as well (e.g., having an incident response plan and physical security controls), but most CSPs should have no problem providing evidence for those. 

[iii] There is currently no NERC policy stating that, for “policies or procedures” requirements like these, pointing to where the substance of the requirement is addressed in ISO 27001 (or any other certification) is sufficient evidence of compliance. (Note that FedRAMP is an authorization for certain federal agencies to utilize the service in question; it is not a certification.) However, I would hope it would not be a heavy lift for NERC to create such a policy, perhaps in a CMEP Practice Guide.