Sunday, June 8, 2025

Rules for the Global Vulnerability Database


I recently described my idea for a Global Vulnerability Database. The GVD won’t be a database at all, but rather an “intelligent switching hub” that accepts vulnerability queries that are in the form:

“What Vulnerabilities are found in Product ABC?”, or

“What Products are affected by Vulnerability 123?”

The Product and Vulnerability fields are both intended to be as universal as possible; that is, they should accept all major machine-readable identifiers. For example, the Vulnerability field will accept CVE, OSV, GHSA (GitHub Security Advisory), and other vulnerability identifiers. The Product field will accept CPE, purl, OSV, and perhaps other product identifiers.
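To make this concrete, here is a rough Python sketch (my own illustration, not an actual GVD design) of how a front end might guess which identifier scheme a query uses before routing it; the function name and the category labels are hypothetical.

```python
import re

def classify_identifier(ident: str) -> str:
    """Guess which identifier scheme a query string uses.

    Hypothetical helper for illustration only; a real GVD front end would
    need many more rules (OSV IDs alone use dozens of ecosystem prefixes).
    """
    ident = ident.strip()
    if re.match(r"^CVE-\d{4}-\d{4,}$", ident, re.IGNORECASE):
        return "vulnerability/cve"
    if ident.upper().startswith("GHSA-"):
        return "vulnerability/ghsa"
    if ident.startswith("pkg:"):                  # every purl starts with the "pkg" scheme
        return "product/purl"
    if ident.startswith(("cpe:2.3:", "cpe:/")):   # CPE 2.3 formatted string or 2.2 URI
        return "product/cpe"
    return "product/free-text"                    # fall back to a plain name search

print(classify_identifier("CVE-2025-12345"))       # -> vulnerability/cve
print(classify_identifier("pkg:pypi/django@5.2"))  # -> product/purl
```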

While this was not always the case, it is safe to assume that every major vulnerability database today accepts and/or outputs machine-readable vulnerability identifiers, product identifiers, or both. However, in this regard there are two important differences between the GVD and other vulnerability databases:

1.      With one notable exception[i], it is unlikely there is any vulnerability database today that, in response to a query for vulnerabilities that affect Product ABC, will provide more than one type of vulnerability identifier - for example, both CVE and GHSA. Moreover, with the same exception, it is unlikely there is any vulnerability database today that, in response to a query for products that are affected by a particular vulnerability (e.g., CVE-2025-12345), will provide more than one type of product identifier, e.g., purl and CPE. This is because most vulnerability databases are designed to associate a single type of product identifier with a single type of vulnerability identifier. For example, the NVD only associates CPE names for products with CVE numbers for vulnerabilities; the OSS Index open source database only associates purl identifiers with CVE numbers; etc.

2.      It is also safe to say there is no vulnerability database today that will respond to a query like “Show me vulnerabilities of all types that affect Product ABC”, by displaying all major types of vulnerability identifiers. It’s also safe to say there’s no vulnerability database today that will respond to a query like, “Show me products of all types that are affected by CVE-2025-12345”, by displaying all major types of product identifiers. Yet, my ambition is that the GVD will do both of those things.

However, there is a potential fly in this ointment: There is no way to create an unambiguous mapping either between different types of vulnerability identifiers (e.g., CVE to OSV) or different types of product identifiers (e.g., CPE to purl). Here are several examples:

A. Most vulnerabilities are assigned to products as part of a coordinated vulnerability disclosure process. For example, an open source project (“Project 1”) might report a new vulnerability they have identified in their product to the CVE Program. A CVE Numbering Authority (CNA) will create a new CVE record for the vulnerability and assign it a CVE number like CVE-2024-56789. If the project team also registers the new vulnerability with GitHub, it will receive a GHSA identifier as well. Given that the same team is responsible for both registrations for the vulnerability (CVE and GHSA), the two registrations will usually be considered to identify the same vulnerability.

B. However, if a separate open source project registers a similar vulnerability as a GHSA and asserts it is the same as the vulnerability described in CVE-2024-56789, this assertion may meet with skepticism in the CVE Program, since the two registrations were not by the same team. Since there is no easy way to resolve a dispute like this, the only safe policy is to accept two registrations as being for the same vulnerability only if they were both created by the same organization or person. If that is not the case, the two registrations need to be considered different vulnerabilities.

C. Libraries are widely used by both open source and commercial developers. Usually, a vulnerability will be present in just one module of a library, not all of them. However, since CPE names identify the product that contains the vulnerability, and the library itself is the product, a CPE name will not usually refer to the vulnerable module[ii].

By contrast, purl (“package URL”) identifies a package. Since each module of a library is its own package, this makes it possible to identify the location of a vulnerability with much more precision.[iii] Thus, there can be no CPE “equivalent” of a purl that references a single library module.

 

To produce this blog, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome, but I consider anyone who donates $25 or more per year to be a subscriber. Thanks!

 

The primary lesson to be drawn from the above examples is that, because there are so many reasons why one type of vulnerability or product identifier will not be “translatable” to another type, it would be a bad idea to try to “harmonize” the identifiers into one type – for instance, make purl the “universal” product identifier or CVE the “universal” vulnerability identifier, with all other identifiers “translated” to one or the other. On the other hand, when a vulnerability database user might benefit from learning about a vulnerability or vulnerable product that is similar to the one named in their query, the GVD will usually provide both the exact match and the similar one.

This means that, even though the user will usually enter a straightforward query that lists just one or two product identifiers, the response will not necessarily be limited to the same identifiers. The GVD will always assume that the user is interested in seeing as much relevant information as possible, even if they end up discarding some of what they are shown.[iv]

Here are two examples of how a single query might work:

Query 1: “What current vulnerabilities have been identified in the open source project Django version 5.2?”

The query is parsed into three queries to three vulnerability databases:

·        To the NVD: “What vulnerabilities affect Django version 5.2?” The response to this query is this list of four CVE numbers. Each of those can be queried separately for more information on the vulnerability.

·        To GitHub Advisory Database (GAD): “What vulnerabilities affect Django version 5.2?” The response to this query is this list of two CVE numbers, which are both included in the NVD response. The first of the two CVEs corresponds to the GitHub ID GHSA-7xr5-9hcq-chf9, which can be searched on separately. The second CVE corresponds to GHSA-8j24-cjrq-gr2m, which can also be searched on separately.   

·        To Sonatype OSS Index: “What vulnerabilities apply to purl pkg:pypi/django@5.2?”[v] The response to this query is this list of two CVEs. These are the same CVEs shown by the GitHub Advisory Database. However, clicking on either of the CVE lines provides additional information not provided by either the NVD or GAD.

All three results will be provided to the user, as well as results from queries to any other vulnerability database, such as OSV, if different results are obtained. Note that, while the NVD and GAD queries are identical, the OSS Index query uses the purl for Django v5.2.[vi]
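For readers who like to see things in code, here is a minimal sketch of the kind of query “fan-out” I have in mind for Query 1. It is only an illustration under my own assumptions: the NVD call uses NIST’s public CVE API 2.0, the GitHub Advisory Database and OSS Index adapters are left as unimplemented placeholders (each has its own API), and the CPE name shown is illustrative rather than authoritative.

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def query_nvd_by_cpe(cpe_name: str) -> list[str]:
    """Return the CVE IDs the NVD lists for one CPE name (public NVD CVE API 2.0)."""
    resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

def query_github_advisories(package: str, version: str) -> list[str]:
    """Placeholder: would call the GitHub Advisory Database (GraphQL API) and
    return GHSA IDs plus any CVE IDs they reference."""
    raise NotImplementedError

def query_oss_index(purl: str) -> list[str]:
    """Placeholder: would call Sonatype OSS Index with the purl and return
    whatever vulnerability IDs it reports."""
    raise NotImplementedError

def fan_out_product_query(cpe_name: str, purl: str, package: str, version: str) -> dict:
    """One user query becomes several database-specific queries; each result
    set is kept separate rather than merged."""
    results = {"nvd": query_nvd_by_cpe(cpe_name)}
    for name, func, args in [("github", query_github_advisories, (package, version)),
                             ("oss_index", query_oss_index, (purl,))]:
        try:
            results[name] = func(*args)
        except NotImplementedError:
            results[name] = []   # adapter not sketched here
    return results

# The CPE name below is illustrative; the authoritative name comes from the NVD's CPE dictionary.
print(fan_out_product_query("cpe:2.3:a:djangoproject:django:5.2:*:*:*:*:*:*:*",
                            "pkg:pypi/django@5.2", "django", "5.2"))
```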

Query 2: “What products are affected by CVE-2021-45046?”

The query is parsed into two queries to two vulnerability databases:

·        To the NVD: “What products are affected by CVE-2021-45046?” The response to this query identifies twelve “Known affected software configurations”, which among them list over 50 CPE names.

·        To GitHub Advisory Database: “What products are affected by CVE-2021-45046?” The response to this query illustrates the fact that there is not always a list of machine-readable software identifiers available. The primary feature of this page is the set of references – security advisories by various developers and manufacturers, including patch URLs. These references need to be parsed “manually”.

Of course, even though the response from the NVD includes machine readable software identifiers and the response from the GAD does not, that doesn’t mean the two responses should not be displayed together. Both responses provide a set of references; it is unlikely that the two sets are identical. Since most queries about CVE-2021-45046 are probably motivated by a search for a patch (this is one of the vulnerabilities associated with the log4shell vulnerability in the log4j library), users will want to see as many references as possible. 
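The reverse lookup in Query 2 can be sketched the same way. Again, this is just an illustration; it assumes the response layout of the NVD’s public CVE API 2.0, in which each record’s “known affected software configurations” carry the CPE names.

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cpes_affected_by(cve_id: str) -> list[str]:
    """Collect the CPE names listed in a CVE record's known affected software
    configurations (layout assumed from the public NVD CVE API 2.0)."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    cpes = []
    for vuln in resp.json().get("vulnerabilities", []):
        for config in vuln["cve"].get("configurations", []):
            for node in config.get("nodes", []):
                for match in node.get("cpeMatch", []):
                    cpes.append(match["criteria"])
    return cpes

print(len(cpes_affected_by("CVE-2021-45046")))   # dozens of CPE names for log4j
```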

The moral of this story is that a query to the Global Vulnerability Database will usually yield multiple responses. These will include

1.      Responses from databases other than the one originally intended in the query, as well as

2.      Responses generated from queries using identifiers that are similar to, but not the same as, the identifier used in the query.

Of course, the additional queries will not be generated by some mechanistic process, but rather by an intelligent process that will run in the “front end” of the GVD. Does this mean that the front end will run a large language model (i.e., generative AI)? No. My opinion (which I’ll be glad to discuss with anybody who thinks differently) is that the decisions on alternative queries in the GVD need to be based on a set of identifiable rules that can be audited.[vii]
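To illustrate what I mean by identifiable, auditable rules, here is a toy sketch. The rule IDs, descriptions and data shapes are entirely hypothetical; the point is that every rule has a stable identifier and every decision can be traced back to the rules that fired.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    rule_id: str        # stable ID, so every change to a rule can be tracked and audited
    description: str
    applies: Callable[[dict], bool]
    expand: Callable[[dict], list[dict]]

# Hypothetical examples of the kind of rules the GVD front end might follow.
RULES = [
    Rule("R-001",
         "A purl-based product query also generates a CPE-based NVD query "
         "when a matching CPE name can be found.",
         applies=lambda q: q["id_type"] == "purl",
         expand=lambda q: [{"db": "nvd", "id_type": "cpe", "value": q.get("cpe_equivalent")}]),
    Rule("R-002",
         "A CVE-based vulnerability query also generates a GitHub Advisory "
         "Database query for GHSAs that reference the same CVE.",
         applies=lambda q: q["id_type"] == "cve",
         expand=lambda q: [{"db": "github", "id_type": "cve", "value": q["value"]}]),
]

def expand_query(query: dict) -> list[dict]:
    """Apply every rule that matches; record which rules fired so the decision is auditable."""
    extra, fired = [], []
    for rule in RULES:
        if rule.applies(query):
            extra.extend(rule.expand(query))
            fired.append(rule.rule_id)
    print("rules fired:", fired)
    return extra

expand_query({"id_type": "cve", "value": "CVE-2021-45046"})
```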

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And please donate as well!


[i] The exception is the OSV vulnerability database.

[ii] In some cases, the person who creates the CPE name creates a “product name” that includes the names of both the library and the vulnerable module. However, there is no consistent procedure for doing this, so it cannot be used for an automated response.

[iii] Because software developers often do not install library modules that are not directly used by their product, a lot of patches for libraries are issued and applied needlessly, since the vulnerable module was never included in the product in the first place. This was the case with the log4shell vulnerability in the log4j library.

Log4shell affected just the log4j-core module, meaning any developer that had not installed that module didn’t need to patch the library. However, since vulnerability advisories that referred to the CPE name (and thus designated only the log4j library as vulnerable, not the log4j-core module) didn’t capture this subtlety, many developers probably fell into this category.

[iv] Since some users will not be interested in seeing close matches, a GVD user will be able to suppress display of any match except an exact one. In that case, the output they receive will be close to what they will receive from a search on a single database.

[v] A purl can be easily created using a simple formula and information that a user should have readily available (or else be able to find quickly). In this case, the user just needs to know the package name, version number, and the repository from which they downloaded the package. The repository (known as the purl “type”) is PyPI, which stands for Python Package Index.

[vi] Every purl has a “type” that usually indicates the repository from which the software was downloaded. The purl in this example has the type “pypi”, which refers to PyPI, the Python Package Index. If Django is not available in repositories other than PyPI, there is only one possible purl to use in a search for Django in OSS Index. However, if Django were available in other repositories (e.g., package managers), each of those could be used for a separate search in OSS Index, by simply replacing “pypi” with the type for the other package manager and re-running the search.

While it might seem odd to search the same vulnerability database multiple times for the same product name and version number, there is a good reason for doing this: there can be no assurance that a vulnerability that applies to a particular product/version in one package manager will also apply to the “same” product/version in a different package manager. In other words, purl treats products with the same name and version number as different products if they are found in different repositories.
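To show how simple the formula in footnote [v] is, and how footnote [vi]’s repository swap works, here is a rough sketch; it ignores the purl spec’s namespace, qualifiers and encoding rules, and the second example is hypothetical, since Django itself is distributed through PyPI.

```python
def simple_purl(pkg_type: str, name: str, version: str) -> str:
    """Build a basic purl from the three pieces footnote [v] mentions.

    The full purl spec also allows a namespace, qualifiers and a subpath,
    and has per-type rules (e.g., PyPI names are lowercased); this sketch
    ignores most of those details.
    """
    return f"pkg:{pkg_type.lower()}/{name.lower()}@{version}"

print(simple_purl("pypi", "Django", "5.2"))    # -> pkg:pypi/django@5.2

# Footnote [vi]: searching another repository just means swapping the type
# (hypothetical example, shown only to illustrate the mechanism).
print(simple_purl("conda", "Django", "5.2"))   # -> pkg:conda/django@5.2
```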

[vii] This is like an early type of AI called “expert system”. These systems were literally created by interviewing an expert in a certain process (e.g., operation of a machine in a manufacturing plant) and codifying their advice into a set of rules. A simulation of the process would then be run, governed by these rules; the rules would be iteratively tweaked to improve the outcome of the process. After the process was running smoothly in the simulation, the rules would then be tested on the physical process itself.

The most important aspect of this procedure was that any change in the rules could be audited. If a rule was changed but that didn’t improve the process, the change would be backed out and a different change would be tried.

Wednesday, June 4, 2025

What assets should be covered in “Cloud CIP”?

 

Before the NERC “Risk Management for Third-Party Cloud Services” Standards Drafting Team can start drafting new or revised standards to make full use of the cloud available for NERC entities subject to compliance with the CIP cybersecurity standards, there are some fundamental questions they need to address. The most fundamental of these is what assets are in scope for compliance with those new or revised standards. It goes without saying that, if you can’t identify what you’re protecting, you can’t determine what protections are necessary.

Each of the current CIP standards states that it applies to BES Cyber Systems (BCS) “and their associated BES Cyber Assets” (BCA). A BCS is defined simply as a grouping of BES Cyber Assets (BCA), which are in turn defined as a special type of Cyber Asset that meets a long definition. A Cyber Asset is a “programmable electronic device”, which means a physical device, not a virtual one. Because a BCS is defined by the Cyber Assets that comprise it, this means the term can only apply to on-premises systems, not systems based in the cloud. There is no way that an individual NERC entity can track, let alone protect, individual devices in cloud data centers.

The SDT’s first step (and one I believe they have already taken) is to determine whether they want their new or revised CIP standards to address protection of both on-premises and cloud-based systems, or just the latter. Fortunately, I believe the SDT has already decided to have two “tracks” in the new CIP standards: one for on-premises assets and one for cloud-based assets.

That is a wise decision, since the 2018 experience of the CIP Modifications drafting team (at least one member of which is also on the Cloud CIP team) demonstrated that it is important not to change the CIP requirements for existing systems unless there is some good reason to do so. There isn’t a good reason in this case. While there are problems with the existing CIP standards, some of them are already being addressed by other drafting teams. The rest should be dealt with separately, not as part of the cloud effort.

Given that the new or revised standards will address only cloud-based assets, the question becomes what those assets should be. Just as in the current CIP standards, there will need to be multiple types of assets, but they will all be cloud-based. While I don’t think it would be a good idea to simply make “cloud versions” of each current CIP asset type (BCS, EACMS, PACS, PCA, etc.), I do think the best place to start would be with BES Cyber System, since that concept has been the foundation of the standards since CIP version 5 came into effect in 2016.

As I’ve just said, the current BCS definition implicitly refers to physical assets, so it can’t be used as the basis for requirements for cloud-based assets. How can we abstract physical assets (specifically, Cyber Assets) from the BCS definition, so we are left with something like “Cloud BCS”?

A BES Cyber System is defined as “One or more BES Cyber Assets logically grouped by a responsible entity to perform one or more reliability tasks for a functional entity.” This means that the essence of BCS is to be found in the BCA definition. I won’t repeat that whole definition here, but its most important component is the “15-minute rule”, which states that a BCA’s loss or compromise must “adversely impact” the Bulk Electric System within 15 minutes. Any Cyber Asset that doesn’t meet that criterion is not a BES Cyber Asset.

This means that, to track with the meaning of BCS as closely as possible without falling into the trap of referring to physical devices, a Cloud BCS needs to be defined as something like “A cloud-based system that if rendered unavailable, degraded, or misused would, within 15 minutes of its required operation, misoperation, or non-operation, adversely impact one or more Facilities, systems, or equipment, which, if destroyed, degraded, or otherwise rendered unavailable when needed, would affect the reliable operation of the Bulk Electric System. Redundancy of affected Facilities, systems, and equipment shall not be considered when determining adverse impact.”

While Cloud BCS isn’t the only type of asset that should be in scope for the Cloud CIP requirements, I believe it should be one of them. After all, if cloud-based assets are going to monitor and/or control the BES, they need to be in scope for at least some CIP standards. And, if cloud-based assets aren’t going to be permitted to monitor or control the BES, why is this drafting team even bothering to meet? They might as well say there’s no reason to change the current situation, in which cloud-based assets that monitor and/or control the BES are effectively forbidden, at least at the medium and high impact levels. 

To produce this blog, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome, but I consider anyone who donates $25 or more per year to be a subscriber. Thanks!

However, there is another type of asset that is found only in the cloud and that also needs to be defined for Cloud CIP (including having requirements that apply only to it): SaaS (software-as-a-service). Of course, not all SaaS is in scope for CIP, but I think any SaaS that is used in the processes of monitoring or controlling the BES should be in scope for Cloud CIP[i]. Perhaps the term might be “BES SaaS”.

Now the question becomes, “What is the difference between BES SaaS and Cloud BCS, since they’re both used in the processes of monitoring or controlling the BES?” I think the difference should be sub-15-minute impact. For example, let’s say a SaaS product analyzes power flows on the grid. When it notices a serious issue, it can change a relay setting without human intervention. Since that change will take effect immediately, the SaaS product clearly has a sub-15-minute impact on the BES; therefore, the SaaS product is a Cloud BES Cyber System.

Now, let’s say the same SaaS product has no connection to a relay. Instead, when it notices a serious issue, it suggests a new relay setting for an engineer working in a Control Center. It will be up to the engineer to decide whether to implement the suggested setting. In this case, there is no 15-minute impact. The SaaS is BES SaaS and should be subject to whatever requirements for BES SaaS are included in Cloud CIP.
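Stated as a decision rule, the distinction looks like this. This is only my own sketch of the logic, not proposed standards language.

```python
def classify_cloud_asset(monitors_or_controls_bes: bool,
                         sub_15_minute_bes_impact: bool) -> str:
    """My sketch of the Cloud BCS / BES SaaS distinction described above."""
    if not monitors_or_controls_bes:
        return "out of scope for Cloud CIP"
    if sub_15_minute_bes_impact:
        return "Cloud BES Cyber System"
    return "BES SaaS"

# The two relay-setting examples from the text:
print(classify_cloud_asset(True, True))    # changes the relay setting itself -> Cloud BES Cyber System
print(classify_cloud_asset(True, False))   # only suggests a setting to an engineer -> BES SaaS
```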

Why should we even be concerned about BES SaaS, since it doesn’t have a 15-minute impact? We’re concerned because if someone fed bad information into the SaaS, it might suggest that the engineer do something harmful to the BES, like needlessly opening a circuit breaker and de-energizing a line. In other words, the BES SaaS systems themselves do not require special protection, but the information they receive does need to be protected.

Of course, the information in this case is BCSI – BES Cyber System Information. In fact, I believe that the current requirements that apply to BCSI, namely CIP-004-7 R6, CIP-011-3 R1, and CIP-011-3 R2, should continue in effect when the cloud CIP requirements are implemented. This is because the current BCSI requirements, which came into effect at the beginning of 2024, were developed specifically to make storage and use of BCSI in the cloud possible. Most (if not all) BCSI used in the cloud is utilized by SaaS.

A good example of BES SaaS is a cloud-based historian. Usually, historians are not considered BES Cyber Assets. This is because they are not usually used for real-time monitoring or control of the BES, but rather for after-the-fact review of processes that took place in a plant or other industrial environment.

What other asset types are likely to be in scope for the cloud CIP requirements? Besides BCS, there are two other current CIP asset types, Electronic Access Control or Monitoring Systems (EACMS) and Physical Access Control Systems (PACS). Like high and medium impact BCS, EACMS and PACS are not currently usable in the cloud. This is because the CSP would have to furnish their NERC entity customer with device-level compliance evidence for many CIP requirements. That evidence would be very costly and time-consuming to provide.

However, it is safe to assume that the new and revised Cloud CIP requirements, when finally drafted, approved and implemented, will enable medium and high impact BES Cyber Systems to be implemented in the cloud without an undue compliance burden. Since the CIP requirements that apply to EACMS and PACS are a subset of those that apply to BCS, it is also safe to assume that implementation of EACMS and PACS will be enabled by the new and revised Cloud CIP requirements.

EACMS and PACS are built on the concepts of Electronic Security Perimeter (ESP) and Physical Security Perimeter (PSP) respectively, neither of which is applicable in the cloud. So even though both of these asset types may in the future be implemented in the cloud, it will always be for the purpose of controlling and monitoring on-premises ESPs and PSPs.

When the SDT gets to the point of considering requirements for controls applicable to cloud environments, they may identify systems that are required to monitor and implement those controls, such as Cloud Access Security Brokers (CASB). Thus, there are likely to be other asset types in Cloud CIP, besides the ones I have just described. 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And while you’re at it, please donate as well!


[i] An alternative way of stating this might be that any SaaS product that performs one of the BES Reliability Operating Services (BROS) is BES SaaS. While the BROS long ago ceased to have any direct compliance impact in CIP, they are still a very useful concept for deciding whether or not a Cyber Asset is a BES Cyber Asset. For a description of the BROS, see pages 17ff of this old version of CIP-002. For a long explanation of how the BROS can be used, see this post.

Tuesday, June 3, 2025

Saving CVE isn’t enough. We need a Global Vulnerability Database.

For reasons I don’t understand, two of my recent posts, along with one that’s a year old, have suddenly become huge favorites and are getting many more pageviews than normal. This has happened before with other posts; I’ve usually blamed AI training for that. However, even if that’s the case with these three posts, I’m beginning to think that somebody or something is trying to send me a message about what I should be focusing on in my blog posts. Here’s why I say that:

The first post is dated April 11, 2024 and is titled “It’s time to figure out this whole vulnerability database problem.” It starts by discussing the problems with the National Vulnerability Database that started February 12, 2024. As you probably know if you’ve been reading this blog for a while, on that date the NVD drastically slowed their performance of their most important job: creating “CPE names” to add to new CVE Records that have been downloaded from CVE.org (which is where a new CVE Record first appears after a CVE Numbering Authority – or CNA - creates it). CPE is a machine-readable software identifier. It is currently the only software identifier supported by the NVD and the CVE Program.

Briefly, the reason this created a problem is that CVE Records describe a vulnerability and provide a textual description of the software product in which the vulnerability was found (e.g., “Product ABC version 4.5”). While a textual description of the vulnerable product is adequate for many purposes, a CVE Record that contains only that text will not show up in an automated search based on the product’s CPE name (a name created from the product name, version number and vendor name included in the textual description). Of course, since there are now close to 300,000 CVEs, the only searches that are worth doing are automated ones – reading the text of 300,000 CVE Records isn’t anybody’s idea of fun.
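For illustration, here is roughly what turning that textual description into a CPE 2.3 name involves. The sketch skips the crucial step of checking the official CPE dictionary, so that everyone searching for the product uses exactly the same name; that is the enrichment work the NVD has fallen behind on.

```python
def cpe23_from_text(vendor: str, product: str, version: str) -> str:
    """Build a CPE 2.3 formatted string from the fields a CVE record's text
    usually provides. Real enrichment also checks the official CPE dictionary,
    which this sketch omits.
    """
    def norm(s: str) -> str:
        return s.strip().lower().replace(" ", "_")
    # "a" marks an application (as opposed to an operating system or hardware).
    return f"cpe:2.3:a:{norm(vendor)}:{norm(product)}:{norm(version)}:*:*:*:*:*:*:*"

print(cpe23_from_text("Example Corp", "Product ABC", "4.5"))
# -> cpe:2.3:a:example_corp:product_abc:4.5:*:*:*:*:*:*:*
```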

This is a big problem, since it means that a user searching for vulnerabilities in product ABC version 4.5 using the proper CPE name is unlikely to see any vulnerabilities that were identified since February 2024. Even worse, the user won’t be notified of this problem, meaning they are likely to interpret absence of evidence (no vulnerabilities identified when they search for a product and version that they use) as evidence of absence (i.e., evidence that the product is vulnerability-free, even though it might contain many vulnerabilities identified after February 2024 that don’t appear due to the lack of a CPE name). Of course, when I wrote the post, I didn’t know that a year later, the problem would be worse not better, or that the backlog of “unenriched” CVE Records would have grown to the point that it’s clear it will never be eliminated.

Given this background, I called for a group to start meeting under the auspices of the OWASP SBOM Forum to discuss alternative vulnerability database options. After all, the NVD isn’t the only vulnerability database in the world; there are other good options available, especially when it comes to learning about vulnerabilities identified in open source software. However, there is no complete replacement for the NVD, which covers open source and commercial software products, as well as intelligent devices. In fact, for commercial software and intelligent devices there is no replacement for the NVD at all. Even though there have always been a lot of problems with the NVD’s coverage of both of those product types, at least the NVD provided something. Now, the NVD can be trusted[i] only for historical data before 2024.

That group started meeting and continues to do so; we meet on Zoom on Tuesdays at 11AM Eastern Time. We have some great discussions and have identified actions that need to be taken (such as helping plan and execute the introduction of purl identifiers into the CVE ecosystem along with CPE), once funding becomes available for them. If you would like to join this group, let me know (we are having similar discussions, including discussions regarding the CVE situation, in the SBOM Forum meetings, which are on Fridays at 1PM ET. You are welcome to join those meetings as well).

At the end of the first post, I suggested that spending a lot of time and effort trying to fix the NVD’s problems was a fool’s errand; nothing that has occurred since then makes me want to change that judgment. Instead, I suggested there should be an internationally financed vulnerability database, perhaps under the sponsorship of IANA. IANA administers global IP address allocation and coordinates the DNS root; it is supported by governments and private organizations worldwide.

Moreover, I suggested that such a “database” would be (with some paraphrasing):

…a federation of existing vulnerability databases, linked by an intelligent “switching hub”. That hub would route user inquiries to the appropriate database or databases and return the results to the user. Using this approach would of course eliminate the substantial costs required to integrate multiple databases into one, as well as to maintain that structure going forward. It would also probably eliminate any need to “choose” between different vulnerability identifiers (e.g., CVE vs. OSV vs. GHSA, etc.) or different software identifiers (CPE vs. purl). All major identifiers could be included in searches.

The second post that is getting so many pageviews lately was created on April 15, the day MITRE’s letter to CVE.org Board of Directors members was revealed; that letter stated that their current contract to run the CVE Program would expire the next day, if not renewed. Fortunately, CISA did renew the contract the next day and the program – which meets with almost universal good will – will continue as is through next March. It is unlikely that CISA, even if it still exists in its current form (which is itself unlikely), will renew the contract in March.

Fortunately, members of the CVE.org Board have formed a new nonprofit organization called the CVE Foundation. This group has already received substantial funding commitments that will allow it to continue (and improve) the current programs of CVE.org if and when the contract is not renewed. Therefore, I believe we don’t need to worry about whether the CVE Program will continue to operate; it will continue, and it will improve on what was already considered (including by me) to be a good operation.

However, note that the NVD is a separate organization (part of NIST, which is part of the Department of Commerce) from CVE.org (which is operated by MITRE under contract with CISA, which is part of DHS). Thus, the fact that CVE is on good long-term footing doesn’t mean that the NVD’s problems have been solved or are even on the way to being solved. As you may have guessed, I think it’s time to cut the tether and let the NVD drift downstream while we replace it with something much better.

That “something” is described in the third post that is getting a lot of attention nowadays; I wrote it five days after the second post. I’ll let you read what I said, but I’ll summarize it by saying

1.      The Global Vulnerability Database doesn’t have to be directly tied to the CVE Program or the CVE Foundation, any more than the NVD is tied to the CVE Program now. In fact, since the CVE Foundation’s funders are probably not anticipating having to create a new vulnerability database along with extending the existing CVE Program, it shouldn’t be assumed they will automatically be on board with the idea of creating the GVD – which, of course, will require much more effort than extending and enhancing the CVE Program.

2.      I also don’t think the GVD should be tied to any existing vulnerability database, whether it’s the NVD, the new EUVD, or any other. The GVD should be a true federation of existing databases. It shouldn’t in itself be a new database, although there might be a need to develop one or more databases to address particular needs. For example, now that the NVD is crippled, I know of no vulnerability database that addresses intelligent devices. It would be a good idea to create such a database (perhaps based on the GTIN/GMN identifiers used today in global trade, as suggested at the end of the SBOM Forum’s 2024 white paper on software naming problems in the NVD). This database would then “join the federation” of databases included in the GVD.

3.      The GVD will not attempt to “harmonize” the identifiers for either software products or vulnerabilities. This is because different identifiers inherently identify different things and should not be expected to “map” to each other well. For example, purl identifies open source software packages; if an open source library like log4j includes multiple modules[ii], they might each have their own purl. However, CPE can only identify the library itself, not its individual modules. Therefore, there will be no CPE that is equivalent to a purl that identifies a module.

4.      As much as possible, queries to the GVD should not require the user to understand anything about the databases that contribute to it. The user should be able to enter a text string like “Adobe Acrobat Professional” and be shown all machine-readable software identifiers that contain that string in some fashion. These identifiers might be CPE names, but they might also be purl or other identifiers. Similarly, the search may reveal vulnerabilities identified with CVE numbers, OSV identifiers, or GitHub Security Advisories (GHSA).

5.      Moreover, those identifiers might be found in different vulnerability databases, meaning that a single search might be directed toward multiple databases. For example, a search for an open source product might be directed both to the NVD and the OSS Index open source software database. If both searches are successful, both results will be returned to the user.

6.      If the user receives two answers to one search, which one is “correct”? If they both return the same vulnerability identifier (e.g., CVE-2024-12345), the user would be well advised to apply the patch for it as soon as possible. If they return two identifiers (for example, a CVE number and a GHSA advisory) that might point to the same vulnerability, the user might just apply one of the two patches. And if they return what are clearly two different vulnerabilities, the user should probably apply both patches.

As you can see, using the Global Vulnerability Database will provide a much richer experience than using the NVD. In fact, it will be the equivalent of launching simultaneous queries for the same product in multiple major vulnerability databases worldwide. The query for each database will be tailored for that database, meaning it won’t include software or vulnerability identifiers that aren’t “native” to that database. And the response returned by each database will include everything that is normally included in such a response. For example, if the database usually returns a CVSS score for each vulnerability, that will continue for queries that come through the GVD.

To produce this blog, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome, but I consider anyone who donates $25 or more in one year to be a subscriber. Thanks!

Of course, I’m sure there are some users of the NVD – especially those that use it just for compliance purposes – that won’t be interested in receiving a much richer experience by using the GVD. If they have always learned about vulnerabilities by searching using a CPE identifier, they may not be interested in results obtained using a purl identifier. Such people will always be able to utilize the results they are used to and discard the results they aren’t used to.

However, other users will be pleased to be able to compare results obtained by submitting their original query to multiple vulnerability databases, each in the “language” spoken by that database. They will be even more pleased to receive the results produced by different databases, each one “enriched” as the database normally does.

Rereading the above two paragraphs may give you an idea of the powerful “front end” that will receive GVD queries and reword them as appropriate for the individual vulnerability databases that are part of the GVD federation, as well as standardize the format of the responses from different databases.[iii] This will be the key to making the GVD work as planned. While this will be as “intelligent” an application as possible, it will need to follow pre-determined, auditable rules.
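To give a flavor of what a standardized response might look like, here is a hypothetical sketch; every field name is my own invention, not a GVD specification, and the example values are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class GvdFinding:
    """Hypothetical shape of a normalized GVD result, whichever federated
    database produced it."""
    source_db: str                 # e.g. "nvd", "github", "oss_index"
    vuln_ids: list[str]            # CVE, GHSA, OSV... whatever the source uses
    product_ids: list[str]         # CPE names, purls, or plain text
    match_quality: str             # "exact" or "similar"
    extras: dict = field(default_factory=dict)   # CVSS score, references, patch URLs, etc.

# Illustrative example only.
finding = GvdFinding(
    source_db="nvd",
    vuln_ids=["CVE-2021-45046"],
    product_ids=["cpe:2.3:a:apache:log4j:2.15.0:*:*:*:*:*:*:*"],
    match_quality="exact",
)
print(finding)
```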

If you are interested in being part of the planning process for the GVD, please let me know. I am willing to start a separate workgroup for this; I’ll set the meeting time (probably bi-weekly) to fit the schedules of the interested parties. Also, if your organization would be able to donate $1,000 or more to this effort, you can make a donation to the OWASP Foundation, which will be “restricted” to the SBOM Forum’s GVD workgroup (OWASP is a 501(c)(3) nonprofit organization).  

I’m looking forward to starting this effort! To be sure, it will be a long one – but when we’re finished, it will be something we will all be proud to have contributed to. 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] There have always been problems with the CPE identifier, especially when it comes to open source packages and intelligent devices. For a discussion of the problems with product identification in the NVD as well as suggestions for addressing those problems, see the OWASP SBOM Forum’s 2022 white paper.

[ii] Such as the log4j-core module, in which the log4shell vulnerability was found.

[iii] For a vulnerability database to be part of the GVD, it will not be required to make substantial changes that it otherwise would not have made. Each database will maintain its autonomy; if there is a disagreement between the database and the GVD that cannot be resolved by negotiation, the database will always be free to leave the GVD.

Monday, May 26, 2025

Who will audit the CSPs?

One question that the current NERC “Risk Management for Third-Party Cloud Services” drafting team is discussing a lot is: What actions need to be taken to verify the security posture of the CSPs? By CSP I mean the two or three major “platform” cloud service providers, not other providers such as SaaS and security-service vendors, whose offerings are usually deployed on one of the major CSP platforms.

This question came up before, when CIP-013, the NERC supply chain cybersecurity standard, was developed. In that case, the answer to the question was quite clear: Like all NERC Reliability Standards, the CIP standards only apply to entities that own and/or operate facilities that are part of the Bulk Electric System (BES). Third parties that supply hardware, software and services that go into medium and high impact BES Cyber Systems do not fall under that designation. They do not have to comply with any of the CIP standards directly, but they do have to cooperate with their NERC entity customers, in some cases by providing them with evidence they need to present at audit.

Will the same consideration apply in the case of platform CSPs once the “Cloud CIP” standard(s) are in place (even though those standards are still years from being enforced)? After all, the platform CSPs will furnish not only the hardware and software on which BES Cyber Systems (and other systems like EACMS and PACS) operate, but they will also furnish the entire environment, including full IT staff, in which the hardware and software operate. Will that consideration “push them over the line” into being regulated as NERC entities?

No, it won’t. It’s safe to say that the CSPs will never be regulated as NERC entities, even if a significant amount of generation capacity (e.g., DERs) is deployed on their platforms. Platform CSPs also host systems that control pharmaceutical production. Does that mean they should be regulated by the FDA? Or, since farmers’ tractors now depend on cloud data to determine exactly where to drop seeds, should the CSPs be regulated by the Department of Agriculture?

On the other hand, there’s no question that the electric power industry needs to conduct some oversight of the CSPs. How should this be done, if they aren’t NERC entities? The answer in CIP-013 was for the NERC entities themselves to be responsible for vetting the suppliers. The easiest (and by far the most widely used) method for a customer organization to assess a CSP’s security is simply to ask the CSP about their certifications. If the customer likes the names they hear – FedRAMP, ISO 27001, etc. – those will often be all they want to hear.

Of course, the problem with this method is that it hides the details of the certification audit. The audit may have uncovered an issue with, for example, the CSP’s employee remote access procedures, but their customer won’t learn about this unless they ask to see the audit report. A better assessment method is to ask the CSP for their certification audit report and query them about any issues noted in the report.

However, there’s a problem with this second method as well: I’ve heard employees of two huge platform CSPs that are widely used by the electric power industry state unequivocally that although their employers will be pleased to provide access to FedRAMP, SOC 2 Type 2 or ISO 27001 compliance assessment materials, they can’t provide individualized responses to security queries by NERC entities. In other words, the NERC entity will be able to learn about issues that came up during the compliance audit, but they won’t be able to ask about them.

If certification audits always asked every question that’s relevant to a CSP’s level of security, that would be fine, since the audit report would provide a complete picture of the CSP’s security posture. But is this really the case? In short, the answer is no. Consider the following Tale of Two CSPs:

CSP 1

In 2019, one of Platform CSP 1’s customers suffered a devastating breach in which over 100 million customer records were accessed. The attacker was a CSP 1 employee who had been laid off and was out to get revenge. The attacker took advantage of a misconfigured web application firewall and later bragged online that lots of CSP 1’s customers had the same misconfiguration. In fact, the attacker asserted they had been able to penetrate the cloud environments of over thirty CSP 1 customers by exploiting that misconfiguration.

Of course, the platform CSP can’t be blamed for every customer misconfiguration. However, the fact that at least thirty customers of that one CSP all had the same misconfiguration points to a clear problem on the CSP’s part: They need to offer free training to their customers about avoiding this misconfiguration, as well as any other common misconfigurations (in fact, I ended up meeting with CSP 1 later, after I had made this suggestion in one of my posts. They were receptive to what I said, and I imagine they followed up as well, if they hadn’t done so already).

CSP 2

CSP 2’s problem was that they utilized third-party access brokers to sell access to a popular hosted service on their platform, but they hadn’t adequately vetted the brokers’ security. At least one of these access brokers was compromised, leading to the compromise of about 40 of the broker’s customers; all the compromises were on CSP 2’s platform.

But there’s more: This may have been a multi-level supply chain attack. Not only did the compromise of the access broker lead to the successful attacks on 40 customers of the access broker, but at least one of those 40 customers later was the victim of a hugely consequential attack. Have you heard of SolarWinds? I thought so. They were one of the access broker’s customers that was compromised, although I don’t think it’s been proven that the attackers got into SolarWinds through the broker.

There are three important lessons to be learned from these two attacks:

1.      Both CSPs were up to date on their security certifications.

2.      Because the failings of both CSPs would have been discovered in their compliance audits if the right questions had been asked, it’s a safe bet that none of the major security certifications for CSPs addresses either of these failings. Of course, there have been a number of other cloud breaches in recent years whose root causes are also probably not addressed by the major certifications.

3.      Therefore, it would be a mistake to rely on one or more certification audit reports to provide a good picture of the state of security of a platform CSP.

This leaves us with a problem: Since a certification audit report won’t give us a complete picture of a platform CSP’s security posture, something more should be required by the new “Cloud CIP” standard(s). That is, there will need to be a custom assessment of the CSP that will go beyond the set of questions that the certifications ask. However, the NERC entities can’t be required to ask these questions, since I’ve already pointed out that the platform CSPs are not going to answer questions from individual NERC entities. How do we square this circle?

The only viable path out of this problem that I can see is for NERC itself, or a third party engaged by NERC, to conduct the assessment. Specifically:

a.      A NERC committee (perhaps the current “Cloud SDT”, although it might be a separately constituted committee) will review records of cloud breaches (like the two above), as well as other vulnerabilities and attacks that are only likely to affect the cloud (and thus are not likely to be included in audits based on standards like ISO 27001). Based on this review, they will compile a list of perhaps 10-15 questions to pose to a platform CSP. For example, “How do you vet the security of cloud access brokers that utilize your platform?”

b.      Every year, the committee will conduct a new review and update the list of questions.

c.      Every year, NERC or a designated third party will get oral and/or written answers from the platform CSPs to the questions on the current list.

d.      NERC will summarize the responses received from each CSP and share the summaries with NERC entities that utilize the CSP for OT applications.

e.      NERC will not draw from the questionnaire responses any conclusion regarding the suitability of the CSP for use by NERC entities. Each entity will need to decide for itself whether to continue using the services of the platform CSP, based on their questionnaire responses, certifications, news articles, or any other evidence they wish to consider.

There’s only one fly in the above ointment, although it’s a big one: The current NERC Rules of Procedure would never allow what I’ve just described to happen. I would hope (but I can’t be sure) that there will be broad support for making this change to the RoP, but that change will probably need to be made through a separate process; that will take time (probably at least a year). This is why I still doubt that the new Cloud CIP standards will be approved and in effect before 2031 or 2032 – unless some action is taken to accelerate this whole process.[i]

To produce this blog, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome, but I would greatly appreciate it if regular readers can donate $20 per year – consider that my “subscription fee”. Thanks!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And while you’re at it, please donate as well!


[i] It isn’t out of the question that the standards development process could be accelerated, or more correctly, bypassed. There’s a section of the NERC Rules of Procedure that provides for this, but it requires a special vote of the Board of Trustees. However, this is no longer out of the question, since the urgency of making the cloud fully “legal” is growing all the time.

Saturday, May 24, 2025

Can you accept risk in NERC CIP?

I haven’t been able to attend many of the meetings of the Risk Management for Third-Party Cloud Services Standards Drafting Team (SDT), which started meeting last fall. However, I was able to attend last Thursday’s meeting. I was quite pleased to see that the team is now starting to discuss some of the weighty issues that the group will have to address as they draft the new and revised standards that will be required to make full use of the cloud “legal” for NERC entities with high or medium impact NERC CIP environments. The weightiest of these issues are those that have to do with risk.

In many ways, this SDT resembles the CSO706 (“Cyber Security Order 706”) SDT, which in 2011 started drafting the CIP version 5 standards. CIP v5 completely revised the framework that was in place for CIP versions 1-3. Those three versions were based on the concepts of Critical Assets, Critical Cyber Assets, and the Risk Based Assessment Methodology (which was the opposite of risk-based). In CIP v5, they were replaced with new concepts like Cyber Asset, BES Cyber Asset, BES Cyber System, the Impact Rating Criteria, Electronic Security Perimeter, Interactive Remote Access, Intermediate System, External Routable Connectivity, Electronic Access Control or Monitoring System, Physical Access Control System, and more.

Of course, when they started to work on CIP v5 in early 2011, the CSO706 SDT didn’t set out to discuss all of these concepts; instead, the need for the concepts arose from the team’s discussions of the problems with the then-current CIP v3 framework. In fact, the more controversial topics, like External Routable Connectivity and the meaning of the term “programmable” in the Cyber Asset definition, continued to be discussed - sometimes heatedly - in the 2 ½ years between FERC’s approval of CIP v5 in November 2013 and when the v5 standards came into effect (simultaneously with the v6 standards) on July 1, 2016.

So, I was pleased to find out yesterday that the “Cloud” SDT’s discussions have already ventured into the conceptual realm; the changes required to bring the NERC CIP standards into the cloud era (which almost every other cybersecurity framework entered at least a decade ago) will beyond a doubt require both new concepts and fresh thinking about old concepts. Any attempt to just start writing the new (or revised) requirements without first agreeing on the concepts that underlie them is guaranteed to be unsuccessful.

One of the many concepts that came up in Thursday’s meeting was acceptance of risk. Of course, in most areas of cybersecurity, it is almost a dogma that successful practice of cybersecurity requires being able to accept some risks. After all, there will always be some cyber risks that, for one reason or another, an organization can do nothing about. As long as the risk is accepted at the appropriate level of the organization, the organization has done all it can.

During the meeting, someone pointed out the need for NERC entities to be able to accept risk in cloud environments. I pointed out that the requirements in CIP version 1 almost all explicitly allowed for “acceptance of risk”, if the NERC entity decided that was a better course of action than complying with the requirement. However, FERC’s Order 706 of January 2008 approved CIP v1 but at the same time ordered multiple changes; these were at least partially incorporated into CIP versions 2, 3, 4, and especially 5.

One of the changes FERC mandated in Order 706 was for all references to acceptance of risk to be removed. FERC’s reasoning was that the purpose of CIP is to mitigate risks to the Bulk Electric System (BES). Each NERC entity that is subject to CIP compliance is essentially acting as a guardian[i] of the BES, or at least of whatever portion of the BES is owned and/or operated by the entity. Therefore, the risks addressed by the CIP requirements aren’t risks to the NERC entity. This means the entity doesn’t have the choice of whether or not to accept them – their role in the BES requires they not accept them.

However, not accepting a risk isn’t the same as not doing anything about it. Since no organization has an unlimited budget for risk mitigation, any organization needs to decide which risks it can afford to mitigate and which ones it can’t afford to mitigate. Often, those are risks with high impact but very low probability – e.g., the risk that a meteorite will crash through the roof of a building.

If there were a significant probability that this could happen, it would be worthwhile to strengthen roofs enough to deflect meteorites, at least up to a certain size. However, because the likelihood of this event occurring is tiny, most organizations have at least implicitly decided they will be much better off if they allocate their limited mitigation resources toward risks with a higher likelihood – hurricanes, tornadoes, hailstorms, etc.

Until CIP version 5 appeared, there were no CIP requirements that permitted consideration of risk: You had to do X or else face a fine. We now call these prescriptive requirements. With prescriptive requirements, it doesn’t matter if the requirement mandates actions that mitigate only a tiny amount of risk, while neglecting actions that might mitigate much greater risks but aren’t required; the NERC entity needs to perform the required actions first and leave the other actions for later, if time permits.  

One example of a prescriptive requirement today is CIP-007 R2 Patch Management. This requires the entity to apply every security patch released by the patch source for a particular Cyber Asset in scope for the requirement. It doesn’t matter if the software vulnerability addressed by the patch has already been mitigated in that Cyber Asset, for example by a firewall rule; the patch still needs to be applied.[ii]

Fortunately, the NERC CIP community seems to have learned its lesson regarding prescriptive requirements. CIP version 5 introduced the first of what I call risk-based CIP requirements, including CIP-007-5 R3 Malware Protection and CIP-011 R1 Information Protection[iii]. Since then, almost every new CIP standard or requirement has been risk-based.

Given that prescriptive requirements are usually tied to individual hardware assets and it is virtually impossible to pinpoint on which cloud asset a process is running at any one time, it is inevitable that cloud-based CIP requirements will be risk-based. In fact, as I mentioned at the end of this post, it seems to me that implementing risk-based requirements for the cloud may well require changes to the NERC Rules of Procedure. Stay tuned.

To produce this blog, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome, but I would like all regular readers to donate $20 per year – consider that my “subscription fee”. Thanks!


If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And while you’re at it, please donate as well!


[i] The idea of guardianship is my invention, not something FERC stated in Order 706.

[ii] I believe that most CIP auditors will not issue a notice of potential violation in a case where a NERC entity hasn’t applied a patch that clearly isn’t needed. However, the strict wording of CIP-007 R2 doesn’t allow for such exceptions.

[iii] NERC calls requirements like this “objectives-based requirements”. I think of the two terms as being virtually synonymous, since it’s hard to achieve any objective without considering risk, and risk usually arises along with attempting to achieve an objective.