Thursday, May 15, 2025

What I mean by a federated vulnerability database


I’ve been writing about a Global Vulnerability Database for at least a year; my most recent post on that topic is this one. What most people probably think when they hear that term is that I’m proposing one big database that will somehow combine all or most of the existing vulnerability databases. Since a vulnerability database requires machine-readable identifiers for software (e.g., purl and CPE) and vulnerabilities (CVE, OSV, GHSA, etc.), and since different vulnerability databases often use different identifiers, combining these databases into one usually means “harmonization” of the identifiers – i.e., “mapping” multiple identifiers into one.

For example, harmonization of software identifiers might mean mapping CPE names to “equivalent” purl identifiers, or vice versa. Or maybe both purl and CPE names will be mapped to a single third identifier to be named later. But here’s the thing about identifiers, whether we’re talking about identifiers for vulnerabilities, identifiers for software products, or both: They can almost never be cleanly mapped to each other. If they could, why would there be multiple identifiers in the first place?

Here's an example of what I mean: I’ve written a lot about purl and CPE. In this post, I described how a purl usually identifies a particular software package which is made available in a package manager. Since sometimes the same (or closely similar) software is made available in multiple package managers, a purl includes the name of the package manager. This ensures that purls will be unique, since the operator of a package manager makes sure there are no duplicate names; this is called a controlled namespace.
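To make this concrete, here’s a minimal sketch using the packageurl-python library (the package name “example-lib” is a placeholder, not a real package):

```python
# pip install packageurl-python
from packageurl import PackageURL

# The "same" library distributed through two package managers gets two
# distinct purls, because the type field names the package manager and
# each manager's namespace is controlled (no duplicate names allowed).
npm_purl = PackageURL(type="npm", name="example-lib", version="3.1.8")
pypi_purl = PackageURL(type="pypi", name="example-lib", version="3.1.8")
print(npm_purl.to_string())   # pkg:npm/example-lib@3.1.8
print(pypi_purl.to_string())  # pkg:pypi/example-lib@3.1.8

# Parsing works in the other direction; every field is machine-readable.
parsed = PackageURL.from_string("pkg:pypi/django@1.11.1")
print(parsed.type, parsed.name, parsed.version)  # pypi django 1.11.1
```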

This also ensures that, if a “single” package is distributed through multiple package managers (as sometimes happens), there will be no confusion about which package manager we’re talking about. Since the purl includes the “type” that corresponds to the package manager, the purl always tells us which package manager is being referred to.

This is especially important, because usually there will be slight variations in the product between package managers, even if they’re in theory the “same” package – e.g., OpenSSL version 3.1.8. Since the purl differs between the package managers, and since a vulnerability might be present in the same product in one package manager but not another, it’s important to know which package manager is the source of the codebase your organization uses.

However, there usually will be confusion with CPE, since CPE doesn’t have a field for “package manager”. Sometimes the person who creates the CPE builds the package manager name into the product name, but more often there is no way to tie the CPE name to a particular package manager. This means there is no way to directly map a purl for an open source product distributed in a package manager to a particular CPE. There are many other examples in which a software product identified with CPE or purl can never be cleanly mapped to the other identifier.
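Here’s a sketch of what that mapping problem looks like in practice. The vendor and product names are invented, and the list of ecosystems is only illustrative; the point is that nothing in the CPE tells you which candidate is right:

```python
# A CPE 2.3 name has eleven fixed fields (part, vendor, product, version,
# etc.), and none of them is "package manager".
CPE = "cpe:2.3:a:example_vendor:example-lib:3.1.8:*:*:*:*:*:*:*"

def cpe_to_purl_candidates(product: str, version: str) -> list[str]:
    """Return every purl that COULD correspond to this CPE. Without a
    package-manager field in the CPE, none can be ruled out."""
    ecosystems = ["npm", "pypi", "gem", "cargo"]  # illustrative subset
    return [f"pkg:{eco}/{product}@{version}" for eco in ecosystems]

print(cpe_to_purl_candidates("example-lib", "3.1.8"))
# ['pkg:npm/example-lib@3.1.8', 'pkg:pypi/example-lib@3.1.8', ...]
# Each candidate names a potentially different codebase, which may have
# different vulnerabilities from the others.
```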

The same holds true for vulnerability identifiers. CVE is by far the most widely used, but there are others, such as GHSA (GitHub Security Advisory), Snyk ID and OSV. There’s no way to say up front that CVE XYZ maps directly to GHSA ABC. However, the organization that identifies a vulnerability will often report it as a new CVE Record and at the same time as, for example, an ICSA (one of CISA’s ICS security advisories). If the same organization made both reports (especially if that organization is also the supplier of the affected product), there shouldn’t be any objection to the fact that the two identifiers can’t be directly mapped to each other; they’re “mapped” only in the sense that they came from the same organization.

This is all a long way of saying that there’s no such thing as “harmonization” of either software or vulnerability identifiers. And if there’s no harmonization, this means the Global Vulnerability Database (GVD) can’t be a single database.

That’s why I call the GVD a “federated” database. Offhand, that term – federated database – might seem like an oxymoron. A database usually gives a single answer, but a federated database must inherently give multiple answers. However, when I use that term, I mean there are multiple databases, but they (almost) speak with one voice. There needs to be an “intelligent front end” that takes all the queries, routes them to the relevant individual database(s), and routes the answers back to the user.

What the federated database doesn’t do is somehow combine the answers from the different databases into a single “harmonized” answer. When there are different identifiers involved, there can’t be a harmonized answer. But that doesn’t mean it’s not worthwhile to receive multiple answers. 

For example, suppose a GVD user entered a purl for an open source product and requested all vulnerabilities – of all types – that affect that purl. They might get four different responses (a sketch of such a front end follows the list):

1.      The front end could query OSS Index, a free database of open source vulnerabilities that supports purl and identifies vulnerabilities using CVE. That query would return any CVEs that affect the product designated by the purl.

2.      The front end could query GHSA, which also supports purl. GHSA might return a CVE Record, an OSV advisory, a GHSA advisory, or even two or three of those.

3.      The front end could query OSV, which also supports purl. OSV will usually return an OSV advisory, but it could also return a CVE.

4.      Since the front end is intelligent, it might query the National Vulnerability Database (NVD) and notice that there’s a CPE identifier that probably corresponds closely to the purl in the original query. Therefore, it would conduct a query using that CPE, and return one or more CVE Records that reference that CPE. 
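Here is a minimal sketch of what that front end might look like. The OSV call uses OSV’s publicly documented query API (api.osv.dev); the other back ends are left as commented placeholders, since each has its own client protocol:

```python
import json
import urllib.request

def query_osv(purl: str, version: str) -> list[dict]:
    """Ask OSV for advisories affecting one version of a package
    identified by a (version-less) purl."""
    body = json.dumps({"version": version, "package": {"purl": purl}}).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

def federated_query(purl: str, version: str) -> dict[str, list]:
    """Fan one query out to every relevant back end and return ALL the
    answers, keyed by source -- deliberately without merging them."""
    responses = {"osv.dev": query_osv(purl, version)}
    # Placeholders for the other back ends named above. Each needs its own
    # client (OSS Index has a REST API, GHSA a GraphQL API), and an NVD
    # query would first require inferring a candidate CPE from the purl.
    # responses["oss index"] = query_oss_index(purl, version)
    # responses["ghsa"] = query_ghsa(purl, version)
    # responses["nvd"] = query_nvd(infer_cpe(purl), version)
    return responses

print(federated_query("pkg:pypi/django", "1.11.1"))
```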

In other words, my Global Vulnerability Database won’t even attempt to deliver harmonized responses. Instead, it will provide you with every response it receives from any of the federated databases. If you’re the sort of person who wants just one answer, you might not appreciate this arrangement. But if you understand that vulnerability management is an inexact science – in fact, it isn’t a science at all – you might appreciate having a diversity of information sources to compare. 

Someday it may really be possible to harmonize the responses from the GVD, so that people who want a single answer and people who value diverse sources can both be satisfied. But we’re not there yet.

To produce this blog, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome. Thanks!


If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And while you’re at it, please donate as well!

 

Wednesday, May 7, 2025

The importance of being negative

The future of the CVE Program has been discussed a lot lately due to funding problems; I have contributed to those discussions. However, most of my posts that have mentioned the program in the last six months or so have been about software identification – that is, the importance of being able to accurately identify, using a machine-readable identifier, the software product or products that are affected by a vulnerability described in a CVE Record.

Before February 2024, there wasn’t a lot of discussion about the merits of different software identifiers, since the two most widely used identifiers – CPE and purl – each had their own well-understood place in the cybersecurity world. CPE reigned as king of the NVD and the other databases that are built on the NVD; on the other hand, purl was the king (queen?) of the open source world. While the NVD – which is firmly in the CPE camp – tracks vulnerabilities in both open source and commercial software, it isn’t the preferred source for the former, while it’s almost the only source for the latter.

However, in February 2024 the king stumbled. NVD staff (really contractors) drastically reduced, and at times completely stopped, their performance of their most important task: adding CPE names to new CVE Records. The records always include a textual description of the affected products, but there’s no good way to search these. But if the record for CVE-2025-12345 includes a machine-readable CPE software identifier (the only identifier currently used in CVE Records), and an NVD user searches for vulnerabilities using the identical CPE name, CVE-2025-12345 should always show up in the results.

What happens if the CPE name that’s searched for is just a little bit different from the CPE name that’s included in the CVE Record? In that case, it’s likely that no vulnerabilities will be shown to the user. What will be shown? Every NVD user’s favorite message: “There are 0 matching records.”

Will the user be crestfallen when they see this message? Not necessarily. In fact, they might be pleased, since that is the same message they will receive if there are no vulnerabilities listed at all for the product they’re searching for. In other words, the same message might mean both “This product has lots of vulnerabilities that affect it, but you need to keep guessing the CPE name in order to learn about them” and “The product you’re searching for has no reported vulnerabilities.”

The main problem with CPE is that there are many ways the CPE name a user searches for can differ slightly from the CPE name included in the CVE Record, because there is no way to know exactly how the NVD contractor who created the name filled in its fields. Here are a few examples (the sketch after this list shows the failure mode in miniature):

·        The vendor name in the CVE Record is “Microsoft Inc”, but the user searches for “Microsoft Inc.” (i.e., with a period) and finds nothing.

·        The product name in the CVE Record includes the text “mail integration”. The user searches for “mailintegration” and finds nothing.

·        The vendor name is “apache foundation”. The user searches for “apache_foundation” and finds nothing.
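Here is a toy illustration of the failure mode. This is not the NVD’s actual search code, and the CPE names are invented, but the exact-match behavior is the heart of the problem:

```python
nvd_index = {
    "cpe:2.3:a:microsoft_inc:mail_integration:1.0:*:*:*:*:*:*:*": [
        "CVE-2025-11111",
        "CVE-2025-22222",
    ],
}

def search(cpe_name: str) -> list[str]:
    # An exact dictionary lookup: any variation in punctuation or
    # spacing silently returns nothing.
    return nvd_index.get(cpe_name, [])

print(search("cpe:2.3:a:microsoft_inc:mail_integration:1.0:*:*:*:*:*:*:*"))
# -> ['CVE-2025-11111', 'CVE-2025-22222']
print(search("cpe:2.3:a:microsoft_inc.:mail_integration:1.0:*:*:*:*:*:*:*"))
# -> []  ("There are 0 matching records" -- indistinguishable from a
#         product that has no reported vulnerabilities at all)
```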

The big problem with these near misses isn’t just that the user won’t learn about a vulnerability that might apply to the product they’re searching for. More importantly, they can’t tell whether the search returned nothing because there are in fact no applicable vulnerabilities, or because they searched on the wrong character string. Humans being inherently optimistic, people are much more likely to assume the former.

To produce this blog, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome. Thanks!

To use the first example, there are two CPE names in the NVD for which the vendor field is “Microsoft Inc”. Suppose that each of these CPEs appears in three CVE Records. A user who searches for “Microsoft Inc” will learn about those six CVEs. However, if a user enters “Microsoft Inc.”, they will see the message, “There are 0 matching records.” Rather than trying “Microsoft Inc” as well, the user may assume there are no vulnerabilities that apply to products sold by an entity with “Microsoft” and “Inc” in its name, no matter what other punctuation the CPE name might contain.

It’s annoying that the NVD makes it so easy for a user to be misled about whether a product is vulnerable. However, it’s much more serious that CPE’s quirks prevent a user from ever being able to make a clear statement that there are no vulnerabilities that apply to a particular product. This is because the user can never be sure whether the message “There are 0 matching records” means they guessed the wrong CPE name or whether it means the product truly has no reported vulnerabilities.

The purl identifier eliminates this ambiguity. Today, purl is mostly used to identify open source software packages distributed through package managers like Maven Central and npm. For example, the purl “pkg:pypi/django@1.11.1” refers to the package django version 1.11.1, which is found in the PyPI package manager.[i]

If someone wishes to verify that this is the correct purl for that package, they can always do so using a simple search in pypi.org. They should never need to guess a purl name, nor to look it up in a central database (like the CPE “Dictionary” found on the NVD website).[ii]
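For example, here is a minimal sketch of that verification using PyPI’s public JSON API. The registry either confirms the name/version pair or it doesn’t; there is nothing to guess:

```python
import urllib.error
import urllib.request

def purl_exists_on_pypi(name: str, version: str) -> bool:
    """Ask the registry itself whether this name/version pair exists."""
    url = f"https://pypi.org/pypi/{name}/{version}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # a 404 means PyPI has no such package or version

# Verifying pkg:pypi/django@1.11.1 -- no guessing, no central dictionary:
print(purl_exists_on_pypi("django", "1.11.1"))  # True
```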

This means that, if a user searches a vulnerability database like OSS Index using a verified purl and finds no vulnerabilities, they can be reasonably[iii] sure there have been no vulnerabilities reported to CVE.org for the package in question. In vulnerability management, the danger posed by false negative findings is much greater than that posed by false positives.

If you receive a false positive finding from a vulnerability database, the biggest problem is that you’re likely to perform work (patching) that was unnecessary. However, if you receive a false negative finding, you won’t learn about vulnerabilities that might come to bite you. Even worse, you won’t usually know about this problem.

If CPE is the only identifier available to CVE Numbering Authorities (CNAs) when they create new CVE records (as is the case today), a user is much more likely to receive a false negative finding from the NVD – and less likely to know they’re receiving one – than if the CNA could alternatively utilize purls in CVE records.

Fortunately, CVE.org is now moving toward adding purl as an alternative software identifier in CVE records, although other things need to be in place before purl can be an “equal partner” to CPE. For one thing, there need to be vulnerability databases that can properly ingest CVE records that include a purl. Fortunately, I’m sure there are at least one or two databases that should be able to do that soon after the new records become available.

This points to the need for an end-to-end proof of concept for purl in what I call the “CVE ecosystem”. The PoC would start with CNAs including purls in CVE records for open source software products and end with users searching for – and hopefully finding – those CVEs in vulnerability databases. If you would like to participate in or support an OWASP project to do this, please email me.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And while you’re at it, please donate as well!


[i] Technically, PyPI is a package registry, not a package manager. However, its function in purl would be the same if it were a package manager.

[ii] That being said, a central database of purls is also being developed. This is to make it easier for people who aren’t very familiar with the purl syntax, especially when more than a few fields are required. There are also free services that will create a purl based on the user’s inputs.

[iii] I say “reasonably”, since it’s possible that the “same” vulnerability has been reported to CVE.org, yet the CPE name created for it didn’t contain enough information to derive a purl. Unfortunately, this is a common occurrence, since CPE has no field for a package manager name (sometimes the person who creates the CPE name includes the package manager in the product name, but that is not part of the CPE specification). In a case like this, OSS Index (and perhaps other open source vulnerability databases) would probably search for the package in various package managers, but success would not be certain.

Of course, this is just one example of why it is important to include purl as an optional software identifier in the CVE Program.

Sunday, May 4, 2025

“No funding issue” and other fairy stories

On April 23, Jen Easterly, former Director of CISA, put up a post on LinkedIn about the bizarre episode of April 15 and 16. On April 15, a leaked letter to CVE Board members from Yousry Barsoum, VP and Director of MITRE, revealed that the next day “the current contracting pathway for MITRE to develop, operate and modernize CVE and several other related programs…will expire.”

This led to a virtual firestorm in the software security community, since there is currently no replacement for the CVE Program; shutting it down abruptly would inevitably have disrupted software security efforts worldwide. However, the next day CISA announced, “Last night, CISA executed the option period on the contract to ensure there will be no lapse in critical CVE services.” Thus, it seems the cavalry arrived in time to save the fort.

Ms. Easterly wrote a week later:

Today, CISA's Deputy Executive Assistant Director for Cybersecurity's Matt Hartman released a statement committing to the sustainment and evolution of the CVE program, including "to fostering (sic) inclusivity, active participation, and meaningful collaboration between the private sector and international governments to deliver the requisite stability and innovation to the CVE Program." The statement also clarified that there was no actual funding issue but rather an "administrative issue" that was resolved prior to a contract lapse.

In stating that “there was no actual funding issue”, she obviously intended to give comfort to her readers. After all, “administrative issues” happen all the time and don’t kill whole programs, while funding issues do kill programs. Therefore, she’s saying the worldwide alarm caused by Mr. Barsoum’s letter was misplaced. Move along…nothing to see here.

Unfortunately, this raises the question of why Mr. Barsoum put out his letter and sent it to all 20+ members of the CVE.org board, if the issue was so trivial. Why didn’t he just pick up the phone, find out what the “administrative issue” was, and get it fixed? It also ignores the facts that a) many federal programs have recently been cancelled literally overnight, with no advance warning at all; and b) CISA is known to be in the process of letting a number of employees go, which almost always means some programs will need to be sacrificed as well.

In other words, Mr. Hartman’s assertion, and Ms. Easterly’s repetition of it, missed the main lesson of this whole sorry affair. To provide some background, the CVE Program was started from nothing in 1999. From the beginning, it was run by MITRE (in fact the idea for CVE first appeared in a paper written by two MITRE staff members that year), although it wasn’t called the “CVE Program” then. Since MITRE was already a US government contractor, it made sense for the government to engage MITRE to run the program. It can truly be said that the CVE Program might not exist at all today, were it not funded by the US government.

However, things change. Today, both governments and private industry worldwide are concerned about software vulnerabilities and rely on the CVE Program to faithfully identify and catalog those vulnerabilities. Given the worldwide use of CVE data, there is no reason why the US government should remain the sole funder of the program.

Yet that is exactly what Ms. Easterly advocates in the remainder of her post. She says, “Some parts of cybersecurity can and should be commercialized. Some should be supported by nonprofits. But vulnerability enumeration, the foundation of shared situational awareness, should be treated as a public good. This effort should be funded by the government and governed by independent stakeholders who are a balanced representation of the ecosystem, with government and industry members. CISA leading this effort as a public-private partnership assures the program is operated in service of the public interest.”

In other words, she thinks the private sector shouldn’t be funding the CVE Program, since it’s a public good that should only be funded by the public – i.e., the government (and CISA in particular). That would be wonderful if we lived in a world where the government was always willing to fund cybersecurity initiatives and always stood behind its commitments. However, the CVE Program was almost shut down because – and I’m not going too far out on a limb in saying this – somebody who has no idea what the program is decided it was a good candidate for defunding. To my mind, that is prima facie evidence that its entire funding should not come from the US or any other government.

To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome, but I will treat any donation of $25 or more as an annual subscription fee. Thanks!

But let’s suppose Mr. Hartman was correct in asserting there was no “funding issue”. In my (reasoned) opinion, that makes the case against exclusive government funding even stronger. Mr. Barsoum was clearly concerned that the CVE Program would be shut down, which strongly implies he knew the reason that might happen. If the reason was simply an administrative error – e.g., somebody forgot to check a box on some form – this means we’ll need to start worrying not only about funding cutoffs to the program, but about any administrative error that anybody at CISA, DHS, etc. might make. Does that give you a warm and fuzzy feeling?

I’m sorry, Ms. Easterly, but the CVE Program needs to be moved away from the federal government, although I hope the feds will still provide some of its funding. This doesn’t have to happen tomorrow, but it should at least be done when the contract has to be renewed next March; this is especially important since it’s quite likely the contract won’t be renewed then, anyway. If the software security community gets caught flat-footed again next year, we will have nobody to blame but ourselves. Tragedy repeated is farce.

Fortunately, the cavalry is already onsite and is planning for that eventuality. I’m referring to the CVE Foundation, a group that was already holding informal discussions before April 15, but which had not been formalized before then. When I saw the first announcement of it on April 16 – the announcement only had one name on it – I thought it might be a late April Fool’s Day prank. But the following week, it became clear that they have a great lineup of heavy hitters currently involved in the program, including CVE.org board members, heads of CVE working groups, and representatives of private industry.

Last week, this became even clearer, when I heard Pete Allor of Red Hat – CVE Board member and Co-Chair of the CVE Vulnerability Conference and Events Working Group – describe the success the CVE Foundation has had so far. They’ve lined up large companies and governments that have said they will be ready with funding when it comes time to make the break with Uncle Sam. (I certainly hope my dear Uncle will get over his hurt feelings and realize that a child leaving home because they have outgrown the need for incubation is an occasion for rejoicing, not barely concealed anger. After all, DNS was nurtured under US government stewardship – via the National Telecommunications and Information Administration, NTIA – for decades. When it was time to let DNS leave home, it found a truly international home in ICANN. At last report, DNS is still alive and well 😊. Perhaps CVE will find a similar home.)

Fortunately, you don’t have to take my word for what Pete said. Last Thursday, Patrick Garrity of VulnCheck posted a link to an excellent podcast in which Pete went into a lot of detail on why the CVE Foundation was…well, founded, and the success they have had so far in lining up support (although he didn’t name potential financial supporters, of course). Then on Friday, Pete elaborated on what he’d said, under withering questioning by me and others at our regularly scheduled OWASP SBOM Forum meeting.

So, you don’t need to worry about whether the CVE Program will survive more than 11 months longer; the answer is yes. The real question is what changes need to be made to the program, both in the intermediate term and the longer term. Those will be interesting discussions, and I’m already trying to spark them. Stay tuned to this channel!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And while you’re at it, please donate as well!

 

Friday, May 2, 2025

NERC CIP in the cloud: What are the real risks?

In a recent post, I described a document that was recently emailed to the “Plus list” for the NERC Standards Drafting Team (SDT) that is working on removing the barriers to full use of the cloud by NERC entities subject to CIP compliance. The well-written document, which has no official status, is a “discussion draft” of a new standard: CIP-016.

Like the NERC Reliability Standards, the document includes a set of suggested requirements, each of which loosely corresponds to one of the existing CIP standards. The author assumes that CIP-016 will apply only to systems deployed in the cloud. On-premises systems will continue to be required to comply with standards CIP-002 through CIP-014, but those standards will now be understood not to apply to those systems’ use of the cloud.

The suggested requirement that refers to CIP-013 reads, “The Responsible Entity shall perform risk assessments of cloud providers...This includes ensuring that all cloud providers comply with relevant security standards (e.g., SOC 2, FedRAMP).”

In other words, to comply with this suggested requirement, the NERC entity will need to:

1.      Perform a risk assessment of each cloud (service) provider, which presumably includes their Platform CSP (e.g., AWS or Azure); and

2.      “Ensure” that they comply with “security standards” like SOC 2 and FedRAMP. Neither of those is really a security standard (SOC 2 is an attestation framework and FedRAMP is a US government authorization program), so I’ll take the liberty of replacing those two names with “ISO 27001”, which definitely is a security standard.

In fact, these two “sub-requirements” amount to the same thing, because a risk assessment always needs to ascertain how well the subject addresses a particular grouping of risks. In some cases, that grouping is called a standard; in others, a framework. Let’s say the NERC entity decides to assess the CSP against ISO 27001. How will they do this?

One way for a NERC entity to assess a CSP based on ISO 27001 is to conduct a full audit; of course, the audit would need to (at least in principle) cover all the CSP’s data centers and systems. Is it likely that AWS or Azure would allow every NERC CIP customer to do this on their own, or that those customers, no matter how large, would have the resources to conduct this audit? Of course not.

The only realistic way for a NERC entity to perform a risk assessment of a CSP, based on ISO 27001 or any other true security standard, is to review the audit report and identify risks revealed in the report. For example, if the report noted a weakness in the CSP’s interactive remote access system, that would be one risk for the entity to make note of.

Since I believe CSPs will usually let customers see their cybersecurity audit reports, this would be a good way for NERC entities to assess their CSPs. However, given that there are only a small number of platform CSPs, why should each customer of “CSP A” have to request the same audit report, review it, and presumably identify a similar set of risks? Instead, why not have NERC itself – or perhaps a third party acting on NERC’s behalf – perform their own assessment of the CSP, then share the results with every NERC entity that utilizes the CSP’s services?

A word from our sponsor: To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please donate here. Any amount is welcome.

Of course, NERC wouldn’t be acting as a gatekeeper, determining whether the CSP is secure enough to merit designation as a “NERC authorized cloud provider” for entities subject to CIP compliance. Instead, it would be performing a service on behalf of many separate NERC entities. More importantly, since the CSP will know that they only need to be assessed once rather than once for every NERC CIP customer they have, they may be more open to having the assessors go beyond just an examination of the audit report.

That is, the CSP may be willing to have NERC ask them questions that are relevant to cloud providers, but are most likely not included in ISO 27001. For example, the Capital One breach in 2019 was due in part to the fact that many customers of one of the major platform CSPs had all made the same mistake when securing their environments in that CSP’s cloud. One of the CSP’s technical staff members, who had been terminated by the CSP, took revenge by breaking into – according to boasts she posted online – over 30 customers who had made the same mistake.

Of course, the fact that so many customers had made the same mistake should be taken as evidence that the CSP needed to beef up their cloud security training for their customers. Thus, one question that the NERC assessors could ask is what security training is provided to all customers at no additional cost, rather than simply being available for a fee. This question is almost certainly not included in an ISO 27001 audit.

Thus, I’m proposing that, in the new “cloud CIP” standard(s) that will be developed, NERC should be tasked with assessing cloud service providers in two ways: by reviewing their ISO 27001 audit report and by asking them questions that are most likely not asked in a normal assessment based on ISO 27001 (the current SDT should start thinking about what these questions should be).

NERC will review the audit report and the CSP’s answers to the cloud-specific questions, to identify risks that apply to this CSP; they will then pass those results to NERC entities that utilize the CSP’s services. NERC will not make any judgment on whether NERC entities can utilize the CSP’s services, or on measures that a NERC entity should take to mitigate the identified risks.

Of course, my suggestions above suffer from one little problem: NERC’s current Rules of Procedure (RoP) would never allow NERC (or even a third party engaged by NERC) to assess a CSP and share the assessment results with NERC entities. As I stated in the post I referred to earlier, I believe that accommodating use of the cloud by all NERC entities that wish to do so will require changes to the RoP – even though doing so may require an additional 1-2 years, beyond what just redrafting the CIP standards would require. This is just one example of that.

If you have comments on this post, please email me at tom@tomalrich.com. And don’t forget to donate!

Wednesday, April 30, 2025

The version range snafu


It’s no exaggeration to say that the CVE Program’s recent near-death experience has set off a flurry of activity in planning for the future of vulnerability identification programs (like CVE) and vulnerability databases (like the NVD, as well as many others). In this recent post, I described three different approaches that different groups are taking toward this goal today. Of course, none of those approaches is better than the others; they’re all necessary.

The approach I prefer to take – partly because I don’t see anyone else taking it now – is to focus on improvements to the CVE Program that can be made by next March, when the MITRE contract to run the program will come up for renewal again. Since I lead the OWASP Vulnerability Database Working Group, we are taking this approach. Instead of focusing on what’s best for the long term (which is what the broader OWASP group is doing), we’re focusing on specific improvements to the program that can be realized by next March, all of which are necessary and many of which have been discussed for a long time.

Perhaps the most important of those improvements is version ranges in software identifiers. Software vulnerabilities are seldom found in a single version of a product. Typically, a vulnerability first appears in version A and remains present in all versions through version B. It is often first discovered in B; the investigators then realize it has been present since version A and identify the entire range A to B as vulnerable.

For this reason, many CVE Records identify a range of versions, rather than just a single version or multiple distinct versions, as vulnerable to the CVE; however, this identification is made in the text of the record, not in the machine-readable CPE identifier that may or may not be included in the record.

This omission isn’t the fault of CPE, since CPE provides the capability to identify a version range, not just a single version, as vulnerable. However, this capability is not used very often, for the simple reason that there is little or no tooling that allows end users to take advantage of a version range included in a CPE name. The same goes for the purl identifier, which is widely used in open source vulnerability databases: purl supports the VERS specification of version ranges, but in practice VERS is seldom used, due to the same lack of end user tooling.
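For what a machine-readable version range looks like, here is a short sketch using the univers library from the AboutCode project, which implements the VERS syntax. The API shown here reflects my reading of the library’s documentation, so treat the details as an assumption:

```python
# pip install univers
from univers.version_range import VersionRange
from univers.versions import PypiVersion

# "Every PyPI version >= 2.2 and < 3.4 is affected", as one vers string:
affected = VersionRange.from_string("vers:pypi/>=2.2|<3.4")

print(PypiVersion("2.5") in affected)    # True
print(PypiVersion("3.4.1") in affected)  # False
```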

Why is there very little (or even no) end user tooling that can take advantage of version ranges in software identifiers found in vulnerability records? I learned the answer to that question when I asked vulnerability management professionals what advantage having such a capability in their tools would provide to end users (i.e., what the use case is for machine-readable descriptions of version ranges).

When I have asked this question, few if any of these professionals have even been able to describe what that advantage would be. It seems clear to me that, if few people can even articulate why a particular capability is required, tool developers are unlikely to try to include that capability in their products.

However, I can at least articulate how an end user organization could utilize version ranges included in a vulnerability notification like a CVE record: They will use it when a) a vulnerability has been identified in a range of versions of Product ABC, and b) the organization utilizes one or more versions of ABC and wants to know whether the version(s) they use is vulnerable to the CVE described in the notification.

Of course, in many or even most cases, the answer to this question is easily obtained. For example, if the product ABC version range included in the record for CVE-2025-12345 is 2.2 to 3.4 and the organization uses version 2.5, there’s no question that it falls within the range. But how about when the version in question is

1.      Version 2.5a?

2.      Version 3.1.1?

3.      Version 3.41?

More generally, the question is, “Of all the instances of Product ABC running in our organization, which ones are vulnerable to CVE-2025-12345?” Ideally, an automated tool would a) interpret a version range described in a CPE found in the CVE record, b) compare that interpretation with every instance of ABC found on the organization’s network, and c) quickly determine which instances are vulnerable and which are not.

How can the inherently ambiguous position of the three version strings listed above be resolved? The supplier of the product needs to follow a specific “ordering rule” when they assign version numbers, and they need to tell their customers – as well as other organizations that need to know – what that rule is. The portion of the rule that applies to each of the above strings might read (the sketch after this list contrasts two such rules):

1.      “A version string that includes a number, but not a letter, precedes a string that includes the same number but includes a letter as well.”

2.      “The value of the first two digits in the version string determines whether that string precedes or follows any other string(s).”

3.      “The precedence of the version string is always determined by the value of the string itself.”
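Here is a sketch that contrasts two ordering rules: PEP 440, Python’s standardized rule (implemented by the packaging library), and a toy comparator implementing rule 1 above. Neither is any real supplier’s rule; the point is that the same strings sort differently under different rules, so a tool must know which rule applies:

```python
from packaging.version import Version

low, high = Version("2.2"), Version("3.4")
for s in ["2.5a", "3.1.1", "3.41"]:
    print(s, low <= Version(s) <= high)
# 2.5a  True  -- PEP 440 reads "2.5a" as a pre-release that sorts BEFORE 2.5
# 3.1.1 True
# 3.41  False -- PEP 440 compares 41 > 4 numerically, so 3.41 comes AFTER 3.4

def rule1_precedes(plain: str, lettered: str) -> bool:
    """Rule 1 above: '2.5' precedes '2.5a' (the bare number comes first).
    That is the OPPOSITE of PEP 440, where the pre-release 2.5a precedes
    2.5 -- two rules, two different orderings of the same strings."""
    return lettered.startswith(plain) and lettered[len(plain):].isalpha()

print(rule1_precedes("2.5", "2.5a"))  # True
```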

Of course, for an end user tool to properly interpret each version range, it would need access to the supplier’s ordering rule. If these were sufficiently standardized, rather than always being custom created, it might be possible to create a tool that would always properly interpret a version range.[i] However, they are not standardized now.

This means that the developer of an end user tool that can answer whether a particular version falls within a range would need to coordinate with the supplier of every product the tool might scan or otherwise address, to make sure they always have the current version of that supplier’s ordering rule. Doing this would be a nightmare and is therefore not likely to happen.

This would be much less of a nightmare if the ordering rules were standardized, along with the process by which they’re created and updated by suppliers, as well as utilized by end users and their service providers. However, that will require a lot of work and coordination. It’s not likely to happen very soon.

Ironically, all the progress that has been made in version range specification has been on the supplier side. A lot of work has gone into making sure that CPEs and purls (and other products like SBOM formats) are able to specify version ranges in a manner that is easily understandable by human users. However, that progress is mostly for naught, given that the required tooling on the end user side is probably years away, due to the current lack of standards for creating and utilizing ordering rules.

Unfortunately, I have to say it’s probably wasted effort to spend much time today on specifying version ranges on the supplier end. The best way to get version ranges moving is probably to get a group together to develop specifications for ordering rules.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] If you are a fan of “semantic versioning” – a versioning scheme often used in open source projects – you might think that “ordering rules” are a primitive workaround. After all, if all software suppliers followed semantic versioning, they would all in effect be following the same ordering rule. However, commercial suppliers often decide that semantic versioning is too restrictive, since it only allows a fixed number of versions between two endpoint versions.

Often, a commercial supplier will want to identify patched versions, upgrade versions, or even build numbers as separate versions. Semantic versioning provides three fields – X, Y and Z – in the version string “X.Y.Z”; moreover, the three fields have different meanings (major, minor and patch respectively), so one field can’t “overflow” into its neighbor. While open source projects may not find three fields too limiting, commercial suppliers sometimes want more.

Tuesday, April 29, 2025

I need your help!

Since I started this blog in 2013, I’ve never asked for donations to support my work. However, because of a recent financial change, I’m now doing exactly that. I’m not looking for large donations - just a lot of smaller ones! But large or small, all donations are welcome. Please read this and consider donating. 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Saturday, April 26, 2025

NERC CIP: We’re as far from the cloud as ever

This past Wednesday, the NERC “Risk Management for Third-Party Cloud Services” Standards Drafting Team (SDT) emailed a document to the “Plus List”, which seems to be a starting point for discussions of a new CIP-016 standard to address the problems with use of the cloud by NERC entities.

While I admit I have not been able to attend any of the SDT meetings for months, and while I also appreciate that the team[i] is anxious to create something – something! – that moves them forward, I regret to say I don’t think this document moves them forward at all. Here are the main reasons why I say that.

The primary problem is that the draft standard is written like a NIST framework. That is, it seems to assume that the NERC auditors will audit its requirements in the same way that a federal agency audits itself for compliance with NIST 800-53. For example, control AC-1 in 800-53 reads:

The organization:

a. Develops, documents, and disseminates to [Assignment: organization-defined personnel or roles]:

1. An access control policy that addresses purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance; and

2. Procedures to facilitate the implementation of the access control policy and associated access controls; and

b. Reviews and updates the current:

1. Access control policy [Assignment: organization-defined frequency]; and

2. Access control procedures [Assignment: organization-defined frequency].

This requirement assumes that:

i.      It is generally clear to both the auditor and the auditee what an access control policy should contain. More specifically, it is clear to both parties what “purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance” should be addressed by the policy.

ii.      Both auditor and auditee generally understand what procedures are required to “facilitate the implementation of the access control policy and associated access controls.”

iii.      Auditor and auditee generally agree on what constitutes an adequate “review and update” of access control policies and procedures. For example, the auditor isn’t expecting the auditee to rewrite the policy from the ground up, and the auditee isn’t expecting to get away with skimming through the policy and just giving it their stamp of approval.

As far as most federal government agencies are concerned, the above three assumptions may well be valid. However, I strongly doubt they’re valid for NERC entities, who usually take the “Trust in Allah, but tie your camel” approach to dealing with auditors. Specifically, I know that one reason some of the NERC CIP requirements are very prescriptive is that NERC entities are afraid of requirements that give the auditors leeway in determining what a requirement means. Moreover, the auditors often share this fear, since they don’t want to be blamed for misinterpreting CIP requirements. Therefore, they usually want CIP requirements that constrain them enough that there can’t be much dispute over how a requirement should be interpreted.

However, while keeping in mind that this document is just a discussion draft and will never get beyond that stage, it’s important to note how it would likely result in many auditing controversies if it became a standard. Here are three examples:

1. There are many statements that are clearly open to big differences in interpretation. For example, Section 2.2 Scope reads, “CIP-016 applies to any systems, applications, or data stored, processed, or transmitted in cloud environments or hybrid cloud infrastructures. Systems that remain fully on-premise are not subject to this standard.”

Don’t “systems that remain fully on-premise(s)” often use “applications, or data stored, processed, or transmitted in cloud environments or hybrid cloud infrastructures”? If that use isn’t subject to the standard, then what is? Yet, by saying that on-prem systems aren’t subject to complying with CIP-016, it sounds like they’re immune to threats that come through use of the cloud.

Is it really true that only systems that are themselves located in the cloud (which today includes 0 high or medium impact systems) are affected by cloud-based threats? If so, that seems like a great argument for permanently prohibiting BES Cyber Systems, EACMS and PACS from being located in the cloud. Of course, that’s exactly the situation we have today. Why bother with changing the standards at all, since today they effectively prohibit use of the cloud by entities with high and medium impact systems?

2. The draft standard relies heavily on 20-25 new terms, each of which would have to be debated and voted on – then approved by FERC – before the standard could be enforced. I remember the heated debates over the relatively small number of new terms introduced with CIP version 5, especially the words “programmable” in “Cyber Asset” and “routable” in “External Routable Connectivity”. The debates over these two words were probably more heated than the debates over all the v5 requirements put together. Moreover, the debates over those two words literally went on for years; they were never resolved with a new definition.

The lesson of that experience is that it doesn’t help to “clarify” a requirement by introducing new Glossary terms, unless those terms are already widely understood. This is especially the case when a new Glossary term itself introduces new terms. For example, the undefined new term “Cloud Perimeter Control Plane” in the draft CIP-016 includes another undefined new term, “virtual security perimeter”. Both terms will need to be debated and balloted multiple times, should they be included in an actual draft of CIP-016.[ii]

3. One interesting requirement is R12, which is described as “CIP-013 equivalent”. It reads:

The Responsible Entity shall perform risk assessments of cloud providers and ensure that supply chain risks, including third-party vendors and subcontractors, are mitigated. This includes ensuring that all cloud providers comply with relevant security standards (e.g., SOC 2, FedRAMP).

My first reaction is that this is going to require the CSP to have a huge amount of involvement with each Responsible Entity customer. This includes:

i.      Sharing information on their vendors and subcontractors, so the RE can “ensure” (a dangerous word to include in a mandatory requirement!) that those risks have been “mitigated”. How will the RE do this? Surely not by auditing each of the CSP’s thousands of vendors and subcontractors!

ii.      Providing the RE with enough information that they can “ensure” the CSP complies with relevant security standards. Of course, the CSP should already have evidence of “compliance” with SOC 2 and FedRAMP – although neither of those is a standard subject to compliance (a better example would be ISO 27001).

iii.      However, the words “all cloud providers” will normally include more than the platform CSP (e.g., AWS or Azure). They also include any entity that provides services in the cloud – for example, SaaS providers, security service providers, etc. Is the Responsible Entity really going to have to ensure that each of these cloud providers “complies” with SOC 2 and FedRAMP, to say nothing of other “relevant security standards”?

Of course, this document is just meant to be the start of a discussion, so it would be unfair to treat it as if it were a draft of a proposed new standard. However, I think there is one overarching lesson to be taken away from this (which I have pointed out multiple times before): Any attempt to address the cloud in one or more NERC CIP standards is inevitably going to require changes to how the standards are audited. These changes will in turn require changes to the NERC Rules of Procedure and especially CMEP (the Compliance Monitoring and Enforcement Program).

Because of this, any draft of a new CIP standard(s) to address use of the cloud needs to include a discussion of what changes to the Rules of Procedure (RoP) and CMEP are required for the new requirements to be auditable. The primary RoP change that will be needed – and it has been needed for years – is a description of how risk-based requirements can be audited[iii]. There is no way that non-risk-based CIP requirements will ever work in the cloud.

Moreover, the process of making the RoP changes needs to get underway as soon as possible after the new standard(s) is drafted. RoP changes rarely happen, but it’s likely these changes will take at least a couple of years by themselves. Since I’m already saying that the CIP changes alone won’t come into effect before 2031 and since it’s possible the RoP changes will not start until the CIP changes have been approved by FERC, this means it might be 2032 or even 2033 before the entire package of both CIP and RoP changes is in place. Wouldn’t that be depressing?

It certainly would be depressing, but I’ll point out that it’s not likely the NERC CIP community will need to wait until 2033, 2032, or even 2031 for new “cloud CIP” standards to be in place. It’s possible they’ll come sooner than that, mainly because NERC could be forced to take a shortcut. There’s at least one “In case of fire, break glass” provision in the RoP, which allows NERC – at the direction of the Board of Trustees – to accelerate the standards development process, in the case where the lack of a standard threatens to damage BES reliability itself.

Needless to say, this provision has never been invoked (at least not regarding the CIP standards). However, the time when it’s needed may be fast approaching. See this post. 

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you are involved with NERC CIP compliance and would like to discuss issues related to “cloud CIP”, please email me at tom@tomalrich.com.


[i] The email made it clear that this is primarily the product of one team member, so it has no official status.

[ii] Of course, there’s no assurance now that the new “cloud CIP” standards will include a CIP-016 that looks anything like this one.

[iii] NERC’s term for this is “objectives-based”. They are basically equivalent.