Wednesday, April 30, 2025

The version range snafu


It’s no exaggeration to say that the CVE Program’s recent near-death experience has set off a flurry of activity in planning for the future of vulnerability identification programs (like CVE) and vulnerability databases (like the NVD, as well as many others). In this recent post, I described three different approaches that different groups are taking toward this goal today. Of course, none of those approaches is better than the others; they’re all necessary.

The approach I prefer – partly because I don’t see anyone else taking it now – is to focus on improvements to the CVE Program that can be made by next March, when the MITRE contract to run the program comes up for renewal again. This is the approach the OWASP Vulnerability Database Working Group, which I lead, is taking. Rather than asking what’s best for the long term (which is what the broader OWASP group is doing), we’re identifying specific improvements to the program that can be realized by next March, all of which are necessary and many of which have been discussed for a long time.

Perhaps the most important of those improvements is support for version ranges in software identifiers. Software vulnerabilities are seldom found in a single version of a product. Instead, a vulnerability is introduced in version A and remains present in every version up to version B. The vulnerability is often first identified in B; the investigators then realize it has been present since version A, so they identify the entire range A through B as vulnerable.

For this reason, many CVE Records identify a range of versions, rather than just a single version or multiple distinct versions, as vulnerable to the CVE; however, this identification is made in the text of the record, not in the machine-readable CPE identifier that may or may not be included in the record.

This omission isn’t the fault of CPE, since CPE provides the capability to identify a version range, not just a single version, as vulnerable. However, this capability is rarely used, for the simple reason that there is little or no tooling available that allows end users to take advantage of a version range included in a CPE name. The same goes for the purl identifier, which is widely used in open source vulnerability databases. Even though purl supports the vers specification of version ranges, in practice that capability is seldom exercised, due to the same lack of end user tooling.
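For illustration, a vers range covering versions 2.2 through 3.4 of a PyPI package would look something like the following (I’m using the syntax from the draft vers specification; treat the exact string as illustrative rather than authoritative):

    vers:pypi/>=2.2|<=3.4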

Why is there very little (or even no) end user tooling that can take advantage of version ranges in software identifiers found in vulnerability records? I learned the answer to that question when I asked vulnerability management professionals what advantage having such a capability in their tools would provide to end users (i.e., what the use case is for machine-readable descriptions of version ranges).

When I have asked this question, few if any of these professionals have even been able to describe what that advantage would be. It seems clear to me that, if few people can even articulate why a particular capability is required, tool developers are unlikely to try to include that capability in their products.

However, I can at least articulate how an end user organization could utilize version ranges included in a vulnerability notification like a CVE record: they will use them when a) a vulnerability has been identified in a range of versions of Product ABC, and b) the organization uses one or more versions of ABC and wants to know whether the version(s) they use are vulnerable to the CVE described in the notification.

Of course, in many or even most cases, the answer to this question is easily obtained. For example, if the product ABC version range included in the record for CVE-2025-12345 is 2.2 to 3.4 and the organization uses version 2.5, there’s no question that it falls within the range. But how about when the version in question is

1. Version 2.5a?

2. Version 3.1.1?

3. Version 3.41?

More generally, the question is, “Of all the instances of Product ABC running in our organization, which ones are vulnerable to CVE-2025-12345?” Ideally, an automated tool would a) interpret a version range described in a CPE found in the CVE record, b) compare that interpretation with every instance of ABC found on the organization’s network, and c) quickly determine which instances are vulnerable and which are not.
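To see why this is harder than it looks, consider a toy comparison (this is illustrative Python, not any real tool): three plausible ways to decide whether a version string falls in the range 2.2 to 3.4, which disagree with each other – or fail outright – on exactly the strings listed above.

    # Three plausible interpretations of "does version v fall in 2.2-3.4?"
    LO, HI = "2.2", "3.4"

    def by_dotted_tuple(v):
        # Treat each dot-separated field as an integer.
        key = lambda s: tuple(int(p) for p in s.split("."))
        return key(LO) <= key(v) <= key(HI)

    def by_decimal_value(v):
        # Treat the whole string as one decimal number.
        return float(LO) <= float(v) <= float(HI)

    def by_raw_string(v):
        # Plain lexicographic comparison of the raw strings.
        return LO <= v <= HI

    for v in ("2.5a", "3.1.1", "3.41"):
        for rule in (by_dotted_tuple, by_decimal_value, by_raw_string):
            try:
                verdict = rule(v)
            except ValueError:
                verdict = "unparseable"
            print(f"{v:7} {rule.__name__:17} {verdict}")

The point: “2.5a” is unparseable under two of the three rules, and “3.1.1” under one of them; no rule can be right without knowing the supplier’s intent.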

How can the inherent ambiguity of the three version strings listed above be resolved? The supplier of the product needs to follow a specific “ordering rule” when they assign version numbers to products; moreover, they need to inform their customers – as well as other organizations that need to know this – what that rule is. The portion of the rule that applies to each of the above strings might be (a code sketch of the first rule follows the list):

1. “A version string that includes a number, but not a letter, precedes a string that includes the same number but includes a letter as well.”

2. “The value of the first two digits in the version string determines whether that string precedes or follows any other string(s).”

3. “The precedence of the version string is always determined by the value of the string itself.”
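Here is a minimal sketch of what rule 1 might look like once encoded in software. To be clear, no standard encoding for ordering rules exists today – that gap is the point of this post – so the function names and the rule’s details are mine:

    import re

    # Hypothetical encoding of ordering rule 1: a numeric field without a
    # trailing letter precedes the same field with a letter appended, so
    # "2.5" < "2.5a" < "2.5b" < "2.6".
    def rule1_key(version):
        parts = []
        for field in version.split("."):
            m = re.fullmatch(r"(\d+)([a-z]*)", field)
            if not m:
                raise ValueError(f"rule 1 cannot order {version!r}")
            parts.append((int(m.group(1)), m.group(2)))
        return tuple(parts)

    def in_range(version, lo, hi):
        return rule1_key(lo) <= rule1_key(version) <= rule1_key(hi)

    print(in_range("2.5a", "2.2", "3.4"))   # True: 2.2 < 2.5 < 2.5a < 3.4

A different supplier could publish a different rule for the same strings, which is exactly why a tool can’t guess.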

Of course, for an end user tool to properly interpret each version range, it would need access to the supplier’s ordering rule. If ordering rules were sufficiently standardized, rather than always being custom created, it might be possible to create a tool that would always properly interpret a version range.[i] However, they are not standardized now.

This means that the developer of an end user tool that can answer whether a particular version falls within a range will need to coordinate with the supplier of every product that might be scanned or otherwise addressed by their tool, to make sure they always have the current version of that supplier’s ordering rule. Doing this would be a nightmare and is therefore not likely to happen.

This would be much less of a nightmare if the ordering rules were standardized, along with the process by which suppliers create and update them and by which end users and their service providers consume them. However, that will require a lot of work and coordination. It’s not likely to happen very soon.

Ironically, all the progress that has been made in version range specification has been on the supplier side. A lot of work has gone into making sure that CPEs and purls (and other products like SBOM formats) are able to specify version ranges in a manner that is easily understandable by human users. However, that progress is mostly for naught, given that the required tooling on the end user side is probably years away, due to the current lack of standards for creating and utilizing ordering rules.

Unfortunately, I have to say it’s probably wasted effort to spend much more time today on specifying version ranges on the supplier end. The best way to get version ranges moving is probably to get a group together to develop specs for ordering rules.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] If you are a fan of “semantic versioning” – a versioning scheme often used in open source projects – you might think that “ordering rules” are a primitive workaround. After all, if all software suppliers followed semantic versioning, they would all in effect be following the same ordering rule. However, commercial suppliers often decide that semantic versioning is too restrictive, since it only allows a fixed number of versions between two endpoint versions.

Often, a commercial supplier will want to identify patched versions, upgrade versions, or even build numbers as separate versions. Semantic versioning provides three fields - X, Y and Z - in the version string “X.Y.Z”; moreover, the three fields have different meanings (major, minor and patch respectively), so one field can’t “overflow” into its neighbor. While open source projects may not find these three fields to be too limiting, commercial suppliers sometimes want more than three fields.
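To make this concrete, here is what semantic-versioning ordering amounts to in a few lines of plain Python (my own minimal sketch, not a full implementation of the semver spec, which also defines pre-release and build fields):

    # Minimal semver-style ordering: exactly three numeric fields,
    # compared field by field (major, minor, patch).
    def semver_key(version):
        major, minor, patch = version.split(".")   # must be exactly three fields
        return (int(major), int(minor), int(patch))

    print(semver_key("1.9.0") < semver_key("1.11.1"))   # True: numeric, not lexicographic

    try:
        semver_key("2.5a")            # a commercial-style string simply doesn't fit
    except ValueError as e:
        print("not valid semver:", e)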

Tuesday, April 29, 2025

I need your help!

Since I started this blog in 2013, I’ve never asked for donations to support my work. However, because of a recent financial change, I’m now doing exactly that. I’m not looking for large donations - just a lot of smaller ones! But large or small, all donations are welcome. Please read this and consider donating. 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Saturday, April 26, 2025

NERC CIP: We’re as far from the cloud as ever

This past Wednesday, the NERC “Risk Management for Third-Party Cloud Services” Standards Drafting Team (SDT) emailed a document to the “Plus List” that seems to be a starting point for discussions of a new CIP-016 standard to address the problems with use of the cloud by NERC entities.

While I admit I have not been able to attend any of the SDT meetings for months, and while I also appreciate that the team[i] is anxious to create something – something! – that moves them forward, I regret to say I don’t think this document moves them forward at all. Here are the main reasons why I say that.

The primary problem is that the draft standard is written like a NIST framework. That is, it seems to assume that the NERC auditors will audit its requirements in the same way that a federal agency audits itself for compliance with NIST 800-53. For example, control AC-1 in 800-53 reads:

The organization:

a. Develops, documents, and disseminates to [Assignment: organization-defined personnel or roles]:

1. An access control policy that addresses purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance; and

2. Procedures to facilitate the implementation of the access control policy and associated access controls; and

b. Reviews and updates the current:

1. Access control policy [Assignment: organization-defined frequency]; and

2. Access control procedures [Assignment: organization-defined frequency].

This requirement assumes that:

i. It is generally clear to both the auditor and auditee what an access control policy should contain. More specifically, it is clear to both parties what “purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance” should be addressed by the policy.

ii. Both auditor and auditee generally understand what procedures are required to “facilitate the implementation of the access control policy and associated access controls.”

iii. Auditor and auditee generally agree on what constitutes an adequate “review and update” of access control policies and procedures. For example, the auditor isn’t expecting the auditee to rewrite the policy from the ground up, and the auditee isn’t expecting to get away with skimming through the policy and just giving it their stamp of approval.

As far as most federal government agencies are concerned, the above three assumptions may well be valid. However, I strongly doubt they’re valid for NERC entities, who usually take the “Trust in Allah, but tie your camel” approach to dealing with auditors. Specifically, I know that one reason some of the NERC CIP requirements are very prescriptive is that NERC entities are afraid of requirements that give the auditors leeway in determining what a requirement means. Moreover, the auditors often share this fear, since they don’t want to be blamed for misinterpreting CIP requirements. Therefore, they usually want CIP requirements that constrain them enough that there can’t be much dispute over how a requirement should be interpreted.

However, while keeping in mind that this document is just a discussion draft and will never get beyond that stage, it’s important to note how it would likely produce many auditing controversies if it were to become a standard. Here are three examples:

1. There are many statements that are clearly open to big differences in interpretation. For example, Section 2.2 Scope reads, “CIP-016 applies to any systems, applications, or data stored, processed, or transmitted in cloud environments or hybrid cloud infrastructures. Systems that remain fully on-premise are not subject to this standard.”

Don’t “systems that remain fully on-premise(s)” often use “applications, or data stored, processed, or transmitted in cloud environments or hybrid cloud infrastructures”? If that use isn’t subject to the standard, then what is? Yet, by saying that on-prem systems aren’t subject to complying with CIP-016, it sounds like they’re immune to threats that come through use of the cloud.

Is it really true that only systems that are themselves located in the cloud (which today includes 0 high or medium impact systems) are affected by cloud-based threats? If so, that seems like a great argument for permanently prohibiting BES Cyber Systems, EACMS and PACS from being located in the cloud. Of course, that’s exactly the situation we have today. Why bother with changing the standards at all, since today they effectively prohibit use of the cloud by entities with high and medium impact systems?

2. The draft standard relies heavily on 20-25 new terms, each of which would have to be debated and voted on - then approved by FERC - before the standard could be enforced. I remember the heated debates over the relatively small number of new terms introduced with CIP version 5, especially the words “programmable” in “Cyber Asset” and “routable” in “External Routable Connectivity”. The debates over these two words were probably more heated than the debates over all the v5 requirements put together. Moreover, the debates over those two words literally went on for years; they were never resolved with a new definition.

The lesson of that experience is that it doesn’t help to “clarify” a requirement by introducing new Glossary terms, unless those terms are already widely understood. This is especially the case when a new Glossary term itself introduces new terms. For example, the undefined new term “Cloud Perimeter Control Plane” in the draft CIP-016 includes another undefined new term, “virtual security perimeter”. Both terms will need to be debated and balloted multiple times, should they be included in an actual draft of CIP-016.[ii]

3. One interesting requirement is R12, which is described as “CIP-013 equivalent”. It reads:

The Responsible Entity shall perform risk assessments of cloud providers and ensure that supply chain risks, including third-party vendors and subcontractors, are mitigated. This includes ensuring that all cloud providers comply with relevant security standards (e.g., SOC 2, FedRAMP).

My first reaction is that this is going to require the CSP to have a huge amount of involvement with each Responsible Entity customer. This includes:

i. Sharing information on their vendors and subcontractors, so the RE can “ensure” (a dangerous word to include in a mandatory requirement!) that those risks have been “mitigated”. How will the RE do this? Surely not by auditing each of the CSP’s thousands of vendors and subcontractors!

ii. Providing the RE with enough information that they can “ensure” the CSP complies with relevant security standards. Of course, the CSP should already have evidence of “compliance” with SOC 2 and FedRAMP – although neither of those is a standard subject to compliance (a better example would be ISO 27001).

iii. However, the words “all cloud providers” will normally include more than the platform CSP (e.g., AWS or Azure). They also include any entity that provides services in the cloud – for example, SaaS providers, security service providers, etc. Is the Responsible Entity really going to have to ensure that each of these cloud providers “complies” with SOC 2 and FedRAMP, to say nothing of other “relevant security standards”?

Of course, this document is just meant to be the start of a discussion, so it would be unfair to treat it as if it were a draft of a proposed new standard. However, I think there is one overarching lesson to be taken away from this (which I have pointed out multiple times before): Any attempt to address the cloud in one or more NERC CIP standards is inevitably going to require changes to how the standards are audited. These changes will in turn require changes to the NERC Rules of Procedure and especially CMEP (the Compliance Monitoring and Enforcement Program).

Because of this, any draft of a new CIP standard(s) to address use of the cloud needs to include a discussion of what changes to the Rules of Procedure (RoP) and CMEP are required for the new requirements to be auditable. The primary RoP change that will be needed – and it has been needed for years – is a description of how risk-based requirements can be audited[iii]. There is no way that non-risk-based CIP requirements will ever work in the cloud.

Moreover, the process of making the RoP changes needs to get underway as soon as possible after the new standard(s) is drafted. RoP changes rarely happen, and it’s likely these changes will take at least a couple of years by themselves. Since I’m already saying that the CIP changes alone won’t come into effect before 2031, and since it’s possible the RoP changes won’t start until the CIP changes have been approved by FERC, it might be 2032 or even 2033 before the entire package of both CIP and RoP changes is in place. Wouldn’t that be depressing?

It certainly would be depressing, but I’ll point out that it’s not likely the NERC CIP community will need to wait until 2033, 2032, or even 2031 for new “cloud CIP” standards to be in place. It’s possible they’ll come sooner than that, mainly because NERC could be forced to take a shortcut. There’s at least one “In case of fire, break glass” provision in the RoP, which allows NERC – at the direction of the Board of Trustees – to accelerate the standards development process, in the case where the lack of a standard threatens to damage BES reliability itself.

Needless to say, this provision has never been invoked (at least not regarding the CIP standards). However, the time when it’s needed may be fast approaching. See this post. 

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you are involved with NERC CIP compliance and would like to discuss issues related to “cloud CIP”, please email me at tom@tomalrich.com.


[i] The email made it clear that this is primarily the product of one team member, so it has no official status.

[ii] Of course, there’s no assurance now that the new “cloud CIP” standards will include a CIP-016 that looks anything like this one.

[iii] NERC’s term for this is “objectives-based”. The two terms are basically equivalent.

Friday, April 25, 2025

Maybe it’s not so bad after all


Earlier this week, I wrote a post that pointed to the strong likelihood that the MITRE contract to run the CVE Program will not be renewed next March (even though it was renewed last week, despite an initial announcement that it might not be); I called for planning to start now to figure out what can replace it. I did this on the assumption that there is no group doing that already. However, it turns out I was wrong.

A friend emailed me yesterday to ask if I knew about the CVE Foundation. The initial news reports about the contract being terminated pointed to this group as one that said they would be able to take over if the termination actually happened. However, since the reports only named one individual (the reports said there were other people involved, but they weren’t ready to share their names), I didn’t know how much credence to put in their assertion.

It turns out that I should have kept following the story. Yesterday, the same friend pointed me to the list of names now found in the FAQ. Some of the most important members of the CVE Program are shown as participants (the three leading private industry representatives on the CVE Board are listed as officers of the corporation); this is clearly a serious organization.

In addition, my friend said the group has good reason to believe that, should the MITRE contract not be renewed next March, the necessary funding will be there for them to run the program (including MITRE, of course) on their own.

Of course, this is good news. I’ll also add that there are still a lot of questions that should be answered to make the CVE Program better. However, in asking and answering those questions, we at least won’t have to worry about the CVE Program disappearing beneath our feet. 

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Thursday, April 24, 2025

Meanwhile, back at the NVD

While the big news in the vulnerability management world last week was the near death of the CVE Program, this temporarily overshadowed the ongoing saga of the National Vulnerability Database (NVD). Since February 12, 2024, the NVD has stopped reliably performing one of its most important functions: adding CPE names (machine readable software identifiers) to new CVE (vulnerability) records. For a discussion of why having a CPE name with every CVE Record is so important, see this post.

At the end of December, in the post I just linked, I estimated that the NVD’s backlog of CVE records without CPE names was around 22,000, or 55% of the approximately 40,000 new CVE Records created in 2024. In my most recent post on the NVD’s problems, written on March 19, I admitted I couldn’t estimate the backlog, although I noted that the “vulnerability historian” Brian Martin thought the NVD had stopped creating new CPE names altogether.

Brian has kept following the NVD (which he says has “returned”). Last week, he put up this post on LinkedIn. It illustrates how the NVD has been doing its best to disguise the huge backlog of “unenriched” CVE Records (i.e., those that have not had CPE names and CVSS scores added to them – both of which are NVD functions). Without going into details, Brian said the backlog of unenriched CVEs (since early 2024) was now 33,699. So, far from making progress getting rid of the backlog in 2025, the NVD has dug the hole deeper.

Of course, the backlog number is more meaningful when expressed as a percentage of new CVE Records published since early 2024. Since about 40,000 new records were published in full-year 2024 and we recently finished the first quarter of 2025, I estimate there have been about 50,000 new CVEs published since early 2024. This means the 33,699-record backlog constitutes 67% of the new CVE Records published since the NVD’s problems began last February 12.

In other words, the backlog as a percentage of new CVE records has grown by 12 percentage points. This obviously discredits the NVD’s preferred excuse for their problems: that the volume of new CVE records has jumped and they’re struggling to keep up with it. That might explain the growth in the backlog itself, but it doesn’t explain a significant increase (in just three months!) in the percentage of CVE records that are unenriched (i.e., are in the backlog).
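Here is the back-of-the-envelope arithmetic behind those percentages, using my estimates above (remember that the 50,000 figure is itself an estimate):

    new_cves_2024 = 40_000                    # approx. full-year 2024 count
    new_cves_since_feb_2024 = 50_000          # adds an estimated Q1 2025

    pct_dec = 22_000 / new_cves_2024 * 100            # ~55% at end of 2024
    pct_apr = 33_699 / new_cves_since_feb_2024 * 100  # ~67% now
    print(f"{pct_dec:.0f}% -> {pct_apr:.0f}%, +{pct_apr - pct_dec:.0f} points")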

So what’s the NVD’s plan for finally eliminating this backlog? The last time they said anything about this was March 19, when they commented on their website:

We are currently processing incoming CVEs at roughly the rate we had sustained prior to the processing slowdown in spring and early summer of 2024. However, CVE submissions increased 32 percent in 2024, and that prior processing rate is no longer sufficient to keep up with incoming submissions. As a result, the backlog is still growing.

We anticipate that the rate of submissions will continue to increase in 2025. The fact that vulnerabilities are increasing means that the NVD is more important than ever in protecting our nation’s infrastructure. However, it also points to increasing challenges ahead.

To address these challenges, we are working to increase efficiency by improving our internal processes, and we are exploring the use of machine learning to automate certain processing tasks.

There’s one phrase in this statement that I strongly agree with: “the NVD is more important than ever in protecting our nation’s infrastructure.” That’s why this whole debacle is so appalling.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, April 23, 2025

March 2026

There’s widespread agreement within the vulnerability management community that the CVE Program’s near-death experience last week is a wake-up call to do something, although there isn’t agreement on what “something” is. I now see three possible courses of action. They are all valid, but they differ according to the time horizon involved:

1. Some people are focused on making sure the CVE Program, as currently constituted, can survive the many attacks that are likely to be aimed at it in the next year.  While there might be some tweaks made, the point is to keep the program running, not to make substantial changes. I support the survival of the current CVE Program, and if survival is the best that can be hoped for (although hopefully with the CVE Record Format amended to permit purl identifiers, along with CPEs), I can’t argue with that outcome.

2. Other people – notably the OWASP Board – are taking exactly the opposite approach: they point out that software vulnerabilities as currently conceived form just one portion of “the demands of a rapidly evolving global threat landscape.” They’re calling for redesigning the CVE Program as a federated model that will address new threats like weak cryptography and AI weaknesses. My idea for the Global Vulnerability Database is very much like this, especially in that it calls for a federated approach. I will definitely participate in this effort, since it is what is needed in the long run. However, there’s no doubt this will require a years-long effort.

3. A third group of people, which includes me, is more focused on March 2026. That’s when the MITRE contract will need to be renewed again. We all hope the contract will be automatically renewed, but after the events last week we would be fools to assume that will happen. Instead, we need to assume the contract won’t be renewed. This means that in March 2026, we will need to have an alternative to the CVE Program specified and ready to implement.

I think there are two realistic alternatives to the current CVE Program:

i. A program that adheres as closely as possible to the current CVE Program, warts and all.

ii. A program that follows the outline of the current program but incorporates changes that have been discussed and planned beforehand. In other words, rather than just implementing a CVE program much like the one we have in place now, we should plan ahead for March 2026, so that we implement improvements that can’t be made while the current program is up and running. Doing that isn’t as satisfying as redesigning the program from the ground up, but it can at least be accomplished by next March.

Therefore, I’m suggesting that the vulnerability management community start discussing “intermediate term” questions like the following:

a. Should the organization that identifies vulnerabilities (e.g., the CVE Program) be separate from the vulnerability database (e.g., the NVD), as is the case now? I should point out that this separation is unusual in the vulnerability database world, since most other vulnerability databases (besides the ones that are modeled on the NVD) are focused on open source software and curate their own vulnerabilities. On the other hand, none of these other databases comes close to approximating the scale of the CVE Program and the NVD.

b. The CVE Program is there to serve end user organizations, but with 290,000 CVEs today, the only effective way to do that is through automation of the end user’s VM processes. How can the CVE Program focus on end-to-end automated vulnerability management for user organizations? Machine readable vulnerability notifications, prepared by or on behalf of suppliers today, can specify individual versions or version ranges. However, can today’s end user vulnerability management systems utilize version range notifications in an automated manner? If so, what goal will they accomplish by doing so?

c. Since the user organization usually has the choice of whether to apply a patch, patches can’t be represented the same as actual versions. On the other hand, vulnerability records need to be able to represent which previous patches have been applied. How can these contradictory goals both be accomplished in a machine readable fashion?

d. Can vulnerability identifiers, such as CVEs and GitHub Security Advisories (GHSA), be “harmonized”?

e. Can software identifiers, including CPE and purl, be harmonized?

f. If neither vulnerability nor software identifiers can be harmonized, how can vulnerability databases be “federated”? That is, how can vulnerability databases that use different identifiers respond jointly to a single query (a toy sketch follows this list)? This is a fundamental question that needs to be answered before the Global Vulnerability Database can be designed.[i]
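To make question f concrete, here is a deliberately toy sketch of a federated fan-out: a thin layer takes one product, described with whatever identifiers are available, queries each member database in its native identifier scheme, and merges the answers. Everything here – the adapter functions, the identifiers, the returned IDs – is hypothetical:

    # Entirely hypothetical sketch of a federated vulnerability query.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        database: str     # which member database answered
        vuln_id: str      # CVE-..., GHSA-..., etc.
        identifier: str   # the CPE, purl, etc. that matched

    def query_cpe_database(cpe):
        # Placeholder: a real adapter would call that database's API here.
        return [Finding("NVD-like DB", "CVE-2025-12345", cpe)]

    def query_purl_database(purl):
        return [Finding("OSV-like DB", "GHSA-xxxx-xxxx-xxxx", purl)]

    def federated_query(product):
        findings = []
        if "cpe" in product:
            findings += query_cpe_database(product["cpe"])
        if "purl" in product:
            findings += query_purl_database(product["purl"])
        return findings

    print(federated_query({
        "cpe": "cpe:2.3:a:example:product_abc:2.5:*:*:*:*:*:*:*",
        "purl": "pkg:pypi/product-abc@2.5",
    }))

The hard part, of course, is not the fan-out but deciding when two differently identified records describe the same product and the same vulnerability.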

There are also important questions that need to be asked about intelligent devices.

g. How can devices be identified in vulnerability databases? There are many problems with CPE identifiers for devices, besides the well-documented problem that since last February, CPE names have not been produced by the NVD in anywhere near the volume they should be; this means that most devices identified as vulnerable in a CVE Record created after February 2024 will be invisible to an automated search today.

h. The SBOM Forum’s 2022 white paper on software identification in the NVD suggested (on pages 12 and 13) that two of the standards from the GS1 family, GTIN and GMN, could be utilized as device identifiers, since they are already widely used in international trade. There are other options as well, but there needs to be discussion of this question, especially since the US Federal Communications Commission (FCC) is now at least proposing to implement a “device cybersecurity labeling program” for IoT devices. It’s hard to discuss cybersecurity for IoT without being able to learn about software and firmware vulnerabilities that apply to a device.

i. How should vulnerabilities be reported for intelligent devices? Should they be reported using the identifiers for the individual software and firmware products installed in the device, or using the identifier for the device itself? The latter option makes it much easier for users to learn about vulnerabilities that affect devices they rely on, since the user doesn’t need to have an up-to-date software bill of materials (SBOM) for each device they operate. This is the option that some big device manufacturers like Cisco and Schneider Electric have chosen to follow. However, it seems the great majority of intelligent device makers, including medical device makers, don’t report vulnerabilities to the CVE Program at all, making their devices invisible to NVD users.[ii]

j. Speaking of intelligent devices, there’s a fundamental contradiction at the heart of patching them. In most cases, a device user is not able to apply a patch for a single vulnerability or subset of vulnerabilities; instead, they need to wait for the next full device update from the manufacturer.

k. Because of this, the device manufacturer might delay notification of a vulnerability if the next full update is not imminent, even though they have developed the patch for it. However, delaying the notification will leave users unaware that their device is affected by the vulnerability. This means they are unlikely to apply other mitigations, like removing the device from their network or isolating it on its own segment. This problem almost certainly doesn’t have a simple answer, but it at least needs to be brought into the open, so that users can be aware of it.

My point in this post is that, while the fundamentals of vulnerabilities and vulnerability databases need to be rethought in the long run, there’s also a need to consider intermediate-term questions like the ones described above. As many of these questions as possible need to be answered by March 2026, since it’s quite possible that the vulnerability management community will find itself in a real crisis then (not just a 24-hour one). It would be good to be able to implement some solid changes to the current CVE Program, even though they’re not the ones we would implement if we were given 1-3 more years to do so.

Would you like to participate in these discussions? The OWASP SBOM Forum sponsors a Vulnerability Database Working Group that meets every other Tuesday at 11AM Eastern Time (April 29 is the next meeting); this group discusses intermediate-term questions like these. And the SBOM Forum itself meets on Fridays at 1PM ET (May 2 is the next meeting). That group discusses lots of ideas, including long-term ones. Drop me an email if you would like an invitation.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] I am sure this question can be answered. AI will probably play a large role in that answer.

[ii] CISA maintains a database of vulnerabilities for industrial devices and medical devices. However, since the vulnerability identifiers are not CVEs, a user of the NVD or another CVE-based database will not usually learn of those vulnerabilities, unless the manufacturer has also submitted the vulnerability to the CVE Program.

Sunday, April 20, 2025

Preparing for the Global Vulnerability Database

The near-death experience of the CVE Program (aka “MITRE”) last week was a huge wakeup call for the international vulnerability management ecosystem, because what was at stake in CVE.org’s problems was far worse than what has been at stake in the National Vulnerability Database (NVD)’s problems over the past year. The NVD is the biggest user of data from the CVE Program, but the consequences of losing CVE itself would be far larger.

What is most interesting is that just about everyone seems to be drawing the same two conclusions from this episode:

First, the current US-centric CVE/NVD vulnerability management program needs to be replaced with a truly international program, in which no one country or government plays a predominant role; and

Second, there is no point in even talking about creating a single huge database that includes all vulnerabilities of all types and all software products of all types. While that idea has a lot of intellectual appeal, it requires there be a single vulnerability identifier that can encompass all vulnerabilities, as well as a single software identifier that can encompass all types of software. It will be a long time – if ever – before either of those two dreams is realized.

This is why the ideas that are now being discussed for a new vulnerability management program to replace CVE.org and the NVD all focus on the idea of a federation of existing databases, linked by an intelligent querying infrastructure. This would have been hard to put together even ten years ago, but today – especially given the prevalence of AI – it doesn’t sound hard at all.

The idea of federation is especially appealing when you consider the huge cost of trying to unify all the world’s existing vulnerability databases into one uber database and, even worse, the cost of updating all that information in real time. It’s much better to let the staff members of each individual database continue to update their database using the sources and methods they’ve built up over the years; the federated structure will do its best to make it possible to query across all those databases, and receive as unified a response as is possible in this less-than-ideal world in which we live.

So, where do we go from here – i.e., what path do we need to follow to reach the common goal of a global federated vulnerability database, as well as a global federated vulnerability identification program? I saw two proposals at the end of last week.

The first proposal was articulated by Steve Springett, Chair of the OWASP CycloneDX project and Vice Chair of the OWASP Global Board of Directors (he emphasized in a conversation on Friday that the proposal was from the OWASP Board, not just himself). In the document, Steve emphasizes that the threats facing the software community today are quite different from what they were more than two decades ago, when the CVE Program and the NVD were in their infancy.

Steve especially points to vulnerabilities found in open source software, which was also in its infancy two decades ago but now is found (as components) in at least 90% of all software produced today. Steve also points to other relatively new threats, including cryptographic weaknesses and cybersecurity issues with AI. Steve concludes by saying:

We are calling on governments, industry leaders, researchers, and community experts to contribute their voices, expertise, and resources. Together, we can build an alternative model that complements existing efforts, gradually replacing outdated approaches with a federated, community-driven, and international standard.

The future of cybersecurity identification depends on global collaboration. Let’s build it together.

If you want to be kept informed about this initiative, send an email to cve@owasp.org. I will certainly participate in this.

A second proposal is from Olle Johansson of Sweden, who is also an OWASP member and a member of the OWASP SBOM Forum (as is Steve). Olle is suggesting a somewhat different approach: first describe the organization that will be needed to build and manage “a global platform for vulnerability reporting”, including the roles of the different players in the vulnerability management ecosystem: national governments, commercial software suppliers, open source projects, and software end users. He points out that only after we have figured all of this out will we be able to start filling in the technical details of the project. Olle is inviting interested parties to edit and comment on his proposal, which is a Google Doc.

A third proposal is from…me. While I’ve been writing for more than a year about the need for what I call the Global Vulnerability Database (GVD) – my most recent post on this subject is here – I’ve always thought of this as a project that’s waiting for its time to come.

Well, it seems its time has come, so here’s my proposal: while I agree that both Steve’s and Olle’s proposals need to be pursued, I also think we need to start talking about technical issues. I don’t mean minute issues like whether EPSS scores should be included in CVE Records, but the more general issues that have seen a lot of talk but no resolution.

A perfect example of this is version ranges. Everyone agrees this is an important issue, but nobody agrees on what to do about it. It is a fact that a vulnerability almost always affects a range of versions of a product, not just a single version. The biggest problem with version ranges is that, as far as I know, there are no vulnerability management tools on the end user side that can ingest a version range in a CVE Record and then, for example, go to an asset management database and mark every version in the range as vulnerable.

Could this problem be solved if we made the version range, not an individual version, the default in a software identifier like CPE or purl? That might make it easier to write the code on the end user side.
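For what it’s worth, here is a sketch of the missing end-user capability. The range field names follow the cpeMatch layout in the NVD’s CVE JSON (versionStartIncluding and versionEndIncluding); everything else – the product, the asset inventory, the naive numeric comparison – is hypothetical, and that naive comparison is exactly the assumption that breaks down in the general case:

    # Sketch: take a version range from a vulnerability record and flag
    # matching assets. Version comparison here is naively numeric.
    cpe_match = {
        "criteria": "cpe:2.3:a:example:product_abc:*:*:*:*:*:*:*:*",
        "versionStartIncluding": "2.2",
        "versionEndIncluding": "3.4",
    }

    assets = {"host-01": "2.5", "host-02": "3.6", "host-03": "2.1"}

    def key(v):
        # Naive rule: dot-separated integer fields only.
        return tuple(int(p) for p in v.split("."))

    lo = key(cpe_match["versionStartIncluding"])
    hi = key(cpe_match["versionEndIncluding"])

    for host, version in assets.items():
        flag = "VULNERABLE" if lo <= key(version) <= hi else "ok"
        print(host, version, flag)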

In any case, this is just one example of issues I would like to see addressed now, even though there are larger issues like Steve’s and Olle’s that need to be addressed as well. Fortunately, I already have a venue where we can have these discussions: the weekly meetings of the OWASP Vulnerability Database Working Group, which is a part of the OWASP SBOM Forum.

The VDWG meets biweekly on Tuesdays at 11AM Eastern Time. If you would like to join our next meeting on April 29 (where we’ll start to discuss the version range question), please email me and I’ll send you the series invitation (if you would like to join the SBOM Forum’s meetings, they’re also biweekly, but at 1PM ET on Fridays. Let me know if you would like that invitation as well). 

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post. 

Wednesday, April 16, 2025

Stay of Execution

Today, CISA – which has been the exclusive funder of the MITRE contract to run the CVE Program – announced that it will renew the contract after all. Thus, it seems we can count on the CVE Program being in place for another year.

However, I don’t need to tell you this is no way to run a railroad. Given the NVD’s problems that started last February and seem to be only getting worse as time goes by - and now given the almost-loss of the CVE Program - it is clear that government-run programs no longer make sense, even though they may have been required in the early days of vulnerability management.

As I mentioned in my post yesterday, the OWASP SBOM Forum – a group I lead that has been discussing vulnerability database and identification issues since the NVD’s semi-collapse in February 2024 – will discuss the way forward on this issue at our regular biweekly meeting on Friday at 1PM ET. If you would like to join us, please drop me an email at the address below.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Tuesday, April 15, 2025

CVE circles the drain

When I wrote this post barely more than two weeks ago, it seemed like the sounds of a faraway battle that might eventually start spilling into your region. You need to pay attention to those sounds, but you’ll get a lot more warning before the battle starts to impact you directly. In other words, there’s no need to take your family and flee your home now. The army will protect you from the invaders; after all, look at that huge fort they’ve been building for years – it looks like it could withstand a two-year siege!

However, it seems I was wrong. Not only has the battle reached our region, but the fort was overwhelmed before it could even mount a defense – or before the defenders even knew they were in danger. In fact, it’s too late to even think about fleeing. We just have to stand silently as the victorious attackers parade through our streets and stare scornfully at their vanquished foes.

Perhaps I’m letting my metaphors carry me away a little, but this is without doubt a turning point in the vulnerability management timeline. After all, the first 300 or so CVEs were reported in 1999; last year, the total reached around 275,000. Moreover, the rate at which new CVEs are being identified is growing by leaps and bounds every year. As VulnCon showed two weeks ago, the cybersecurity community is increasingly coming to realize that software vulnerabilities are at the root of almost all the serious cybersecurity threats – e.g., ransomware – that we face. Vulnerabilities will never be eliminated, but they can certainly be managed.

Or so we hope.

Are we lost? After all, MITRE researchers came up with the idea for CVE in 1999 and MITRE has run the program to identify and document new CVEs since then – in fact, the CVE Program and the database it ran used to be called MITRE. Today, both the program and the database are called CVE.org. An independent board, consisting of public and private sector representatives, runs the CVE program. Funding for CVE.org now comes entirely from CISA (or at least it did).

It's hard to think that the CVE Program might stop dead in its tracks, yet when a contract is cancelled, that’s usually what happens. But don’t worry, we’ve been given plenty of notice. The contract expires tomorrow, April 16. We have almost 24 hours to continue to enjoy the fact that MITRE still breathes the same air we do!

But what comes on Thursday? I assume no more new CVE Records will be produced, although the existing CVE Records won’t go away. You’ll still be able to learn about many previously identified CVEs (although the serious problem I discussed in this post remains. In fact, the remedy I prescribed, implementing purl as an alternative identifier in the CVE Program, is even more important now).

Also keep in mind that there are other vulnerability identifiers besides CVE, such as GitHub Security Advisories (GHSA) and OSV; they shouldn’t be affected by this at all. On the other hand, the 275,000 vulnerabilities in CVE.org dwarf both of these databases, as well as the other open source security advisory databases that are mostly specific to particular ecosystems like Python. There’s no disguising the fact that the software vulnerability management universe is going to become very tightly constricted two days from today.

Fortunately, there has been ample warning that the current US government-centric system, including the National Vulnerability Database (NVD) and CVE.org, isn’t sustainable. After all, 14 months ago the NVD fell seriously behind in their self-assigned responsibility to produce CPE names and add them to CVE Records (which are, of course, produced by CVE.org. CVE is part of DHS, while the NVD is part of NIST, which is part of the Department of Commerce. I recommend you reread the beginning of this post, in which I described the two organizations). Not only has the NVD not made up the ground it lost, but it continues to lose more ground almost every day.

More than a year ago, I started talking about a Global Vulnerability Database; I have refined the idea, and I summarized it in this post 11 days ago. As you can see, the GVD won’t be a single database. Instead, it will be a federation of existing vulnerability databases (probably including the NVD and CVE.org).

I’m going to stop now; perhaps I’ll write one or two more posts on this topic this week. However, I’ve already made the decision that this Friday’s meeting of the OWASP SBOM Forum (held every other week at 1PM EDT) will be devoted entirely to this topic. In fact, we’ll probably keep doing that for a while – and we might form a separate project just to start discussing – and eventually implementing – the Global Vulnerability Database.

If you aren’t currently a member of the SBOM Forum and would like to join us this Friday and perhaps afterwards, please drop me an email.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Friday, April 11, 2025

Databases, all the way down

 

I have been attending VulnCon 2025 remotely this week, although not all the sessions. Even though the first conference was last year, VulnCon has clearly found its niche as the premier gathering place for people interested in or involved with vulnerability management. The conference is well designed and well executed.

The sessions I’ve been attending are those that have to do with software naming in what I call the “CVE ecosystem”, but which most people think of as the National Vulnerability Database (NVD). If you have been reading my recent posts, you know that:

1. Learning about a software vulnerability isn’t very helpful if you don’t know what products are affected by it; ideally, you want to be able to search on a product name in a vulnerability database and immediately be shown all the vulnerabilities that have recently been identified in that product. Moreover, since CVE is by far the most widely cited vulnerability type and there are now over 280,000 CVEs in the official list, affected products need to be referred to using a machine-readable software identifier. The only identifier currently supported by CVE.org (the organization funded by DHS that creates and manages CVE Records) is CPE, which stands for Common Platform Enumeration.

2. When a CVE Numbering Authority (CNA), working for CVE.org, produces a CVE Record to report a new software vulnerability, they do not usually include a CPE name(s) to refer to affected products listed in the text of the record. The reason for this is that the NVD[i] has always wanted to be in control of CPE creation. This didn’t previously cause a big problem, since until last year, the NVD almost always created a CPE for every affected product described in the text of a CVE Record; they did this within a few days of receiving the record from CVE.org.

3. However, starting on February 12, 2024, the NVD drastically slowed their production of CPE names, for a reason that has never been clearly explained. This has produced an ever-growing backlog of CVE Records without a CPE name. Despite several promises that they would fix the problem by a certain date, the backlog has continued to grow. Today, the backlog stands at well over 40,000 CVE Records (although a well-known vulnerability researcher estimated in the VulnCon chat that the backlog is now 52,000 records). Of course, this is far more than 50% of the total new CVEs identified since February 2024. The NVD no longer even talks about eliminating the backlog for good. My guess is they would be happy just to stop it from growing, but even that doesn’t seem likely now.

4. Why is it bad that so many CVE Records don’t contain CPE names? It’s bad because a CVE Record without a CPE name is invisible to an automated search of the NVD. If a user of Product ABC wants to learn what vulnerabilities (CVEs) are currently present in that product, they might enter “Product ABC” in the search bar of the NVD. The user should see every CPE name that contains that text string. The user can determine which of those CPEs matches the product they use; then they can search for CVEs that apply to that CPE.

5. However, if there are no CPE names that contain the text string, the user will receive the message, “There are 0 matching records.” The user will receive this message even if there is a CVE Record that states in its text that Product ABC is affected by the vulnerability, as long as that record doesn’t include Product ABC’s CPE name. The lack of the CPE name in the record means that searching on a CPE name will not inform the user that their product is affected by the vulnerability described in that record.

6. But there’s a worse problem than not learning about vulnerabilities that affect the product being searched for: the above message is the same one that the user will receive if the product in fact has no identified vulnerabilities. Human nature alone dictates that most users will interpret the message this way. That is, most people will believe the product they use has no vulnerabilities, when in fact it may have a lot of them. (The short query sketch after this list makes the problem concrete.)
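If you want to see this ambiguity programmatically, here is a sketch against the NVD’s public CVE API 2.0 (the endpoint and the virtualMatchString parameter are documented by NVD; the CPE prefix below is hypothetical, so this query really will return zero results):

    import json
    import urllib.parse
    import urllib.request

    # Ask for CVEs whose attached CPE names match a (hypothetical) product.
    match = "cpe:2.3:a:example_vendor:product_abc"
    url = ("https://services.nvd.nist.gov/rest/json/cves/2.0"
           "?virtualMatchString=" + urllib.parse.quote(match))

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # Zero here does NOT mean the product is vulnerability-free; it may just
    # mean no record that mentions the product ever got a matching CPE name.
    print(data.get("totalResults", 0), "matching records")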

In my opinion, everyone in the CVE ecosystem needs to assume that CPE will never be a reliable identifier, even though nobody is saying that CPE should go away. What’s Plan B? Plan B is purl, which has gone from literally nowhere eight years ago to being one of the two or three most widely used software identifiers in the world. However, purl cannot currently be used in CVE Records, so people in the CVE ecosystem cannot yet benefit from using it.

This is why I’m pleased to announce that purl will soon (let’s say in 6-9 months) be available in the CVE ecosystem. I’ve been advocating for purl for more than two years; interest in it has clearly been growing, but the day when it would become an officially accepted part of the CVE ecosystem has always seemed far away. Now, I can say with confidence that CNAs will be able to identify vulnerable products in CVE Records – and end users will be able to search for them – using purl within a year, and perhaps less than that.

Purl was discussed in at least four different sessions at VulnCon, but perhaps the most interesting was a two-hour workshop led by Chris Coffin of MITRE, leader of the CVE Quality Working Group, and Pete Allor, Senior Director of Product Security at Red Hat (both of them are members of the CVE.org Board, which runs the CNA Program within DHS). When the idea for the workshop first came up early in the year – it was primarily the brainchild of Christopher Robinson, aka “CRob”, of the Linux Foundation - the point of the workshop was to have a kind of “face-off” between purl and CPE.

At that time, the question was whether there was enough support for purl in the CVE community for the CVE Board to seriously consider moving forward with it as a second possible software identifier along with CPE. The point of the workshop was to get a “sense of the room” on this subject.

However, I was surprised (and others were, too) by the fact that in the past one or two months, the CVE Program has decided to at least start laying the groundwork for incorporating purl in the CVE Record Format. How did this change come about? While I have no specific knowledge of the reason, I attribute it in large part to the fact that in March it became clear that the NVD was not only not making progress on eliminating their backlog of CVE Records without CPE names, but they were in fact allowing it to grow at a much more rapid pace. Indeed, at the end of March, I was told that the backlog had grown from 55% of CVE Records issued since February 12, 2024 – its size at the end of 2024 – to over 70%.

In other words, searching the NVD for new vulnerabilities applicable to a software product has increasingly become an exercise in futility: You will most likely just get a message saying, “There are 0 matching records.” If you want a lift to your day, you can believe that means your product has zero vulnerabilities and you have nothing to worry about. Or if you want to be realistic, you can say this more likely means that any CVE Record that mentions the product you are searching for in its text does not include a CPE name for the product. If you want to verify this for yourself, you can always read the text of each of the 40,000 new CVE Records added to the NVD since February 12, 2024.

The CVE Program intends to change the CVE Record Format (the format used by CNAs to create CVE Records) to enable CNAs to use purl to identify a vulnerable software product, not just CPE. You might ask why that is such a big deal. After all, if the NVD is struggling to create CPE identifiers, why won’t they also struggle to create purl identifiers?

The answer is that purl identifiers don’t need to be “created”. Today, purl is mainly used to identify open source software distributed through package managers and similar repositories (this covers a huge percentage of open source products, especially the components found in SBOMs). A typical purl is “pkg:pypi/django@1.11.1”. The values of the fields in this purl are:

“pkg” – This is the purl “scheme”. It currently carries no distinguishing information – every purl starts with these three letters – although it may acquire a use in the future.

“pypi” – This is the purl “type”. The type designates the package manager; in this case, the package manager (or more correctly, the package index) is PyPI.

“django” – This is the product name in that package manager.

“1.11.1” – This is the version number (or “version string”) in that package manager.

If you are a CNA creating a new CVE Record that reports a vulnerability found in django v1.11.1 as it exists in PyPI, you can easily create the purl using the values of those four fields. If you’re not sure about one of them (e.g., the spelling of django), you can verify it by checking PyPI. Similarly, if you’re a user of django and want to learn about current vulnerabilities in that product/version, you can look at the product itself, or else verify the information in PyPI.
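Here is a minimal sketch of both operations, assuming the open source packageurl-python library (installed with “pip install packageurl-python”); the library and its calls are my choice of illustration, not anything mandated by the CVE Program.

# A minimal sketch using the open source packageurl-python library.
from packageurl import PackageURL

# A CNA builds the purl from the individual field values...
purl = PackageURL(type="pypi", name="django", version="1.11.1")
print(purl.to_string())   # pkg:pypi/django@1.11.1

# ...and an end user's tool parses an existing purl back into its fields.
parsed = PackageURL.from_string("pkg:pypi/django@1.11.1")
print(parsed.type, parsed.name, parsed.version)   # pypi django 1.11.1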

The most important feature of this process is that the purl for django 1.11.1 as found in PyPI is globally unique. Some open source products, like OpenSSL, exist in multiple package managers, so the name and version string may be identical across those instances; however, the package manager – and therefore the purl type – is different in each instance. That difference guarantees that every purl is globally unique.
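A quick illustration of that guarantee (both packages here are hypothetical): even when the name and version strings are identical in two ecosystems, the differing type yields two distinct purls.

# Hypothetical illustration: the same name and version string in two
# package ecosystems still yield two distinct purls, because the
# "type" field differs.
purl_npm = "pkg:npm/openssl@1.1.1"   # hypothetical npm package
purl_gem = "pkg:gem/openssl@1.1.1"   # hypothetical RubyGems package
print(purl_npm == purl_gem)          # False: different ecosystems, different purls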

By contrast, CPE names include at least two fields that are inherently ambiguous: product name and vendor name. Everyone knows that products are renamed regularly, due to M&A as well as various marketing and rebranding campaigns. But even the company name is hardly unambiguous. A consultant who worked at Microsoft once asked people there what company they worked for; she received over 20 different answers. This is compounded by the fact that software identifiers are based on a single spelling of a name: “Microsoft, Inc” is different from “Microsoft”, which is different from “Microsoft, Inc.” (with a period), and so on.
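To see why this matters for automated matching, here is a deliberately simple illustration (all three CPE names are invented for the example): a lookup that compares exact vendor strings matches only one of the three spellings of the “same” vendor.

# Hypothetical illustration of how exact-string vendor fields break matching.
# All of these CPE names are invented for the example.
known_cpe = "cpe:2.3:a:microsoft:some_product:1.0:*:*:*:*:*:*:*"

candidates = [
    "cpe:2.3:a:microsoft:some_product:1.0:*:*:*:*:*:*:*",
    "cpe:2.3:a:microsoft_inc:some_product:1.0:*:*:*:*:*:*:*",
    "cpe:2.3:a:microsoft_corporation:some_product:1.0:*:*:*:*:*:*:*",
]

for cpe in candidates:
    # Exact comparison: only the identical spelling matches.
    print(cpe == known_cpe, cpe)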

The NVD mostly leaves it up to a staff member – usually a contractor – to decide what values to include in the product name and vendor name fields of a CPE name they are creating. It is likely that the only direction the contractor receives is to adhere as closely as possible to existing values in the “CPE Dictionary” (which isn’t a dictionary at all, but simply a list of every CPE name ever created). Of course, the product and vendor names vary greatly in the “dictionary”, even when they probably refer to the “same” product or vendor. So, the CPE Dictionary is a very weak reed to lean on.

In discussions of this problem (which is the infamous software “naming problem”, in case you didn’t recognize it), someone always asks, “Why don’t we just build a database of all software products and/or all software vendors? That database can have a canonical name for each product or vendor; every staff member creating a new CPE name will need to adhere as closely as possible to similar names located near it in the database.”

That idea sounds attractive until you start thinking about it. Then you quickly realize:

1. Creating, and even more so maintaining, a database like that would be fantastically expensive – many times the cost of maintaining the NVD itself. Remember, the database would include not just big and medium-sized software companies, but one-person shops that ship a single product; all of them would have to be tracked continually for name changes, acquisitions, etc.[ii]

2. As my friend the consultant found out, there is no agreement on either product or vendor naming even among employees of a large software company. Who would oversee decisions about canonical names? Since there is surely no employee at Microsoft who even knows every product the company makes (let alone can track all the changes in product names), it’s not likely that one person, or even one department, could make those decisions. They would have to be delegated. How would that be done, and what criteria would be provided to the people making the decisions? Just developing training for those people – which would of course have to be repeated constantly – would be a monumental task.

3. I will point out one area of agreement I’ve found in these discussions: the person who advocates for an approach like this usually ends up saying that their own department should oversee software naming, since it is the only one with the right perspective to make these decisions. That is expected behavior, since there’s probably no objective way to decide who should oversee software naming.

To summarize: trying to definitively fix CPE name creation will usually lead to requiring at least two separate databases, one for software names and one for vendor names. I don’t know of any other way to enforce a policy like, ‘Any software developer whose name begins with the word “Microsoft” will be called “Microsoft Corporation” (and not “Microsoft Corp.”, “Microsoft, Inc.”, etc.).’

Moreover, it’s likely those two databases would themselves require other databases. After all, if a company like Microsoft is going to designate certain people to oversee naming for certain types of software, there will need to be a database that lists each of those people, as well as the types of products over which they have authority. And that database might itself require another database, and so on.

How does purl handle the naming problem? It follows a simple rule: the name of an open source product in a package manager is controlled by the operator of the package manager, and whatever name the operator decides on is the correct one for that package manager (although another package manager may give the “same” product a different name). The operator can be counted on to maintain a “controlled namespace”, in which no product name/version string combination duplicates the name/version of another product in the same package manager.

That way, the name of a product distributed through PyPI or Maven Central will always be the same for anyone who looks at the package manager (or even reads the “About…” section on the main page of a software product they use); no centralized database lookup is required. Two different people (say, the CNA that creates a CVE Record that includes a purl for Product ABC version 1.2, and the user who wants to search for vulnerabilities in that product/version) should always, barring a mistake, create the same purl.
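As a closing sketch (again assuming the packageurl-python library, with a hypothetical PyPI product): the CNA and the end user each construct the purl independently from the same package manager listing, and the two identifiers agree without any central lookup.

from packageurl import PackageURL

# The CNA builds the purl from the package manager's listing...
cna_purl = PackageURL(type="pypi", name="abc", version="1.2").to_string()

# ...and the end user, reading the same listing, builds it independently.
user_purl = PackageURL(type="pypi", name="abc", version="1.2").to_string()

# No centralized database was consulted; the identifiers still agree.
print(cna_purl == user_purl)   # True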

Problem solved.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] The National Vulnerability Database is part of NIST, which is itself part of the Department of Commerce. The CVE Program, which is run today by the CVE.org organization and is still staffed largely by contractors from the MITRE Corporation, is funded by the Department of Homeland Security (DHS).

[ii] Steve Springett is advocating an idea called the Common Lifecycle Enumeration (CLE), which can be thought of as an online ledger of changes in the names and versions of a software product.