Monday, January 27, 2025

How has the NVD’s 2025 start been? So far, abysmal.


As any reader of this blog knows, the National Vulnerability Database (NVD) fell on its face last year. Starting on February 12, the NVD drastically reduced the number of machine-readable “CPE names” it added to CVE Records (which originate with the CVE Program at CVE.org, sponsored by DHS). It ended the year having added a CPE name to fewer than 45% of the approximately 39,000 new CVE Records created in 2024. In previous years, the NVD added a CPE to virtually every new CVE Record, usually within a day or two of receiving it.

Why is this important? Because just learning that a newly discovered vulnerability has been found in one or more of the millions of software products used today doesn’t do you a lot of good if you don’t know which product it affects. When the CVE Record is created by a CNA, it includes a textual description of the product.

While that has some value, it won’t be picked up by an automated database search, which won’t display any CVE Record that doesn’t include a CPE name for the product being searched for. In other words, to learn for certain which (if any) vulnerabilities identified in 2024 or so far in 2025 are found in a software product you use, you would need to “manually” read the text in all 42,000-odd CVE Records that were added in 2024 or 2025. This is why vulnerability management requires automation. Since last February, truly automated vulnerability management has not been possible, at least not in the NVD.

But 2024 is (thankfully) behind us; how is the NVD doing in 2025? During the OWASP SBOM Forum’s meeting last Friday, Bruce Lowenthal, Senior Director of Product Security at Oracle, updated the group on this question by putting this information in the chat:

1.      “Only 27% of CVEs published in December 2024 have CPE assignments (in the NVD).”

2.      “Only 8% of CVEs published in January have CPE assignments in NVD.”

The first item isn’t terribly surprising, since we already knew the NVD almost completely stopped assigning CPE names for a couple of weeks in December. However, the second item was quite disappointing. There was a lot of hope (although not from me) that the NVD would not only soon start assigning a CPE to every CVE Record again, but also start working down the backlog. Instead, after adding over 2,000 CVE Records without a CPE to the backlog in December, the NVD surpassed that record by adding over 2,700 this month. I suppose that’s progress, but it’s certainly not in the right direction.

In summary, in order to dig itself out of the hole it’s in, the NVD needs to

1.     Stop adding to the CPE backlog, by assigning a CPE name to each of the over 3,000 monthly new CVE Records that are likely to be created this year, and

2.      Start reducing the backlog by assigning additional CPEs at a rate that will reduce the backlog to zero by the end of 2025. Since I believe the backlog is around 23,000 as of today and since there are 11 months left in the year, this means the NVD will need to assign an additional 2,000 CPEs every month this year, for a total of 5,000 every month.[i] If the NVD does this, the backlog will be zero – and no longer growing – at the end of 2025. Of course, at the moment I’d assign the statement that this could actually happen to the realm of fantasy.
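The arithmetic behind that monthly target can be sketched as follows. The backlog size and monthly CVE volume are the estimates from the text above, not official NVD figures:

```python
# Back-of-the-envelope math for clearing the NVD's CPE backlog by the
# end of 2025. All figures are the estimates from the text, not
# official NVD numbers.

backlog = 23_000            # estimated CVE Records without a CPE, as of late January
months_left = 11            # February through December 2025
new_cves_per_month = 3_000  # estimated new CVE Records per month in 2025

# Rate needed just to work the existing backlog down to zero by year-end
backlog_rate = backlog / months_left            # roughly 2,100 per month

# Total CPE assignments needed per month: keep up with new records
# AND chip away at the backlog
total_rate = new_cves_per_month + backlog_rate  # roughly 5,100 per month

print(f"Backlog reduction needed: {backlog_rate:,.0f} CPEs/month")
print(f"Total assignments needed: {total_rate:,.0f} CPEs/month")
```

For comparison, the NVD assigned CPEs to only about 8% of January's records, so this rate is more than an order of magnitude above its current pace.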

While the CPE identifier has always had a lot of problems, the problem I just described – that more than half of CVE Records since last February contain no CPE name at all – is far worse than the others. A CVE Record that contains no CPE name is completely invisible to an automated search. This raises the question of whether it’s worthwhile to perform any vulnerability search of the NVD today, since doing so may leave the user with a false sense of security.

Is there a solution to this problem? Yes, there is. What if I told you there is a software identifier that went from literally nowhere less than ten years ago to being, today, by far the dominant identifier in the open source world? A software identifier that’s used over 20 million times every day, by just one tool, to look up open source vulnerabilities in the OSS Index open source vulnerability database?

Moreover, this identifier – called purl – doesn’t need to be “created” by any third party. As long as a purl is included in a CVE Record, a user should always be able to construct a purl that will match the one in the record, using a few pieces of information they should already have at hand or can readily look up (for open source software products, these are the package manager name, the name of the product in the package manager, and the version number in the package manager).
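This works because the purl specification defines a simple, deterministic string format, `pkg:type[/namespace]/name@version`, that anyone can assemble from those three pieces of information. Here is a minimal sketch of that assembly (simplified: the full spec also covers qualifiers, subpaths, percent-encoding, and per-ecosystem name normalization rules; the package names below are just illustrations):

```python
def make_purl(pkg_type, name, version, namespace=None):
    """Build a basic purl string: pkg:type[/namespace]/name@version.

    Simplified sketch of the purl spec for illustration only; it omits
    qualifiers, subpaths, percent-encoding, and the per-ecosystem
    normalization rules the real spec defines.
    """
    parts = [pkg_type.lower()]
    if namespace:
        parts.append(namespace)
    parts.append(name)
    return f"pkg:{'/'.join(parts)}@{version}"

# Anyone with the same package manager, product name, and version
# arrives at the same string -- no central naming authority required:
print(make_purl("npm", "lodash", "4.17.21"))
# pkg:npm/lodash@4.17.21

print(make_purl("maven", "log4j-core", "2.17.1",
                namespace="org.apache.logging.log4j"))
# pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1
```

This determinism is exactly what CPE lacks: a CPE name only exists once someone at the NVD creates it, while a purl can be derived independently by the CNA writing the record and the user searching for it.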

In other words, purl can be at least a co-equal identifier to CPE in the NVD and other vulnerability databases that are based on CVE. However, there are two important problems that need to be addressed before this can happen:

1.      CVE Records need to include purls, and the CVE Numbering Authorities (CNAs) need to see the value of including them in the records.

2.      Purl today doesn’t identify commercial software; it just identifies open source software. Fortunately, there is a workable proposal for fixing that problem.

I’m pleased to report that work on fixing the first problem will start soon, while work on the second problem will almost certainly start within six months. If you would like to get involved in one or both of these efforts, please let me know. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also email me if you would like to participate in the OWASP SBOM Forum or donate to it (through a directed donation to OWASP, a 501(c)(3) nonprofit organization).

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] Even this estimate is too low, since it doesn’t allow for any growth in the number of new CVE records in 2025 vs. 2024. That number increased by 11,000 in 2024 vs. 2023. It might not increase by that amount this year, but it would be naïve to believe it won’t increase at all this year. 

Wednesday, January 15, 2025

Should you be worried or happy when you find no vulnerabilities in your software?


One of the unfortunate byproducts of vulnerability management compliance requirements is that they tend to reward negative findings. That is, if you search a vulnerability database for products that you use and find that some product searches don’t yield any vulnerabilities at all, you might take this as evidence that the products are very secure and that you must therefore be compliant.

However, often that’s far from the truth. The negative finding can mean:

1.      The person who created the identifier for the product in the vulnerability database made a mistake or didn’t follow the specification, so when you searched using the correct identifier, the search failed.

2.      The product you’re searching for has been acquired by a different supplier, who enters new vulnerabilities under their name, not that of the previous supplier (from which you acquired the product). That search fails, too.

3.      The supplier name was reported as, for example, “Microsoft, Inc.”, but it is recorded in the identifier as “Microsoft Inc.” Again, the search fails.

4.      A product has been entered in the vulnerability database under multiple names, e.g. “OpenSSL” vs. “OpenSSL_Framework”. However, a vulnerability is seldom reported under more than one name, since the CVE Numbering Authority (CNA) that reported it was probably unaware that multiple names existed. Thus, purely by chance, one name might have no vulnerabilities reported against it while the other has many, even though they are in fact the same product. If you happen to search on the name with no reported vulnerabilities, your search will again fail.

5.      Even though the identifier in the database is correct, you fat-fingered the identifier in the search bar, so the search failed.

6.      The supplier of the software product has never reported a vulnerability for it. Instead of the negative finding being good news, it means the supplier cares very little for the security of their products or their customers. The great majority of software vulnerabilities are reported by the developer of the software or the manufacturer of the device. However, it’s certain that reported CVEs (about 260,000 have been reported since 1999) are a tiny fraction of all software vulnerabilities. Lack of vulnerability reporting is the biggest obstacle in the way of securing the software supply chain.

In the National Vulnerability Database (NVD), the above problems are compounded by the error message the user receives almost every time a search is unsuccessful: “There are 0 matching records”. Since this message also appears when a product truly doesn’t have any vulnerabilities reported against it, in all the above scenarios you might reasonably assume the product is vulnerability-free. In fact, human nature dictates that most of us will make that assumption. Of course, in most cases it’s probably not true.
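The first four failure modes above all stem from the same mechanical cause: database searches match identifier strings exactly, so any trivial difference between the recorded identifier and the searched one produces the same result as a genuinely clean product. A toy illustration of scenario 3 (the supplier strings and CVE ID here are invented for the example):

```python
# Toy illustration: an exact-string search silently returns nothing
# when the identifier in the database differs trivially from the one
# the user searches with. The records below are invented examples.

records = {
    "microsoft_inc": ["CVE-2024-0001"],  # how the database recorded the supplier
}

def search(supplier):
    # Exact match, as a naive identifier lookup would do
    return records.get(supplier, [])

# The user searches with the name as they know it ("Microsoft, Inc."),
# which normalizes differently -- and gets exactly the same answer as
# for a product with no known vulnerabilities at all:
print(search("microsoft,_inc"))  # [] -- looks vulnerability-free
print(search("microsoft_inc"))   # ['CVE-2024-0001']
```

The user has no way to distinguish the empty result in the first search from a true "no vulnerabilities" finding, which is precisely the false sense of security described above.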

What is the solution to these problems? I don’t know of any solution to all of them, but I can point out that the first four problems will not appear if the identifier used in the vulnerability database is purl (the product identifier used in almost all open source software databases) as opposed to CPE (the identifier used by the NVD and databases that are based on the NVD).

This is why the CVE Program (run by CVE.org) is now looking at including purl as another identifier in what I call the “CVE ecosystem”. This includes all vulnerability databases that utilize CVE as the vulnerability identifier (there are many other vulnerability identifiers, mostly for open source software, but CVE is by far the dominant vulnerability identifier worldwide).

Of course, the most widely used vulnerability database in the CVE ecosystem is the NVD. When the CVE Program adopts purl as an alternative identifier, will purl suddenly be usable for searches in the NVD? No. To support purl, a vulnerability database that currently supports only CPE will need to make several changes. Given the problems the NVD has experienced over the past 11 months, it isn’t likely they will be able to devote resources to making those changes anytime soon.

However, other databases will be able to display CVEs that apply to a software product when the user enters a purl[i]. This means that at least some of the CVE Records published by CVE.org will be accessible to users and developers of open source software products.

It will be at least a couple of years before purl is fully supported in the CVE ecosystem. That might seem like a long time, but six months ago I would have told you it was unlikely purl would be supported by CVE in this decade. Things are starting to move in the right direction.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] There are two large vulnerability databases that display CVEs that apply to an open source software product/version when the user enters the purl for that product/version. However, these databases don’t include the CVE Records published by the CVE Program, since those records currently don’t include purls. When that changes, those two databases will be able to accept new CVE Records, thus making those records searchable using purl.

Sunday, January 12, 2025

Here’s how we can chop a couple of years off the wait for the “Cloud CIP” standards

 

I have posted recently on the need to rewrite two NERC CIP requirements: CIP-007 Requirement R2 (patch management) and CIP-010 Requirement R1 (configuration management). The primary reason that both requirements need to be rewritten is that they are by far the most prescriptive CIP requirements. In fact, since CIP version 5 (when both these requirements were substantially revised) came into effect in 2016, I have heard that complying with just these two requirements accounts for a substantial percentage of all NERC compliance costs, not just NERC CIP compliance costs.

However, the second reason why these two requirements need to be rewritten is that they are currently the two biggest barriers to use of the cloud by NERC entities with medium or high impact BES environments. The main reason for this is that the two requirements apply on the level of individual BES Cyber Assets, even though they’re written to apply to BES Cyber Systems (BCS). This means that a cloud service provider would have to produce documentation for the NERC entity that showed the CSP had taken every required step in CIP-007 R2 and CIP-010 R1 for every device on which any part of the BCS resided during the audit period.

One of the main reasons why use of the cloud is so inexpensive is that systems (i.e., the software and data in systems) can be moved from server to server and datacenter to datacenter whenever it’s advantageous to do so. It would be hugely expensive if a CSP were required to provide that information, and it’s doubtful that any CSP would even entertain the idea of doing that. None of the other CIP requirements require providing documentation at anywhere near that level of detail.

Fortunately, both the prescriptiveness problem and the cloud documentation problem can be cured with the same medicine: rewriting CIP-007 R2 and CIP-010 R1 to make them “objectives-based” (that is NERC’s term; I prefer “risk-based”, but they mean effectively the same thing). When will that happen?

Last summer, a new NERC Standards Drafting Team started working on what will undoubtedly be a huge multi-year project to revise (and/or add to) the existing NERC CIP standards to make them “cloud-friendly”. They haven’t worked out their agenda yet, but I recently estimated that the new and/or revised standards will be fully approved and enforced around 2031. This is based on the experience with CIP version 5, which took almost that long and which in some ways was easier to draft than “cloud CIP” will be.

However, one thing is certain about the SDT’s agenda: it will include rewriting CIP-007 R2 and CIP-010 R1. Given how controversial both requirements are, and the fact that CIP-007 R2 needs to be rewritten as a vulnerability management, not a patch management, requirement, I think just rewriting and balloting those two requirements will take 1 ½ to 2 years. While this work will undoubtedly require some coordination with the “Risk Management for Third-Party Cloud Services” drafting team, this is something that NERC drafting teams do all the time.

So here’s my idea: Why not create a new Standards Authorization Request (SAR) that just requires rewriting the two requirements? This would take CIP-007 R2 and CIP-010 R1 completely off the cloud SDT’s plate, meaning they might be able to finish their work in five years, not seven. And it would allow the two revised requirements to be drafted by a fresh team that’s excited about being able to fix the two biggest “problem children” among the NERC CIP requirements, rather than a team that’s midway through a 7-year slog and wondering if perhaps long-distance truck driving would have been a better career choice.

While I would technically be allowed to draft that SAR, I don’t have the time to do it – and more importantly, a SAR has a much better chance of approval if it’s prepared by one or two NERC entities (with perhaps a vendor also participating). However, if a NERC entity wants to take the lead on this, I’d be pleased to help draft it.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available! For context, see this post. 

Wednesday, January 1, 2025

Why do we need to replace NERC CIP-010 R1?

 

I have written multiple posts recently (including this one and this one) on why CIP-007 R2, the NERC CIP patch management requirement, needs to be rewritten as a risk-based (or “objectives-based”) vulnerability management requirement. My two main reasons for saying this are:

1.      The inordinately high cost of documenting compliance with CIP-007 R2 causes NERC entities with high and medium impact BES environments to invest a lot of resources in activities that don’t yield much of an increase in security. The same can be said for CIP-010 R1.

2.      CIP-007 R2 and CIP-010 R1 are very likely the two largest obstacles to implementing medium and high impact BES Cyber Systems (BCS), Electronic Access Control or Monitoring Systems (EACMS) and Physical Access Control Systems (PACS) in the cloud today. This is because a cloud service provider may be required to provide compliance evidence for these two requirements on an individual device (both physical and virtual device) level; that will be quite difficult for the CSP to do.[i]

I’ve stated many times since CIP version 5 came into effect in 2016 that CIP-010 R1 and CIP-007 R2 are the two biggest examples of the high cost of complying with prescriptive CIP requirements. One NERC entity with high impact Control Centers told me that, of all the documentation they must generate for compliance with all NERC requirements (not just NERC CIP requirements), half of that is due to CIP-010 R1 and CIP-007 R2. While it’s indisputable that security measures need to be documented, a number of NERC entities have also told me they think half of the CIP compliance documentation they produce doesn’t improve security at all.

Cybersecurity is a risk management exercise. This means the objective of any cybersecurity requirement should be to reduce risk. My main problem with CIP-007 R2 is that its objective, patch management, doesn’t address a risk at all. Instead, it’s a mitigation for an important cyber risk that isn’t directly addressed in CIP today: the risk posed by unpatched software vulnerabilities in BCS, EACMS, Protected Cyber Assets (PCA) and PACS. Since there are other mitigations for this risk besides patch management, and because in many cases the risk posed by an unpatched vulnerability is so low that it’s a waste of time for a user organization to apply the patch for it, CIP-007 R2 needs to be rewritten as a vulnerability management requirement.

How should CIP-010 R1, the CIP configuration management requirement, be rewritten as a risk-based requirement? As with patch management, configuration management is a mitigation for a cybersecurity risk, not a risk in itself. What is the risk that configuration management mitigates?

Think about what happens if you misconfigure a system – for example, you disable authentication while working on the system and forget to re-enable it. This leaves the system vulnerable to exploit in a similar way to how a software vulnerability leaves the system vulnerable to exploit. In other words, the purpose of configuration management is to prevent vulnerabilities that are due to accidental misconfiguration, just as the purpose of patch management is to prevent vulnerabilities that are due to software flaws. Does this mean that CIP-007 R2 and CIP-010 R1 address the same problem: risk due to unmitigated vulnerabilities?

I don’t think it does, and here’s why: There is usually little or nothing that most organizations can do to prevent vulnerabilities from being present in software they didn’t develop. However, there is a lot that an organization can do to prevent configuration vulnerabilities. In fact, that’s the purpose of CIP-010 R1: to prevent a configuration error from leaving the system in a vulnerable state.

Given that’s the case, you might wonder why I’m making such a big point about CIP-010 R1. Since its purpose is to prevent misconfigurations from occurring in the first place, why does the requirement need to be rewritten? Obviously, if there are no misconfigurations, there’s no risk from misconfiguration, right?

Unfortunately, no. No true risk can ever be completely “prevented”. I can prevent the risk of food poisoning by not eating, but I don’t know many people who would think that’s a good idea. Similarly, the way to prevent the risk of computer misconfiguration is not to use computers at all, but that’s neither possible nor desirable for most of us.

Any risk management exercise needs to begin by acknowledging there is no way to reduce the risk to zero. Even if a NERC entity follows every letter of CIP-010 R1, it’s always possible that someone on the staff will have a bad day and forget to do something really basic, leaving the NERC entity exposed to both cybersecurity and compliance risk.

Of course, you could substantially reduce the risk of a bad day making someone forgetful by hiring two staff members for every position and making sure that each person is always watching the other. If that doesn’t work, you could hire three staff members for each position – although even that isn’t guaranteed to work…

You probably get the idea: If you’re willing and able to throw more resources into mitigating a risk, you can probably reduce it to as close to zero as you want, but at an increasingly higher cost per amount of risk mitigated. At a certain point, your boss will tell you the company has better uses for its resources than trying to reduce this particular risk to .00000001%. In fact, your boss might accompany that message with a pink slip.

This is why it’s so important that all the NERC CIP requirements be made risk- (or objectives-) based: If an overzealous auditor wants to require a NERC entity to ignore all other options for use of its time and money and focus the whole organization on mitigating one particular risk (which might be the risk of computer misconfiguration), the entity can argue that the auditor is being unreasonable.

However, there are a small number of prescriptive CIP requirements that mandate certain actions come hell or high water. For example, CIP-007 R2 requires you to apply, or develop a mitigation plan for, every applicable security patch released within the last 70 days for a software product used in the ESP. Even if the vulnerability addressed by the patch has never been exploited in the wild since it was discovered ten years ago – meaning it poses very little risk to the organization – it is still in scope for this requirement.

Fortunately, besides CIP-007 R2 and CIP-010 R1, I think all other CIP requirements are risk-based, even when they don’t mention risk. For example, the many CIP requirements that point to a plan or process as their objective are risk-based, since developing and following a plan always requires identifying and managing risks.

I’m proposing that CIP-007 R2 and CIP-010 R1 both be rewritten as risk-based requirements. You might ask why I want to change them now, since the Risk Management for Third-Party Cloud Services standards drafting team may well decide to change them on their own. I agree the SDT will feel compelled to revise these two requirements themselves, but I also believe it will be 2031 (or even later) before all of the new and/or revised CIP standards they develop come into effect.

CIP-007 R2 and CIP-010 R1 have been a burden on NERC entities for years, even without any consideration of the cloud. Removing them from the Cloud Drafting Team’s agenda and creating a separate drafting team for them could easily cut 2-2 ½ years off the Cloud SDT’s work, as well as make it possible for NERC entities to comply with the two new risk-based requirements at least 2-3 years before they would otherwise be able to. Even NERC entities that don’t plan to put BES Cyber Systems in the cloud anytime soon will benefit from this.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available! For context, see this post.


[i] I used to say it’s impossible for CSPs to provide this evidence. I now say it’s difficult, but not necessarily impossible.