Yesterday, I pointed out that the National Vulnerability Database (NVD) has not only failed this year at one of its most important jobs – adding CPE names to all new CVE records, a process called “enrichment” – but put an exclamation point on that failure during the last two weeks of the year (ending yesterday, 12/30), when they added only 15 of the 1,181 CPEs they should have added during that period. This means that, during those two weeks, they were operating at a rate of about 1%, vs. 45% in the previous 50 weeks (even 45% is abysmal, but it’s 45 times better than 1%, if my math is correct).
Of course, when there’s a shortfall like that, you have to wonder
whether the NVD (which is part of NIST, which is itself part of the Department
of Commerce) has simply given up on creating CPEs. As I explained in the post, a CVE record without a CPE name – i.e., an announcement of a newly identified vulnerability that contains only a textual description of the software product in which it was found – is invisible to automated searches on that product’s name. And since
there have been almost 40,000 new CVEs reported in 2024, manually searching
through the text of each of those records is probably not going to be a lot of
fun.
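To make that concrete, here is a minimal sketch (in Python) of how an automated tool typically looks up vulnerabilities by CPE name, using the public NVD CVE API 2.0. The endpoint and the cpeName parameter are part of the published API; the CPE string for the product is purely illustrative. The point is that a CVE record that never received a CPE name will simply never come back from a query like this, no matter how plainly its text description names the product.

```python
import requests

# NVD CVE API 2.0 endpoint (public; an API key is only needed for higher request rates)
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Illustrative CPE name for a product your organization uses; in practice this
# would come from your asset inventory or SBOM tooling.
cpe_name = "cpe:2.3:a:examplevendor:exampleproduct:4.2.1:*:*:*:*:*:*:*"

resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
resp.raise_for_status()
data = resp.json()

# Only CVE records that were enriched with this CPE name will ever appear here.
for item in data.get("vulnerabilities", []):
    cve = item["cve"]
    print(cve["id"], "-", cve["descriptions"][0]["value"][:80])
```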
I honestly don’t think the NVD is completely giving up on the idea of creating more CPEs, simply because it would be quite hard for anyone there to justify holding onto their job if they really did that. On the other hand, I find it hard to believe the NVD will ever make up the ground they’ve lost this year, since they would not only have to go back to creating CPEs at their old pace, but would have to at least triple that rate, for three reasons:
1. The volume of new CVEs this year is close to double what it was last year. If that rate of growth continues next year, the NVD will need to more than double their effort just to keep up.
2. Since they added fewer than half the CPEs they were supposed to add this year, they need to multiply that doubled rate by a factor of at least two in order to catch up to where they should be. This means they really have to triple or quadruple their effort from last year. Do you think they can do that?
3. In 2023, there were about 28,000 new CVEs, and I believe the NVD assigned at least one CPE to each one of them. Yet this year, they added CPEs to fewer than 18,000 CVEs – 10,000 fewer than last year. If anything, the trend line of CPE assignments is sharply downward, not upward. And given the current growth rate of CVEs, there could quite easily be over 50,000 new CVEs next year. Even if the NVD were able to get back to the 2023 rate of 28,000 enriched CVEs per year, that would still leave more than 22,000 unenriched CVEs from next year alone – which happens to be about the size of their current backlog. However you look at it, this certainly isn’t progress.
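To make the arithmetic behind these three points explicit, here is a rough back-of-the-envelope sketch in Python. The 2024 figures come from the numbers above; the 50,000 new CVEs in 2025 and the assumed return to the 2023 enrichment pace are simply the illustrative assumptions from point 3, not forecasts.

```python
# Rough backlog arithmetic, using the figures cited above.
new_cves_2024 = 40_000    # approximate new CVEs in 2024
enriched_2024 = 18_000    # CVEs that received at least one CPE in 2024
backlog_now = new_cves_2024 - enriched_2024
print(f"Backlog entering 2025: ~{backlog_now:,}")               # ~22,000

# Assumptions from point 3: 50,000 new CVEs in 2025 and a return to the
# 2023 enrichment pace of about 28,000 CVEs per year.
new_cves_2025 = 50_000
enriched_2025 = 28_000
added_backlog_2025 = new_cves_2025 - enriched_2025
print(f"Backlog added in 2025 alone: ~{added_backlog_2025:,}")  # ~22,000

# Total backlog a year from now, if none of the existing backlog is worked off.
print(f"Total backlog at end of 2025: ~{backlog_now + added_backlog_2025:,}")  # ~44,000
```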
Meanwhile, there’s been another development that only adds
to this problem. Someone pointed out to me today that CISA put up this announcement
on GitHub a month ago: “…as of December 10, 2024, CISA will no longer be adding
CPE strings to the enriched dataset.” You may know that within a couple of
months after the NVD started having its problems in February, CISA gallantly
announced that it was stepping up to help and would be adding CPE names and a couple of other items (CVSS scores and CWEs, I believe) to CVE records – this was
called the “Vulnrichment” program. While the program didn’t produce a huge
number of CPE names, the fact that CISA was doing this was a morale booster for
people who still had some faith in the NVD (a group that no longer included me
by that time).
However, it seems CISA has changed its mind about producing CPEs
(although not the other items). Of course, they’re not providing any explanation
for why they changed their mind. But I can certainly guess that, given
the many broken promises from the NVD this year, CISA is getting a little fed
up with them. So, just as CISA’s announcement of the Vulnrichment program last
spring was a morale booster, their repudiation of that announcement in December
has to be taken as a vote of no confidence in both the NVD and CPE itself. I’ll note that just a few days earlier, CVE.org employees had pointed
out, in their annual CNA Workshop (which I wrote a
post about), that they’re trying to learn more about purl. Indeed!
What’s the solution to this problem? Remember, the NVD was
the leading vulnerability database in the world less than a year ago. What is it
now? You might be inclined to say it’s just a fraction of its former self, but I
want to point out that even that is a wild overestimation. Just because the NVD
has managed to add CPEs to about 45% of its new CVEs this year, that doesn’t mean it’s providing 45% of the value it provided before; in fact, it’s providing much less.
Consider why you search the NVD, or any other vulnerability
database, for vulnerabilities that apply to a software product your organization
uses. Are you expecting to learn about only 45% of the vulnerabilities that affect
that product? I doubt it. Don’t you really want to learn about every vulnerability
that affects the product? Wouldn’t it be much better if someone told you that
the 55% of vulnerabilities that you won’t find by searching the NVD will be
easy to find if you also search XYZ database? However, nobody can do that,
since there are no machine-readable identifiers to tell anyone what’s in the
approximately 22,000 CVE records that don’t have CPE names (i.e., the backlog).
There is one thing you can do. If you’re concerned mostly about
open source products, you shouldn’t be looking for those in the NVD; in fact,
you should never have been doing that in the first place. The OSV and OSS
Index open source databases both have a much larger collection of open
source vulnerabilities than does the NVD. Moreover, searches in those two
vulnerability databases are much more reliable than they are in the NVD.
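As an illustration, here is a minimal sketch of querying OSV for one package version, using OSV’s public query API; the endpoint and request shape follow the documented OSV v1 API, and the package and version shown are examples only. Note that the query is built entirely from the purl and version string – nothing has to be manually created and attached to the record first.

```python
import requests

# OSV.dev query endpoint (public). We look up known vulnerabilities for a
# specific package version, identified by its purl.
OSV_QUERY = "https://api.osv.dev/v1/query"

# Example: the lodash npm package at an old version (purely illustrative).
query = {
    "package": {"purl": "pkg:npm/lodash"},
    "version": "4.17.15",
}

resp = requests.post(OSV_QUERY, json=query, timeout=30)
resp.raise_for_status()

# Each returned entry is an OSV record (which may reference CVE IDs as aliases).
for vuln in resp.json().get("vulns", []):
    print(vuln["id"], "-", vuln.get("summary", "")[:80])
```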
Why are these two databases more reliable than the NVD? It’s
because they use a much more reliable identifier than CPE: purl. You can read
why purl is so much more reliable than CPE in this
post, but one difference that should be very obvious by now is the fact
that nobody needs to “create” a purl. In other words, no single government
agency can gum up international vulnerability management activities by stopping
work without any explanation (not that this would ever happen, of course 😊).
An open source product that’s made available in a package manager effectively already has a purl, since anybody who has the spec and knows the name of the package manager, along with the name and version string of the product in that package manager, can create the purl themselves. Of course, all three of those items are readily verifiable.
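As a quick sketch of how mechanical this is, here is how a purl could be assembled in Python using the packageurl-python library; the package named here is just an example, and any tool that follows the purl spec would produce the same string.

```python
from packageurl import PackageURL  # pip install packageurl-python

# The three pieces of information mentioned above: the package manager
# ("type" in purl terms), the package name, and the version string.
purl = PackageURL(type="npm", name="lodash", version="4.17.15")

print(purl.to_string())  # pkg:npm/lodash@4.17.15

# Anyone with the same three facts produces exactly the same purl - no
# central authority has to create or assign it.
```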
You may be wondering why, given that purl is so much more reliable than CPE and that CPE is rapidly going down the tubes, we can’t use purl right now to search for vulnerabilities in the NVD. There are two reasons:
1. The CVE Numbering Authorities (CNAs), who report new vulnerabilities to CVE.org (from where the records are passed on to the NVD and other databases that are based on CVE identifiers), were, until this past May, not allowed to include any software identifier other than CPE in a CVE record.[i] In May, the new CVE v5.1 spec added a field for a purl. However, there has probably been zero use of that field, mainly because there’s currently no vulnerability database that can utilize CVE records that include purls. If the CNAs add purls to CVE records today, the records will be all dressed up with nowhere to go.
2. Purl currently provides a way to identify only open source products distributed in package managers, not commercial software products. The only identifier currently available for commercial software products is CPE. It’s not a good identifier for commercial products (or for open source products either, of course), but it’s the only one available today. Faute de mieux (for lack of anything better), as the French say.
I’m proud to announce that the OWASP SBOM Forum is starting
to work on both of these issues now. You can read how we’re working on the second
issue in this post
that I linked earlier. However, we won’t get very far without more
participation, as well as financial support (which can be in the form of donations
of $1,000 or more to OWASP – a 501(c)(3) nonprofit organization – that are “restricted”
to the SBOM Forum). If you can contribute time or funds, please email me.
Any opinions expressed in this
blog post are strictly mine and are not necessarily shared by any of the
clients of Tom Alrich LLC. If you would like to comment on what you have
read here, I would love to hear from you. Please email me at tom@tomalrich.com.
My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.