Monday, May 13, 2024

How do we replace the NVD? Should we replace it?

At the OWASP SBOM Forum’s weekly meeting on Friday, we heard what the NVD’s big problem is: it no longer has funding. It turns out the collapse on February 12 really was due to a funding shortfall – a massive one, although perhaps not all the way to zero. Some people had suggested the problem was cutbacks by NIST. I pooh-poohed that idea, since budgets are set before the beginning of the government’s fiscal year, which starts October 1.

However, it turns out I didn’t know where the lion’s share of the NVD’s funding comes from. And guess what? It doesn’t come from NIST. It was that other source that abruptly cut off the NVD’s funding in February. At that point, the NVD probably had to release some staff members to other projects at NIST (as I’m sure you know, NIST does a lot of work in cybersecurity, so there are probably always openings on other projects; I doubt anyone from the NVD was put on the street, unless they decided to quit). The NVD kept “enriching” a small number of CVEs, but at the meeting on Friday, someone (I think Andrey Lukashenkov of Vulners) said they hadn’t done any enriching for ten days.

This means that whoever is still working for the NVD is now just trying to maintain the database they have. If you search the NVD for vulnerabilities applicable to a product you use, don’t expect to find any recent ones – and of course, those are the vulnerabilities most users care about. The NVD without data on recent vulnerabilities is about as useful as a car without a steering wheel.
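To make that “search the NVD” step concrete, here is a minimal sketch of building such a query. The endpoint and the `cpeName` parameter come from NIST’s published NVD API 2.0 documentation, but verify them before relying on this; no request is actually sent here.

```python
from urllib.parse import urlencode

# The NVD's REST API 2.0 endpoint, per NIST's public documentation.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_query_url(cpe_name: str) -> str:
    """Build the URL that asks the NVD for CVEs affecting one CPE name."""
    return f"{NVD_API}?{urlencode({'cpeName': cpe_name})}"

# A well-known product as an example; any valid CPE 2.3 name works here.
url = nvd_query_url("cpe:2.3:a:openssl:openssl:3.0.0:*:*:*:*:*:*:*")
print(url)
```

The catch, of course, is that a query like this only returns CVEs the NVD has already enriched with CPE names – which is exactly the work that has nearly stopped.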

Why did that other source cut off their funding? I don’t know, and finding out isn’t a big priority for me. What is a priority is deciding where the software security community goes from here, as well as what options are available to organizations concerned about identifying vulnerabilities in the software they use.

As luck would have it, I’d realized a couple of months ago, after the NVD’s problems became apparent, that there were too many individual threads that needed to be pulled for there to be succinct answers to these questions; therefore, there needed to be a group effort to pull the threads and put together a coherent picture. This is why I formed the Vulnerability Database Working Group of the OWASP SBOM Forum. I summarized my idea of the group’s agenda in this post. I still think it’s a good agenda, although the group will decide what we want to discuss and what document(s) we want to produce – and those ideas are likely to change as we go along.

However, there has been one significant change since I wrote that post. Then, it was unclear what (if anything) CVE.org would do to step up to replace the NVD. It’s now clear that they are doing a lot. There have been two important changes:

The first change is that CVE Numbering Authorities (CNAs) – groups like GitHub and the Linux kernel, as well as software developers like Red Hat and Microsoft, which report vulnerabilities in their own products and in other products within their scope; the CNAs report to CVE.org – will now be encouraged to include in their CVE reports the CPE name for the product, the CVSS score and the CWEs. As the announcement by CVE.org points out, “This means the authoritative source (within their CNA scope) of vulnerability information — those closest to the products themselves — can accurately report enriched data to CVE directly and contribute more substantially to the vulnerability management process.” It never made sense for the NVD to create these items – or to override what the CNA had created, which often happened.

The other significant change is that CVE.org now supports what was previously called v5.1 of the CVE JSON specification, now called the “CVE Record Format v5.1”. In early 2022, Steve Springett and Tony Turner of the OWASP SBOM Forum submitted a pull request to CVE.org to get purl identifiers included in the CVE spec. We missed the v5.0 spec but made it into v5.1. This means that, with some training, CNAs will now be able to include purls in their CVE reports (although they may still have to include CPEs), and software users as well as developers will be able to find vulnerabilities for open source products using purls. This will also be a boon to open source vulnerability databases like OSS Index and OSV.
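Here is a hypothetical sketch of what purl-based lookup enables once CNAs include purls in their records: exact string matching, instead of the fuzzy guessing CPE often requires. The CVE IDs and records below are invented for illustration; only the purl syntax follows the purl specification.

```python
# Invented records: what a purl-aware vulnerability lookup could work from
# once CNAs report purls in the CVE Record Format v5.1.
cve_records = {
    "CVE-2099-0001": {"affected_purls": ["pkg:npm/example-lib@1.2.3"]},
    "CVE-2099-0002": {"affected_purls": ["pkg:pypi/example-tool@2.0.0"]},
}

def cves_for_purl(purl: str) -> list[str]:
    """Return the (invented) CVE IDs whose records list this exact purl."""
    return [cve for cve, rec in cve_records.items()
            if purl in rec["affected_purls"]]

print(cves_for_purl("pkg:npm/example-lib@1.2.3"))
```

The key point is that the purl in an SBOM and the purl in a CVE record can match character-for-character, because both are derived from the same package coordinates – no lookup in a central dictionary like the CPE one is needed.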

Our reasons for advocating purl were described in this white paper, which we published in September 2022 (see especially pages 4-6, where we describe some of the many problems with CPE). In that paper, we also described a way to handle proprietary software in purl (since purl currently applies exclusively to open source software), based on use of SWID tags – but the mechanism for making the SWID tags known to software users was left open. This isn’t a technical problem but an organizational one, since it might require creating a spec for software developers to follow when they want to advertise SWID tags for their products. It would be nice to see that done sometime, but I don’t have the time to lead it now.

The third topic in that paper was an identifier for intelligent devices, since CPE is deficient in that area as well. We suggested the GTIN and GMN identifiers, now used widely in global trade. I didn’t think we were even going to push those identifiers at the time, yet it seems that Steve and Tony must have included them in the pull request, because CVE says they’re supported in 5.1 as well! If CNAs start including these in CVE reports for devices, that might significantly change the whole “discipline” of vulnerability management for devices, which is frankly not making much progress now. This is because only a very small percentage of device makers report vulnerabilities in their devices to CVE.org today.

I’m encouraged that CVE.org is picking up their game. However, an organization that currently uses the NVD exclusively for vulnerability data would not be well advised to simply switch their total dependence over to CVE.org, since the “front end” capabilities needed for those organizations to make use of the data are less robust in CVE than they are (or were) in the NVD.

We will be discussing these and similar issues in the OWASP Vulnerability Database Working Group in our meetings tomorrow and every two weeks thereafter. If you would like to join the group, drop me an email.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Saturday, May 11, 2024

NERC CIP and the cloud: The Regions clearly state the problem

I and other consultants have been pointing out the serious problems being caused by the fact that the current CIP standards effectively prohibit use of the cloud for medium and high impact systems. This is not because the requirements explicitly forbid cloud use – indeed, the current requirements say nothing at all about the cloud – but because cloud service providers would never be able to provide the evidence required for a NERC entity to prove compliance with prescriptive CIP requirements like CIP-007 R2 and CIP-010 R1.

Now, two highly respected staff members from the Regional Entities, Lew Folkerth of RF and Chris Holmquest of SERC, have written an excellent document that states the problem very clearly. It was published in the newsletters of both Regions: SERC (https://serc1.org/docs/default-source/outreach/communications/serc-transmission-newsletter/2024-newsletters/2024-05-newsletter/n-the-emerging-risk-of-not-using-cloud-services_2024-05.pdf?sfvrsn=e6923606_2) and RF (https://www.rfirst.org/resource-center/the-emerging-risk-of-not-using-cloud-services/).

I am reproducing the article below with permission from both Regions. I have highlighted two paragraphs I think are especially important. The one problem I have with this document is that it doesn’t emphasize the most serious issue here: it will likely take at least 5-6 years for the new standard(s) to be developed and approved, along with the changes to the NERC Rules of Procedure that will probably be required, and for all of this to take effect.

Yet, given the accelerating pace at which software and service providers are moving to the cloud, there will likely be negative impacts to grid security and reliability within a couple of years. There will probably need to be a shortcut available soon that allows some NERC entities to continue using software and services (especially cybersecurity services) they would otherwise have to stop using. We don’t want the NERC CIP standards, which were implemented to improve grid reliability and security (and have done so, to be sure), to become the biggest impediment to that improvement.

If you have questions for either Region, you can get them answered here: SERC (https://www.serc1.org/contact-us) and RF (https://www.rfirst.org/contact-us/). Also note that the project page for the new NERC “Risk Management for Third Party Cloud Services” standards development project – which will begin operations in Q3 – has been created. You should bookmark this and check it regularly.  

 

THE EMERGING RISK OF NOT USING CLOUD SERVICES

By: Chris Holmquest, Senior Reliability and Security Advisor, SERC; Lew Folkerth, Principal Reliability Consultant, RF

In the ERO, we are seeing forces that foretell an inevitable move to cloud-based services for many operational technology (OT) applications and services. Cloud technology has been advancing for many years, and software and service vendors are now migrating their products to take advantage of this new technology. Even when our industry addresses the security concerns of this migration, there will still be compliance concerns. We will share the efforts underway to identify the risks to reliability, security, and compliance that our industry must address before we can move forward in this area.

Security challenges for on-premises OT systems

Vendors of security monitoring, asset management, work management, and other essential services are moving toward cloud-based services at a very rapid pace with their applications and infrastructure. This brings a new risk to light: soon we may be seeing end-of-life notices for on-premises systems, which translates to lessened or non-existing support, including security patches. Some members of our industry have already observed that new and important features are being implemented only in the cloud-based offerings.

Entities are looking at the potential benefits that cloud-based software and services can bring. As entities in our industry are challenged to acquire sufficient resources to manage their reliability, security, and compliance risks, cloud services can offer attractive solutions to manage these risks while lowering costs in capital investment and support.

Moving to the cloud presents risks as well, not the least of which is being confident that your systems and data are secure. Even when you are confident in the security of your systems and data, you will still face compliance risks. 

Compliance challenges for OT cloud services

Except for BES Cyber System Information, the use of cloud services will not be possible for high and medium impact BES Cyber Systems under the present CIP Standards, because compliance risk would be increased beyond an acceptable level. New Reliability Standards will be required, and those standards will need to be risk-based; there are too many variables in cloud environments to be able to write prescriptive standards for these cases.

Your compliance processes will need to be very mature and integrated with operational processes and procedures. Internal controls will become even more important.

Auditing processes will need to be adapted to cloud environments to determine the type, quality and quantity of evidence that will be needed to provide reasonable assurance of compliance. 

The path forward

There are efforts underway to help with this complex dilemma. We are looking at these various issues and have formed an ad-hoc team of Electric Reliability Organization and Federal Energy Regulatory Commission staff, cloud service provider vendors, industry consultants, training experts, and electric industry security, compliance, and management personnel. This team is providing ad-hoc support to other existing groups working to advance the use of cloud technologies. So far, these efforts include work on a series of industry webinars to address issues with using cloud in our OT and CIP environments. Awareness of cloud technologies for our systems is crucially important, and these webinars will be designed for a broad audience. Efforts also include a field test of a cloud-based system and investigating third-party assessments, which may be essential to accommodate the CIP Standards with a cloud system.

There is a formal NERC subcommittee under the Reliability and Security Technical Committee called the Security Integration and Technology Enablement Subcommittee (SITES). Registered entity staff and vendors are members of this group, and they have published a white paper called “BES Operations in the Cloud” that we recommend.

A SITES sub-team, New Technology Enablement (NTE), is in the process of creating a series of white papers to help move the standards development effort from a stance that follows technology developments after the fact, to a leading process where standards development is part of early adoption of applicable technologies. The goal of NTE is to enable use of the best available tools and techniques in our most critical systems. Their first effort will be a paper titled “New Technology Enablement and Field Testing.” 

Getting involved

The ability to use cloud services to reduce security risk and to improve reliability and resilience is important to the future of our industry.

We suggest that you read the SITES white paper and consider volunteering to participate in the SITES and/or NTE groups if you would like to contribute.

SANS, the well-known security training organization, will be hosting the series of webinars mentioned above. Please watch for the announcements for these webinars. Also, there is a recorded SANS Summit Panel discussion (link below) of this risk and possible directions forward.

A new standards development project, Risk Management for Third-Party Cloud Services, has been established (see link below). This project is scheduled to become active in the third quarter of 2024.

Please stay abreast of these developments and consider how your knowledge and industry experience can contribute to these efforts. 

References

• Security Integration and Technology Enablement Subcommittee (SITES)

• White paper: BES Operations in the Cloud

• SANS Summit Panel – We Hear You Cloud and Clear

• 2023-09 Project – Risk Management for Third-Party Cloud Services

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Thursday, May 9, 2024

Is it time to seriously discuss the Global Vulnerability Database?


Last August, I wrote a post called “A Global Vulnerability Database”. In the post, I wrote about a disappointing experience the SBOM Forum (now the OWASP SBOM Forum) had recently had with Tanya Brewer of NIST, the leader of the NVD. We had reached out to her in early 2023 and offered to help the NVD, especially regarding implementing the recommendations in the white paper we published in 2022 on how to fix the NVD’s problems with software naming.

We first met with Tanya in April 2023. She said then that she’d like to put together a “public/private partnership” so that we and one or two private corporations – who had also offered to help them – would have a structured way to do that. When she talked to us again in June, she had worked out with NIST’s lawyers the idea for a “Consortium” which organizations could join to help the NVD – although it seemed the main help she wanted was for companies to provide her warm bodies, which were attached to minds that knew something about coding.

Those bodies would need to be ensconced at the NVD for at least six months. The first couple of months would be spent learning some obscure programming language I’d never heard of (which says something, since I wrote my first couple of programs on punch cards). Offhand, suggesting that young, ambitious coders spend 6 months immersed in working with a decades-old language, probably used nowhere but the NVD, didn’t seem to me to be a wise career suggestion. However, we all appreciated her enthusiasm.

She described in detail her plans to announce the Consortium in June, have it published in the Federal Register in August, and get it up and running by the end of the year. Those plans seemed very unrealistic but again, we appreciated her spirit.

However, that spirit seemed to dissipate quickly; in fact, the next we heard about the Consortium was in NIST’s announcements about the NVD in late February, when NIST pointed to the Consortium as the cavalry that was at that minute galloping through the sagebrush on their way to rescue the NVD from its problems. However, it seems the cavalry lost its way, because – despite Tanya’s promise at VulnCon in late March that the Consortium would be announced in the Federal Register imminently – nothing more has appeared on that front (I think the cavalry got enticed by Las Vegas when they galloped through it. They haven’t been heard from since).

By the time I wrote the post last August, I was already disappointed that we hadn’t heard back at all from Tanya, and I began to wonder if just maybe the NVD wasn’t quite the pillar of the cybersecurity community that we believed it was.

This was why I started thinking more about an idea we’d talked about a few times in the SBOM Forum: the need for a vulnerability database that wouldn’t be subject to the vagaries of government funding, in an era when a bill to name a post office after George Washington might prove too controversial to get through Congress and the US has to decide every couple of years whether it wants to pay its debts at all.

In our discussions, we had agreed:

1. This should be designed as a global database from the start. Governments and private sector organizations worldwide would be welcome to contribute to it, but we didn’t want the database to be dependent on one primary source of funding, especially a government one. Been there, done that, got the T-shirt.

2. The database should follow the naming conventions we suggested in our 2022 white paper, or something close to them. Of course, there are still a lot of details to be filled out regarding those suggestions.

3. Putting all this data in one database, and maintaining it over time, would require a huge effort and a lot of support (including financial support). However, given that so many organizations worldwide have used the NVD for free for many years – with an overall good experience, and without once being asked to contribute a dime – I was, and remain, sure that the support will be there when we ask for it. My guess is there’s a lot of pent-up “contributors’ demand” waiting to be satisfied.

4. The database would need to be developed and implemented by a separate non-profit corporation, but once it was up and running smoothly, it could be turned over to an international organization like IANA, which coordinates the global IP address space and the DNS root zone (how many hundreds or thousands of times do you use IANA’s services every day, without once thinking about who enables all of that?).

While I didn’t write about the GVD again until November of last year, I kept thinking about it. One thing I realized was that the idea of a “harmonized” database – with a single type of identifier for software and devices, as well as a single identifier for vulnerabilities (many people don’t realize there is any vulnerability identifier besides CVE, but there are many, albeit most not widely used) – was not only very difficult to implement but also completely misses the point: there are diverse identifiers because their subject matter (software products, intelligent devices, and vulnerabilities) is diverse.

A good example of this is purl vs. CPE as an identifier for open source software (currently, purl only works with OSS, although it has literally taken over the OSS world). A single OSS product and version may be available in different package managers or code repositories, sometimes with slight differences in the underlying code. The purl for each package manager and repository will be different from the others (since the mandatory purl “Type” field is based on the package manager). Yet, there will be only one CPE in the NVD and it won’t refer to the package manager, since there’s no CPE field for that.

Thus, there can never be a one-to-one “mapping” of CPEs to purls (or vice versa), meaning harmonization of OSS identifiers in a single database would be impossible. Similar considerations make it impossible to have a single identifier for vulnerabilities, with all other identifiers mapped to it.
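The mismatch can be made concrete in a few lines. All product names below are hypothetical; only the purl syntax (the mandatory “type” segment after `pkg:`) follows the purl specification.

```python
def purl_type(purl: str) -> str:
    """Extract the type component, e.g. 'npm' from 'pkg:npm/lodash@4.17.21'."""
    assert purl.startswith("pkg:")
    return purl[len("pkg:"):].split("/", 1)[0]

# One open source product, one version -- but three distinct purls,
# because the purl type records the ecosystem it was obtained from:
purls = [
    "pkg:npm/example-lib@1.2.3",               # the npm registry build
    "pkg:deb/debian/example-lib@1.2.3",        # the Debian package
    "pkg:github/example/example-lib@v1.2.3",   # straight from the repo
]

# ...while the NVD would typically hold a single CPE for that product,
# with no field at all for the package manager or repository:
cpe = "cpe:2.3:a:example:example-lib:1.2.3:*:*:*:*:*:*:*"

print(sorted({purl_type(p) for p in purls}))  # three types, one CPE
```

Which of the three purls should that one CPE map to? There is no principled answer, which is why a one-to-one mapping table can’t exist.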

But I also knew that harmonization of identifiers – while probably an absolute requirement of database technology in, say, the 1970s – is hardly necessary today; the last time I checked, it was 2024. Today, a single database can easily hold multiple identifiers for a single product without breaking a sweat.
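A trivial sketch shows the point; the schema and names below are invented for illustration, using SQLite from the Python standard library.

```python
import sqlite3

# Nothing about modern database design forces one identifier per product.
# A simple join table lets one product row carry a CPE, several purls,
# or any other identifier scheme side by side.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE identifier (
        product_id INTEGER REFERENCES product(id),
        scheme TEXT,   -- 'cpe', 'purl', ...
        value TEXT
    );
""")
con.execute("INSERT INTO product VALUES (1, 'example-lib')")
con.executemany(
    "INSERT INTO identifier VALUES (1, ?, ?)",
    [
        ("cpe", "cpe:2.3:a:example:example-lib:1.2.3:*:*:*:*:*:*:*"),
        ("purl", "pkg:npm/example-lib@1.2.3"),
        ("purl", "pkg:deb/debian/example-lib@1.2.3"),
    ],
)
rows = con.execute(
    "SELECT scheme, value FROM identifier WHERE product_id = 1"
).fetchall()
print(len(rows))  # one product, three identifiers
```

Queries can then match on whichever identifier the user happens to have – no harmonization required.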

But that led to another idea: Why do we need a single database at all? After all, since the different identifiers are often used in specialized databases dedicated to particular types of vulnerabilities or software, and since these databases are often compiled by people with specialized knowledge of the contents, it would be a shame to try to homogenize those data types and especially those people. Instead of the richness and diversity available today, we would get a bland product produced by bland people, in which a lot of the detail had been flushed away in the name of harmonization.

However, even though we have a rich variety of vulnerability databases available today, it’s hard for most people to understand how they can be used together. The SBOM Forum’s Vulnerability Database Working Group will be cataloging them and providing suggestions on how to use them together, since in the near term there will be no single database with the scope of the NVD; we may start discussing the GVD in a few months. If you’re interested in joining that group – which meets bi-weekly on Tuesdays at 11AM ET – drop me an email.

That’s why there needs to be a single intelligent front end that directs queries to the appropriate database(s). That front end will be called the GVD, but – and please keep this a close secret between you and me – it will in fact simply be like the Wizard of Oz: a single man standing behind the curtain, turning knobs and pulling levers to give the impression of a single, massively efficient database. Pretty slick, no?

So, I wrote this post in November. I’m pleased to say that it was met with massive, universal…indifference. Of course, that happens to a lot of my posts. I didn’t expect anything different this time, since there was nothing on the horizon that might make people think we needed to start thinking about a different long-term database solution…

…Until February 12, when the NVD seems to have been swallowed by a black hole – and three months later, we still don’t even have a coherent explanation for the problem, let alone the beginning of a fix. Early this week, Brian Martin posted a long, thoughtful analysis of my November post on LinkedIn. He raised some very good questions. I promised to answer them in a few days, but then decided I’d first like to put up a post explaining how I and other members of the OWASP SBOM Forum came up with the idea for the GVD.

Here’s that post. I’ll put up a new post with the answers to Brian’s questions by early next week. Please contain your excitement until then.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Monday, May 6, 2024

It seems I was wrong about medical devices

In January, I pointed out that at least eight of the top ten medical device manufacturers worldwide have either never reported a vulnerability to CVE.org or have only reported a small number of them. I said this not as direct criticism of medical device makers, but as illustration of the fact that intelligent device makers of all types are mostly not reporting vulnerabilities at all. After all, if medical device manufacturers – which are subject to the most stringent cybersecurity regulations worldwide – aren’t reporting vulnerabilities, what hope is there that manufacturers of baby monitors will do so?

Why is it important that the developer of the software or the manufacturer of a device report the vulnerability themselves? The reason is simple: a very high percentage of vulnerabilities are reported by the developer or manufacturer; if they don’t report one, in most cases nobody else will. This means that when a user of the device searches a vulnerability database for vulnerabilities in it, they will never find any.

I want to note that nobody is saying that any software developer or device manufacturer should report a vulnerability for which a patch isn’t yet available, except in extraordinary cases like when a vulnerability is already being widely exploited. In those extraordinary cases, user organizations like hospitals should be able to learn that products they use have that vulnerability, so they can mitigate the threat using other measures (like removing the device from their network altogether), pending availability of the patch.

At least one major medical device maker (MDM) has told me they report vulnerabilities in their devices on their customer portal so their customers can learn about them, although they admit this isn’t a foolproof method to keep this information out of bad hands. They may correlate one of those vulnerabilities with a published CVE number, but if they don’t report the vulnerability to CVE.org, a search on a public vulnerability database will never yield the fact that their device is vulnerable to that CVE.

Of course, this means that nobody other than a customer of a medical device can learn of vulnerabilities in it, and nobody (whether a customer or otherwise) will be able to compare competing devices to learn whether they have the same vulnerabilities. But of course, this might be a good thing. After all, if none of your competitors are reporting vulnerabilities (and there’s no way in most databases to tell the difference between a device that’s never had a vulnerability and one that’s loaded with them, but has never reported a single one to CVE.org), who wants to stand out by reporting them?

At our most recent OWASP SBOM Forum meeting, we were discussing this problem, and I said I didn’t think there was a good excuse for the fact that the MDMs aren’t usually reporting vulnerabilities in their devices. At that point, the device security manager for one of the most prestigious hospital organizations in the US provided a very good reason why the MDMs don’t report them (and I’ll point out that I’ve known this individual through the NTIA and CISA SBOM efforts since 2020. In general, he doesn’t trust MDMs as far as he can throw them):

1. Hospitals, like many other organizations (although probably more so), are seriously backlogged in applying security patches. This is partly because, unlike in many other organizations, it is very hard to bring a device down when it’s time to apply a patch, since devices are often hooked up to patients – and nobody wants to see a technician disconnect Grandma’s infusion pump to apply a patch!

2. If the MDM follows the usual practice of reporting a vulnerability only after releasing a patch for it, there will likely be a significant time lag between the vulnerability report and the point when most devices are protected by the patch.

3. Of course, this would pose a serious risk to patients. And I’ll point out that the same reasoning applies to electronic relays that control the power grid, devices in pipeline pumping stations, etc.

But another serious risk to patients is being hooked up to a device carrying vulnerabilities that have been there for a year or two, whose existence has almost certainly become known to the bad guys by now. There needs to be some deadline by which a hospital must either patch the device or take another mitigation step, like removing the device from its network altogether. Maybe that would be six months for vulnerabilities that aren’t being actively exploited, but three months for vulnerabilities on CISA’s Known Exploited Vulnerabilities (KEV) catalog.
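The deadline rule suggested above can be sketched in a few lines. To be clear, the six-month and three-month periods are this post’s suggestion, not any existing regulation, and the month arithmetic is deliberately rough.

```python
from datetime import date, timedelta

def mitigation_deadline(disclosed: date, on_kev: bool) -> date:
    """Suggested deadline to patch or otherwise mitigate: three months if
    the CVE is on CISA's Known Exploited Vulnerabilities (KEV) catalog,
    six months otherwise. Uses 30-day months as a rough approximation."""
    months = 3 if on_kev else 6
    return disclosed + timedelta(days=30 * months)

d = date(2024, 1, 15)
print(mitigation_deadline(d, on_kev=False))  # roughly six months out
print(mitigation_deadline(d, on_kev=True))   # roughly three months out
```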

If the hospital can’t meet those deadlines, they’ll have to invest in enough extra devices that a vulnerable device isn’t sitting on their network forever, without anyone outside the manufacturer and the hospital knowing about it.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Sunday, April 28, 2024

The NVD fades away


I didn’t know whether to laugh or cry when I saw the NVD’s most recent announcement (last week) about…what is this about, anyway? Here is what it said:

NIST maintains the National Vulnerability Database (NVD), a repository of information on software and hardware flaws that can compromise computer security. This is a key piece of the nation’s cybersecurity infrastructure.
There is a growing backlog of vulnerabilities submitted to the NVD and requiring analysis. This is based on a variety of factors, including an increase in software and, therefore, vulnerabilities, as well as a change in interagency support.
Currently, we are prioritizing analysis of the most significant vulnerabilities. In addition, we are working with our agency partners to bring on more support for analyzing vulnerabilities and have reassigned additional NIST staff to this task as well.
We are also looking into longer-term solutions to this challenge, including the establishment of a consortium of industry, government, and other stakeholder organizations that can collaborate on research to improve the NVD.
NIST is committed to its continued support and management of the NVD. Currently, we are focused on our immediate plans to address the CVE backlog, but plan to keep the community posted on potential plans for the consortium as they develop.

I ultimately decided that crying was the appropriate response. Because this announcement makes it clear that

1.      The NVD’s staff has no idea what the cause of their current problems is;

2.      They don’t have a plan for finding and fixing the problem; and

3.      They’re not at all concerned about the effect this is having on their remaining supporters (both of them), despite their proud assertion that the NVD is “a key piece of the nation’s cybersecurity infrastructure.”

You’ll notice the words “sorry” and “apologize” are nowhere to be found in this announcement. Don’t you think that the NVD staff should be concerned that this “key piece” they’ve been furnishing for so long is no longer working, and that a huge number of cybersecurity professionals worldwide, who had thought the NVD would always be around, are now officially SOL (that’s a technical acronym)? The absence of those two words with magical properties (at least according to my mother) tells the whole story.

Apologies are certainly due, especially for insulting our intelligence with these ridiculous assertions:

1. The problem is that there is “a growing backlog of vulnerabilities” – i.e., this problem has been gradually building over time. In fact, as Patrick Garrity’s analysis has shown, the NVD was “analyzing” CVE reports at roughly the same rate that new reports were appearing until February 12 of this year. On that day, the number of CVEs “awaiting analysis”, which had been effectively zero so far this year, started trending rapidly upward (the red line in the graph in this post). At the same time, the number of CVEs analyzed almost flatlined.

In other words, on February 12 the NVD’s analysis of CVE reports – the step that adds the CPE names required to determine which product a CVE applies to – essentially stopped. A CVE report without a machine-readable identifier like CPE for the affected products is like a car without a steering wheel. Yes, it will move, but you would be a fool to expect good results.

I checked back with Patrick last week and asked if there had been any significant change in the rate of new CVEs being analyzed (which had dropped to less than a tenth of the previous rate in February). Patrick had good news and bad news. The good news is that indeed, the rate at which new CVEs are being analyzed is not on a flat line anymore. The bad news is that the line is declining, not increasing – meaning that the NVD’s productivity has declined from the already low rate it’s been at since February. In fact, someone told me on Friday that the NVD had analyzed exactly one CVE during all of last week. I guess that’s good news of a sort, since it means the rate can hardly decline any further!

2. “This is based on a variety of factors…”. If it were really a variety of factors, the problem would have been building for some time, rather than occurring on one day in February.

3. “…including an increase in software and, therefore, vulnerabilities…” Of course! That must be the reason for this problem! After all, how could anyone have anticipated that the amount of software – and therefore the number of vulnerabilities – would increase, other than the fact that it’s been increasing every year since programmable computers first appeared in the early 1950s?

4. “…as well as a change in interagency support.” Translation: NIST’s budget for the current fiscal year (which started last October) was cut by 12% in March, when the budget was finally approved by Congress. I agree it’s appalling that federal agencies like NIST don’t really know what their budget for the fiscal year is until the year is well underway. In fact, Tanya Brewer of NIST (the leader of the NVD project) told the OWASP SBOM Forum last spring that the NVD usually doesn’t get its share of the NIST fiscal year budget until the summer (i.e., 8 or 9 months into the fiscal year), even when Congress has approved the budget by the end of the calendar year (i.e., about 3 months into the fiscal year), as it usually has. However, since this situation has happened every year in recent memory, does it really explain the sudden drop-off in February?

5. “In addition, we are working with our agency partners to bring on more support for analyzing vulnerabilities and have reassigned additional NIST staff to this task as well.” This would be good news if the rate of vulnerabilities analyzed were increasing – but it’s not. It seems the more people NIST throws at this problem, the worse it becomes! What they’re really saying is, “You won’t see positive results from the additional staff for some time – but just bear with us. And cross your fingers that one of the CVEs still awaiting analysis isn’t the next Log4Shell.”

6. “We are also looking into longer-term solutions to this challenge, including the establishment of a consortium of industry, government, and other stakeholder organizations that can collaborate on research to improve the NVD.” Ah, yes, this is my favorite part of the announcement – and it’s been in the announcement from when the first one was put up in late February (after the problem had been ongoing for a couple of weeks). “Don’t worry, the cavalry is on the way as we speak. They’ll be able to fix everything (even though we still don’t know what needs to be fixed).”

The idea of a Consortium came out of the SBOM Forum’s two discussions with Tanya last spring. In the first discussion (in April), we asked how we could help the NVD implement the recommendations in our 2022 white paper on the problems with CPE and how they can be solved. Her answer was that the best way would be through some sort of public-private partnership, but she needed to talk to the NIST lawyers to find out how that would work.

She talked to us again a month or two later and announced that the right way to do this would be to form a “Consortium” of organizations (including both private-sector and public-sector entities) interested in helping the NVD. She outlined a specific set of steps that would be required:

1.      She would post some videos in July, explaining her ideas for the Consortium and soliciting comments on those ideas.

2.      She would study these comments and draft her statement about the purpose of the Consortium, which would form the basis for the required posting in the Federal Register.

3.      She expected the notification to appear in the FR in August, at which point organizations could start contacting someone on the NVD team to discuss how to join the Consortium.

4.      There would be a number of steps required to join the Consortium, the first being NIST’s decision that they want your organization to participate (I guess you can’t take that for granted!).

5.      By far the most important of the steps was that the organization would have to negotiate a customized Cooperative Research and Development Agreement (CRADA) with NIST. Doing so would require agreeing on an area of expertise that the organization has that would benefit NIST, and on how NIST would be able to take advantage of that expertise.

Tanya told us (in June 2023) that she thought the Consortium would be up and running by the end of the year. We all thought that was wildly optimistic (the CRADA negotiation itself sounds like a six-month effort at least), but we appreciated that she wanted to do this.

Or at least, we thought she wanted to do this. I didn’t hear (or read) anything from Tanya on the Consortium until the NIST announcement in February that pointed to the Consortium as the solution to the NVD’s problems. A month after that announcement, Tanya stated at VulnCon (at the end of March, six weeks after the NVD’s problems started) that she expected an explanation for the NVD’s problems to be posted within ten days (she said they knew what they wanted to say but needed to get the required approvals from higher-ups). She also said she expected the announcement of the Consortium would appear in the Federal Register in two days.

Needless to say, neither of these promises was kept, any more than the ones Tanya made to the OWASP SBOM Forum last June. And there was no explanation or – heaven forbid! – apology for that fact.

People familiar with the workings of the federal government have told me that Tanya’s idea for a Consortium isn’t a bad one on its face, but they find it hard to believe the Consortium would be up and running in less than 2-3 years.

Moreover, what will the Consortium do when they meet? The current problem is almost certainly due to the fact that the NVD’s infrastructure is a couple of decades old and seems to be almost completely lacking in redundancy. What can they do to fix that, other than rip it out and replace it with something newer? And if that’s the answer, why wait for a Consortium to point out something that’s painfully obvious today?

Fortunately, I’m pleased to announce that the cavalry actually is on the way. They’re not flying the flag of the Consortium, but of CVE.org, the database (formerly known simply as “MITRE”, and still operated by MITRE Corporation) in which most of the data in the NVD originates. CVE.org will put up a blog post within two weeks detailing what’s going on, but my last post should give you some idea of that (not that I’m privy to all of the details, of course).

Even better, you can join the SBOM Forum’s Vulnerability Database working group, which will hold its first bi-weekly meeting on Tuesday, April 30 at 11AM Eastern Time. We’ll be discussing (with at least one member of the CVE.org Board) what CVE.org offers today and what needs to be added, not only to replace the functionality of the late, lamented NVD, but also to go beyond it. There are lots of things you can do if you’re not constrained by two decades of…stuff.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Friday, April 26, 2024

Maybe there’ll be a happy ending to the NVD story yet!

It seems almost normal that a French citizen would follow goings-on in the US government having to do with vulnerability management much better than US citizens – or at least the US citizen who writes this blog (and no, that US citizen’s name isn’t ChatGPT!). Since I connected with Jean-Baptiste Maillet (JB for short) on LinkedIn earlier this year, I’ve learned a lot of things from him about vulnerability management and the vagaries of CVE, CPE and other TLAs (three-letter acronyms).

Moreover, he has curious reading habits. Early this week, he put up a post on LinkedIn about the meeting minutes of the CVE.org board on April 3. He knew I’ve been speculating a lot recently that the CVE.org database (formerly called MITRE, and still operated by contractors from MITRE Corp.) would be a fairly easy substitute for the NVD. This is both because CVE.org is much more modern and fully redundant (neither of which adjectives applies to the NVD), and because it’s the original source of most of the data in the NVD.

Even given those facts, I’ve been cautious about predicting that CVE.org would replace the NVD as the US government’s go-to vulnerability database. I reasoned that, since the only “boss” over both the NVD and CVE is one Joseph Biden – and Mr. Biden seems to have more weighty issues on his mind nowadays than the travails of the software supply chain security industry – the likelihood that this switch would be made within, say, the next two years was quite low.

However, I was quite pleased by what JB reported from reading those minutes (which I’ve never even thought to read):

1. “The CVE Program will be reaching out to CNAs (the top 10 code-owning CNAs by number of publications) to make sure they are aware that they can submit enriched data (e.g., CPE, CWE, CVSS) directly to the CVE Program, rather than submitting it separately to the NVD.”

This is quite significant: The CVE Numbering Authorities (CNAs) create virtually all the CVE reports that go into CVE.org, the NVD and lots of other public and private databases worldwide. Until recently, the CNAs have been either not able or not allowed (depending on who you talk to) to create CPE names (CPE is the only way software can be identified in the NVD) and CVSS scores for their CVE reports.

NIST, which runs the NVD, for a long time discouraged the CNAs from creating CPEs. And if a CNA created a CVSS score, NIST would create its own score, almost always higher than the CNA’s. What’s odd about this is that CNAs are often large software developers (Red Hat, Oracle, Microsoft, HPE, Schneider Electric, etc.), and most of the CVE reports they create are for their own products. Why shouldn’t NIST have allowed CNAs to name their own products? Some CNAs have complained that the NIST staff often make mistakes in creating CPE names; of course, those mistakes make it difficult or impossible to find the products in the NVD (and the developer gets blamed when that happens).
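To make the naming-mistake problem concrete, here is a minimal, hypothetical sketch (the vendor and product names are invented for illustration): the CPE 2.3 “formatted string” binding has 13 colon-separated fields, and a discrepancy in just the vendor field is enough to make an exact-match lookup fail.

```python
# Minimal illustration (not a full CPE parser; real CPE strings can
# contain escaped characters that a plain split() would mishandle).

def parse_cpe23(cpe: str) -> dict:
    """Split a CPE 2.3 formatted string into its 13 named fields."""
    fields = ["prefix", "cpe_version", "part", "vendor", "product",
              "version", "update", "edition", "language", "sw_edition",
              "target_sw", "target_hw", "other"]
    return dict(zip(fields, cpe.split(":")))

# Hypothetical example: the developer calls itself "examplevendor_inc",
# but the analyst who created the record used "examplevendor".
official   = "cpe:2.3:a:examplevendor_inc:some_tool:1.0:*:*:*:*:*:*:*"
as_entered = "cpe:2.3:a:examplevendor:some_tool:1.0:*:*:*:*:*:*:*"

# An exact-match search using the official vendor name misses the record.
print(parse_cpe23(official)["vendor"] == parse_cpe23(as_entered)["vendor"])  # False
```

A user searching under the name the developer itself uses would never find the record as entered – which is exactly why CNAs naming their own products would help.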

Of course, NIST can’t complain that CVE.org is abrogating their previous agreement that the NVD would create CPE names and CVSS scores, since for the last 9-10 weeks the NVD has pretty much flatlined when it comes to creating this data itself.

2. CVE.org is going on a PR offensive (my term) to explain these changes to their constituents. Meanwhile, the NVD still hasn’t provided a word of explanation regarding what happened on the fateful day of February 12, when it seems a black hole opened up under the NIST headquarters in Gaithersburg, Maryland, from which the NVD has yet to emerge (not even in the form of Hawking Radiation!).

3. “…CNAs may not realize they can submit their data to the CVE Program via JSON 5.1 and then that data will roll into the NVD.” My ears (OK, my eyes) really perked up when I saw the magic number 5.1; I certainly hope that wasn’t a typo. For background, the CNAs submit CVE reports to CVE.org using the JSON data representation language, in a particular schema. That schema has changed through the years. The current version 5.0 schema was adopted by CVE.org more than two years ago and was just recently implemented by the NVD.

The 5.1 version is much improved, but has (or will have, anyway) one very important feature that the OWASP SBOM Forum requested two years ago: the capability to convey purl along with CPE identifiers. This will be a big deal, since purl is far superior to CPE as an identifier for open source software; see this paper written by the SBOM Forum in 2022.
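One way to see why purl is a better fit for open source: a purl can be constructed mechanically from coordinates the package manager already knows, with no central dictionary to consult. A minimal sketch (the helper below is my own illustration, and it skips the percent-encoding the purl spec requires for special characters):

```python
def make_purl(pkg_type: str, name: str, version: str, namespace: str = "") -> str:
    """Build a package URL (purl) from package-manager coordinates.

    Illustrative only: the purl spec also requires percent-encoding of
    special characters, which this sketch omits.
    """
    ns = f"{namespace}/" if namespace else ""
    return f"pkg:{pkg_type}/{ns}{name}@{version}"

# Anyone who knows the package's coordinates derives the same identifier:
print(make_purl("npm", "lodash", "4.17.21"))
# pkg:npm/lodash@4.17.21
print(make_purl("maven", "log4j-core", "2.17.1",
                namespace="org.apache.logging.log4j"))
# pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1
```

The contrast with CPE is that nothing in a purl depends on an analyst’s naming decision: two parties who know a package’s ecosystem, name and version will always derive the same string, whereas a CPE name has to match whatever entry was created in the NVD’s centrally maintained dictionary.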

However, this doesn’t mean you’ll be able to look up vulnerabilities using purl in CVE.org soon. First, the CNAs will have to be trained on creating purls, and even when the CNAs start adding purls to the CVE reports, each vulnerability database will need to support searches on purls (CVE.org will almost certainly support purl searches much earlier than the NVD does – assuming the NVD even adopts the 5.1 spec). However, at least we know purls will be coming.

To summarize, I’m quite pleased that CVE.org seems to be moving ahead to at least fill the gap caused by the NVD’s still-unexplained work slowdown (although “stoppage” might be an even better description of it). Maybe we won’t need to request that President Biden drop everything else he’s doing and negotiate a treaty between the Department of Commerce (which operates NIST and the NVD) and DHS (which operates CVE.org and CISA). In fact, maybe we’ll have a fully-functioning (and improved!) free government-operated vulnerability database within say 3-6 months, without requiring any extraordinary actions by either Department.

And this reminds me: The first meeting of the OWASP SBOM Forum’s Vulnerability Database Working group will be next Tuesday (April 30) at 11AM Eastern Time (which we hope will be workable for people on the West Coast, in Europe, and even in Israel – who mostly haven’t been able to attend the regular Friday SBOM Forum meetings, since Friday is the beginning of their weekend). We already have a diverse group signed up for the meetings, which will be held biweekly. If you are interested in joining the group (and being able to suggest improvements in documents we create, even if you can’t attend the meetings), drop me an email.

Here is my tentative agenda for the group, but the group will be able to suggest changes to it. I know one of our first topics will be what improvements need to be made to CVE.org for it to become the US government-operated vulnerability database. (Currently, CVE.org’s official role is Alternate Data Provider (ADP) to the NVD; under this change, the NVD would instead become an ADP to CVE.org.) For example, I don’t believe there’s currently a capability to run one-off queries against CVE.org; this is certainly important for raising understanding of vulnerability management among the general public.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Tuesday, April 23, 2024

NERC CIP: My podcast on CIP and the cloud


Industrial Defender recently contacted me about doing a second podcast (the first was a couple of years ago) on a NERC CIP topic of my choosing. I jumped at the chance, since I consider the fact that NERC entities with medium and high impact CIP environments are in essence “forbidden” to utilize the cloud for some of their most important reliability and security workloads to be the biggest NERC CIP-related problem facing the power industry today.

Moreover, I have heard from multiple knowledgeable people in the NERC Regions that this problem is rapidly getting worse and that, if nothing is done about it in the next 2-3 years, there will likely be negative impacts to the security and reliability of the Bulk Electric System – due to the increasing number of software and services vendors that have announced they will soon only support cloud users.

The podcast was just posted, including a complete (and accurate) transcription of my conversation with Ray Lapena of ID. I’d like to hear any comments you have about this podcast.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Friday, April 19, 2024

Would you like to help figure out the best path(s) forward on vulnerability databases?

Many organizations in the software supply chain security community have assumed for years that the National Vulnerability Database (NVD), despite its various problems, is the de facto international standard for vulnerability databases. They also believe it can be relied on going forward to be the “bread and butter” database that meets most of the needs for most of the organizations involved with the community. However, the seeming inability of the NVD to fulfill that role since mid-February 2024, and the fact that there hasn’t even been an attempt to explain what the problem is, have made it clear this is no longer a good assumption.

In the wake of this event, there are three questions that need to be answered. The Vulnerability Database Project of the OWASP SBOM Forum proposes to develop answers to these three questions, based on discussions that will be open to all parties concerned with software supply chain security in general and vulnerability management in particular. If you would like to participate in weekly discussions and help create a document addressing the first two of these questions, and/or if your organization can support this effort through a donation to OWASP (a 501(c)(3) non-profit corporation), please drop me an email at tom@tomalrich.com.

The first question is, “What options are available for NVD users, both to replace services they have been counting on from the NVD and to go beyond what the NVD has traditionally offered?” There are many other vulnerability databases available, both free and paid. These provide one or more of the services that the NVD has provided, but they also go beyond the NVD in various ways. Questions include:

1.        What are these other databases?

2.        How do their offerings map to what the NVD has been offering?

3.        What are their offerings that go beyond the NVD?

4.        In what ways do they differ from the NVD, for example in vulnerability identifiers supported (CVE, OSV, GitHub Security Advisories, etc.), software identifiers supported (CPE, purl, or other), and types of products supported (open source software projects, proprietary products, intelligent devices – as well as sub-categories of these)?

5.        Given that most of these alternative databases do not cover the entire range of what the NVD covers, how can NVD users “mix and match” the different offerings so that, depending on their individual needs, they end up with at least the same level of functionality they previously received from the NVD and hopefully a lot more? And without ending up with a hopeless mishmash of incompatible vulnerability data?

6.        Given that the CVE.org database is the original source of most of the data in the NVD and that its infrastructure is much more robust and modern than the NVD’s, how hard would it be for current NVD users to switch over to using CVE.org as their primary vulnerability database – as one major NVD user has recently done? What could be added to CVE.org to facilitate this switch, such as a more end-user-friendly front end? What would be the advantages of using CVE.org over the NVD, including much earlier support for purl and the fact that the originators of all CVE data – the 300+ CVE Numbering Authorities (CNAs) – are part of CVE.org?

The second question is, “What steps should the US government take with respect to this problem?” These might include:

1.        Doing nothing and hoping the NVD has a miraculous recovery.

2.        Actively investing in the NVD’s infrastructure, which will probably require a complete rebuild from scratch.

3.        In place of the current situation, in which the NVD is the primary vulnerability database and CVE.org is just an “alternate data provider (ADP)” to the NVD, turn this situation around so that CVE.org is the primary database and the NVD is an ADP.

4.        Get out of the vulnerability database business and leave that to the private sector, while maintaining CVE.org as by far the leading provider of vulnerability data – including investing heavily in the CNAs, given their irreplaceable role in the vulnerability identification process worldwide. 

The third question is, “What is the best long-term solution to the vulnerability database problem worldwide?" While there can be many views, Tom believes the following are “self-evident truths” (with apologies to Thomas Jefferson):

1.        Requiring a single uber vulnerability database (“One Database to Rule Them All”) that will somehow gather, harmonize and synchronize data from all other databases is a concept whose time has come and gone. There are many vulnerability databases operated in different ways by different organizations. Let them all continue to operate as they always have. Instead, there needs to be an AI-powered central “switching hub”, which might be called the Global Vulnerability Database (GVD). Queries to the GVD could use any major type of software and vulnerability identifier; the hub would route each query to the most appropriate database or databases and route the response(s) back to the end user. It would also harmonize the responses when needed.

2.        Of course, the GVD needs to be a truly global effort. It cannot be under the control of any single government or private sector organization, although all governments and organizations will be welcome to contribute to it (Tom believes that raising funds to create and maintain this “database” won’t be hard at all, given that nobody but US taxpayers is currently allowed to contribute to the NVD. It isn’t at all surprising that the NVD is chronically underfunded, despite being used worldwide).

3.        Developing the GVD will require a nonprofit organization to manage the process. When (and if) the GVD is running smoothly, operation of the database might be turned over to an organization like the Internet Assigned Numbers Authority (IANA), which manages IP addresses and DNS. Otherwise, the nonprofit organization would continue to operate the GVD in perpetuity.
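The “switching hub” in point 1 above could be sketched as a simple router keyed on identifier scheme. Everything here is hypothetical – the back-end names in the routing table are placeholders, and a real GVD would also have to harmonize the responses it gets back:

```python
def identify_scheme(query: str) -> str:
    """Classify a query by the identifier scheme it uses."""
    if query.startswith("pkg:"):
        return "purl"
    if query.startswith("cpe:"):
        return "cpe"
    if query.upper().startswith("CVE-"):
        return "cve"
    return "unknown"

# Hypothetical routing table: which back-end database(s) serve each scheme.
ROUTES = {
    "purl": ["database_a"],
    "cpe": ["database_b"],
    "cve": ["database_b", "database_c"],
}

def route(query: str) -> list:
    """Return the back ends a query should be forwarded to."""
    return ROUTES.get(identify_scheme(query), [])

print(route("pkg:pypi/django@4.2"))  # ['database_a']
print(route("CVE-2021-44228"))       # ['database_b', 'database_c']
```

The point of the sketch is that the hub itself stores no vulnerability data: the existing databases keep operating as they always have, and the hub only decides where each query belongs.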

The SBOM Forum Vulnerability Database Project will initially focus on the first two questions. The group will collaborate on one or more documents to answer these questions. Rather than wait until the documents are complete, the group will publish a current draft every two months, to maintain interest in the project among the software security community and to invite feedback on the work so far.

When the first two questions are answered and the results have been published, the group can start work on the third question. Since the end result of that effort might be a workable design for the GVD, that effort could easily take multiple years.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


Wednesday, April 17, 2024

Everything you always wanted to know about VEX (and TEA), but were afraid to ask


Two weeks ago, Steve Springett (leader of the OWASP CycloneDX and Dependency Track projects, and recently elected OWASP board member) and I recorded a podcast with Deb Radcliff, whose podcasts are widely followed in the software development community and are sponsored by CodeSecure. The podcast is called “VEXing SBOMs”, and you can find it here. Briefly, here are the main topics that we covered:

1.      We discussed use cases for SBOM and VEX.

2.      Steve discussed how SBOMs have become a natural part of the build pipeline.

3.      I pointed out that IMHO the number one reason why SBOMs are not being distributed to and used by software end users (i.e., the 99.9% - or so - of public and private organizations worldwide whose primary business is not software development) is the fact that there are currently no strict specifications for VEX on the two original VEX “platforms”: Common Security Advisory Framework (CSAF) and CycloneDX.

4.      I also noted that Anthony Harrison of the OWASP SBOM Forum has recently remedied that problem. This is a key step toward the goal that the SBOM Forum hopes to achieve before the end of 2024: starting a proof of concept in which end users benefit from the “full stack” of software component vulnerability management, namely utilization of SBOM and VEX to allow end users to learn about exploitable component vulnerabilities in their software, and ultimately to be able to quickly answer the question, “Where on our network are we vulnerable to (insert name of “celebrity vulnerability” du jour)?” You can read more about the proof of concept in Part 3 of my book (see below).

5.      Steve described the OWASP Transparency Exchange API project, which is described in this draft document. In my opinion, this will be the key enabler of distribution and use of SBOMs and VEX documents.

Thanks for inviting us, Deb!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.