Monday, May 13, 2024

How do we replace the NVD? Should we replace it?

At the OWASP SBOM Forum’s weekly meeting on Friday, we heard what the NVD’s big problem is: it no longer has funding. The collapse on February 12 really was due to a funding shortfall – a massive one, although perhaps not all the way to zero. Some people had suggested that the problem was cutbacks by NIST. I pooh-poohed that idea, since budgets are set before the beginning of the government’s fiscal year, which is October 1.

However, it turns out I didn’t know where the lion’s share of the NVD’s funding comes from – and it doesn’t come from NIST. It was that other source that abruptly cut off the NVD’s funding in February. At that point, the NVD probably had to release some staff members to join other projects at NIST (as I’m sure you know, NIST does lots of work in cybersecurity, so there are probably always openings on other projects; I doubt anyone from the NVD was put on the street, unless they decided to quit). The NVD kept “enriching” a small number of CVEs, but at the meeting on Friday, someone (I think Andrey Lukashenkov of Vulners) said they hadn’t done any enriching for ten days.

This means they (i.e., whoever is still working for the NVD) are now just trying to maintain the database they have. So if you search the NVD for vulnerabilities applicable to a product you use, don’t expect to find any recent ones – and of course, those are the vulnerabilities that most users are concerned about. The NVD without data on recent vulnerabilities is about as useful as a car without a steering wheel.

Why did that other source cut off their funding? I don’t know, and finding out isn’t a big priority for me. What is a priority is deciding where the software security community goes from here, as well as what options are available to organizations concerned about identifying vulnerabilities in the software they use.

As luck would have it, I’d realized a couple of months ago, after the NVD’s problems became apparent, that there were too many individual threads that needed to be pulled for there to be succinct answers to these questions; therefore, there needed to be a group effort to pull the threads and put together a coherent picture. This is why I formed the Vulnerability Database Working Group of the OWASP SBOM Forum. I summarized my idea of the group’s agenda in this post. I still think it’s a good agenda, although the group will decide what we want to discuss and what document(s) we want to produce – and those ideas are likely to change as we go along.

However, there has been one significant change since I wrote that post. Then, it was unclear what (if anything) CVE.org would do to step up to replace the NVD. It’s now clear that they are doing a lot. There have been two important changes:

The first change is that CVE Numbering Authorities (aka CNAs) will now be encouraged to include in their CVE reports the CPE name for the product, the CVSS score and the CWEs. The CNAs – which report to CVE.org – include groups like GitHub and the Linux kernel, as well as software developers like Red Hat and Microsoft, who report vulnerabilities in their own products and in other products within their scope. As the announcement by CVE.org points out, “This means the authoritative source (within their CNA scope) of vulnerability information — those closest to the products themselves — can accurately report enriched data to CVE directly and contribute more substantially to the vulnerability management process.” It never made sense for the NVD to create these items – or to override what the CNA had created, which often happened.

The other significant change is that CVE.org now supports what was previously called v5.1 of the CVE JSON specification, now known as the “CVE Record Format v5.1”. In early 2022, Steve Springett and Tony Turner of the OWASP SBOM Forum submitted a pull request to CVE.org to get purl identifiers included in the CVE spec. We missed the v5.0 spec, but made it into v5.1. This means that, with some training, CNAs will now be able to include purls in their CVE reports (although they may still have to include CPEs), and software users (as well as developers) will be able to find vulnerabilities for open source products using purls. This will also be a boon to open source software vulnerability databases like OSS Index and OSV.
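To give a flavor of what a purl actually is (this is my own illustrative sketch, not anything from the official record format – the full grammar is in the purl spec at github.com/package-url/purl-spec), a purl packs the ecosystem (“type”), name and version of a package into a single string that a CNA can drop into a CVE report:

```python
# Minimal purl parser, for illustration only. The real spec also
# allows namespaces, qualifiers and subpaths, which this toy ignores.

def parse_purl(purl: str) -> dict:
    """Split a simple purl like 'pkg:npm/lodash@4.17.21' into parts."""
    if not purl.startswith("pkg:"):
        raise ValueError("a purl must start with the 'pkg:' scheme")
    rest = purl[len("pkg:"):]
    # The type (npm, pypi, maven, ...) names the package ecosystem.
    type_, _, name_version = rest.partition("/")
    name, sep, version = name_version.rpartition("@")
    if not sep:  # no version given
        name, version = name_version, ""
    return {"type": type_, "name": name, "version": version}

print(parse_purl("pkg:npm/lodash@4.17.21"))
# {'type': 'npm', 'name': 'lodash', 'version': '4.17.21'}
```

Because the type, name and version come straight from the package manager, a developer can construct the purl for a component without consulting any central naming authority – which is exactly what CPE requires and purl avoids.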

Our reasons for advocating purl were described in this white paper, which we published in September 2022 (see especially pages 4-6, where we describe some of the many problems with CPE). In that paper, we also described a way to handle proprietary software in purl (since purl currently applies exclusively to open source software), based on use of SWID tags – but the mechanism for making the SWID tags known to software users was left open. This isn’t a technical problem but an organizational one, since it might require creating a spec for software developers to follow when they want to advertise SWID tags for their products. It would be nice to see that done sometime, but I don’t have the time to lead it now.

The third topic in that paper was an identifier for intelligent devices, since CPE is deficient in that area as well. We suggested the GTIN and GMN identifiers, which are now used widely in global trade. I didn’t think we were even going to push those identifiers at the time, yet Steve and Tony must have included them in the pull request, because CVE says they’re supported in v5.1 as well! If CNAs start including these in CVE reports for devices, that might significantly change the whole “discipline” of vulnerability management for devices, which is frankly not making much progress now, since only a very small percentage of device makers report vulnerabilities in their devices to CVE.org today.

I’m encouraged that CVE.org is picking up their game. However, an organization that currently uses the NVD exclusively for vulnerability data would not be well advised to simply switch their total dependence over to CVE.org, since the “front end” capabilities needed for those organizations to make use of the data are less robust in CVE than they are (or were) in the NVD.

We will be discussing these and similar issues in the OWASP Vulnerability Database Working Group in our meetings tomorrow and every two weeks thereafter. If you would like to join the group, drop me an email.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Saturday, May 11, 2024

NERC CIP and the cloud: The Regions clearly state the problem

I and other consultants have been pointing out the serious problems being caused by the fact that the current CIP standards effectively prohibit use of the cloud for medium and high impact systems. This is not because the requirements explicitly forbid cloud use – indeed, the current requirements say nothing at all about the cloud - but because cloud service providers would never be able to provide the evidence required for a NERC entity to prove compliance with prescriptive CIP requirements like CIP-007 R2 and CIP-010 R1.

Now, two highly respected staff members from the Regional Entities, Lew Folkerth of RF and Chris Holmquest of SERC, have written an excellent document that states the problem very clearly. It was published in the newsletters of both Regions: SERC (https://serc1.org/docs/default-source/outreach/communications/serc-transmission-newsletter/2024-newsletters/2024-05-newsletter/n-the-emerging-risk-of-not-using-cloud-services_2024-05.pdf?sfvrsn=e6923606_2) and RF (https://www.rfirst.org/resource-center/the-emerging-risk-of-not-using-cloud-services/).

I am reproducing the article below with permission from both Regions. I have highlighted two paragraphs I think are especially important. The one problem I have with this document is that it doesn’t emphasize the most serious issue here: It will likely take at a minimum 5-6 years for the new standard(s) to be developed and approved, along with changes to the NERC Rules of Procedure that will probably be required, and for all of this to take effect.

Yet, given the accelerating pace at which software and service providers are moving to the cloud, it is likely there will be negative impacts to grid security and reliability within a couple of years. There will probably need to be a shortcut available soon that will allow some NERC entities to continue using software and services (especially cybersecurity services) they would otherwise have to stop using.  We don’t want the NERC CIP standards, which were implemented to improve grid reliability and security (and have done so, to be sure) to become the biggest impediment to that improvement.

If you have questions for either Region, you can get them answered here: SERC (https://www.serc1.org/contact-us) and RF (https://www.rfirst.org/contact-us/). Also note that the project page for the new NERC “Risk Management for Third Party Cloud Services” standards development project – which will begin operations in Q3 – has been created. You should bookmark this and check it regularly.  

 

THE EMERGING RISK OF NOT USING CLOUD SERVICES

By: Chris Holmquest, Senior Reliability and Security Advisor, SERC; and Lew Folkerth, Principal Reliability Consultant, RF

In the ERO, we are seeing forces that foretell an inevitable move to cloud-based services for many operational technology (OT) applications and services. Cloud technology has been advancing for many years, and software and service vendors are now migrating their products to take advantage of this new technology. Even when our industry addresses the security concerns of this migration, there will still be compliance concerns. We will share the efforts underway to identify the risks to reliability, security, and compliance that our industry must address before we can move forward in this area.

Security challenges for on-premises OT systems

Vendors of security monitoring, asset management, work management, and other essential services are moving toward cloud-based services at a very rapid pace with their applications and infrastructure. This brings a new risk to light: soon we may be seeing end-of-life notices for on-premises systems, which translates to lessened or non-existing support, including security patches. Some members of our industry have already observed that new and important features are being implemented only in the cloud-based offerings.

Entities are looking at the potential benefits that cloud-based software and services can bring. As entities in our industry are challenged to acquire sufficient resources to manage their reliability, security, and compliance risks, cloud services can offer attractive solutions to manage these risks while lowering costs in capital investment and support.

Moving to the cloud presents risks as well, not the least of which is being confident that your systems and data are secure. Even when you are confident in the security of your systems and data, you will still face compliance risks. 

Compliance challenges for OT cloud services

Except for BES Cyber System Information, the use of cloud services will not be possible for high and medium impact BES Cyber Systems under the present CIP Standards, because compliance risk would be increased beyond an acceptable level. New Reliability Standards will be required, and those standards will need to be risk-based. There are too many variables in cloud environments to be able to write prescriptive standards for these cases.

Your compliance processes will need to be very mature and integrated with operational processes and procedures. Internal controls will become even more important.

Auditing processes will need to be adapted to cloud environments to determine the type, quality and quantity of evidence that will be needed to provide reasonable assurance of compliance. 

The path forward

There are efforts underway to help with this complex dilemma. We are looking at these various issues and have formed an ad-hoc team of Electric Reliability Organization and Federal Energy Regulatory Commission staff, cloud service provider vendors, industry consultants, training experts, and electric industry security, compliance, and management personnel. This team is providing ad-hoc support to other existing groups working to advance the use of cloud technologies. So far, these efforts include work on a series of industry webinars to address issues with using cloud in our OT and CIP environments. Awareness of cloud technologies for our systems is crucially important, and these webinars will be designed for a broad audience. Efforts also include a field test of a cloud-based system and investigating third-party assessments, which may be essential to accommodate the CIP Standards with a cloud system.

There is a formal NERC subcommittee under the Reliability and Security Technical Committee called the Security Integration and Technology Enablement Subcommittee (SITES). Registered entity staff and vendors are members of this group, and they have published a white paper called “BES Operations in the Cloud” that we recommend.

A SITES sub-team, New Technology Enablement (NTE), is in the process of creating a series of white papers to help move the standards development effort from a stance that follows technology developments after the fact, to a leading process where standards development is part of early adoption of applicable technologies. The goal of NTE is to enable use of the best available tools and techniques in our most critical systems. Their first effort will be a paper titled “New Technology Enablement and Field Testing.” 

Getting involved

The ability to use cloud services to reduce security risk and to improve reliability and resilience is important to the future of our industry.

We suggest that you read the SITES white paper and consider volunteering to participate in the SITES and/or NTE groups if you would like to contribute.

SANS, the well-known security training organization, will be hosting the series of webinars mentioned above. Please watch for the announcements for these webinars. Also, there is a recorded SANS Summit Panel discussion (link below) of this risk and possible directions forward.

A new standards development project, Risk Management for Third-Party Cloud Services, has been established (see link below). This project is scheduled to become active in the third quarter of 2024.

Please stay abreast of these developments and consider how your knowledge and industry experience can contribute to these efforts. 

References

• Security Integration and Technology Enablement Subcommittee (SITES)

• White paper: BES Operations in the Cloud

• SANS Summit Panel – We Hear You Cloud and Clear

• 2023-09 Project – Risk Management for Third-Party Cloud Services


 

Thursday, May 9, 2024

Is it time to seriously discuss the Global Vulnerability Database?


Last August, I wrote a post called “A Global Vulnerability Database”. In the post, I wrote about a disappointing experience the SBOM Forum (now the OWASP SBOM Forum) had recently had with Tanya Brewer of NIST, the leader of the NVD. We had reached out to her in early 2023 and offered to help the NVD, especially regarding implementing the recommendations in the white paper we published in 2022 on how to fix the NVD’s problems with software naming.

We first met with Tanya in April 2023. She said then that she’d like to put together a “public/private partnership” so that we and one or two private corporations – who had also offered to help them – would have a structured way to do that. When she talked to us again in June, she had worked out with NIST’s lawyers the idea for a “Consortium” which organizations could join to help the NVD – although it seemed the main help she wanted was for companies to provide her warm bodies, which were attached to minds that knew something about coding.

Those bodies would need to be ensconced at the NVD for at least six months. The first couple of months would be spent learning some obscure programming language I’d never heard of (which says something, since I wrote my first couple of programs on punch cards). Offhand, suggesting that young, ambitious coders spend six months immersed in a decades-old language, probably used nowhere but the NVD, didn’t seem to me to be wise career advice. However, we all appreciated her enthusiasm.

She described in detail her plans to announce the Consortium in June, have it published in the Federal Register in August, and get it up and running by the end of the year. Those plans seemed very unrealistic but again, we appreciated her spirit.

However, that spirit seemed to dissipate quickly; in fact, the next we heard about the Consortium was in NIST’s announcements about the NVD in late February, when NIST pointed to the Consortium as the cavalry that was at that minute galloping through the sagebrush on their way to rescue the NVD from its problems. However, it seems the cavalry lost its way, because – despite Tanya’s promise at VulnCon in late March that the Consortium would be announced in the Federal Register imminently – nothing more has appeared on that front (I think the cavalry got enticed by Las Vegas when they galloped through it. They haven’t been heard from since).

By the time I wrote the post last August, I was already disappointed that we hadn’t heard back at all from Tanya, and I began to wonder if just maybe the NVD wasn’t quite the pillar of the cybersecurity community that we believed it was.

This was why I started thinking more about an idea we’d talked about a few times in the SBOM Forum: the need for a vulnerability database that wouldn’t be subject to the vagaries of government funding, in an era when a bill to name a post office after George Washington might prove too controversial to get through Congress and the US has to decide every couple of years whether it wants to pay its debts at all.

In our discussions, we had agreed:

1. This should be designed as a global database from the start. Governments and private sector organizations worldwide would be welcome to contribute to it, but we didn’t want the database to be dependent on one primary source of funding, especially a government one. Been there, done that, got the T-shirt.

2. The database should follow the naming conventions we suggested in our 2022 white paper, or something close to them. Of course, there are still a lot of details to be filled out regarding those suggestions.

3. Of course, putting all this data in one database, and maintaining it over time, would require a huge effort and a lot of support (including financial support). However, given that so many organizations worldwide have used the NVD for free for many years – with an overall good experience, and without once being asked to contribute a dime – I was, and remain, sure that the support will be there when we ask for it. My guess is there’s a lot of pent-up “contributors’ demand” that needs to be satisfied.

4. The database would need to be developed and implemented by a separate non-profit corporation, but once it was up and running smoothly, it could be turned over to an international organization like IANA, which assigns IP addresses and runs DNS (how many hundreds or thousands of times do you use IANA’s services every day, without once thinking about who enables all of that?).

While I didn’t write about the GVD again until November of last year, I kept thinking about it. One thing I realized was that the idea of a “harmonized” database – with a single type of identifier for software and devices, as well as a single identifier for vulnerabilities (many people don’t realize there is any vulnerability identifier besides CVE, but there are many, albeit most not widely used) – was not only very difficult to implement, but also completely missed the point: there are diverse identifiers because their subject matter (in this case, software products and intelligent devices, as well as vulnerabilities) is diverse.

A good example of this is purl vs. CPE as an identifier for open source software (currently, purl only works with OSS, although it has literally taken over the OSS world). A single OSS product and version may be available in different package managers or code repositories, sometimes with slight differences in the underlying code. The purl for each package manager and repository will be different from the others (since the mandatory purl “Type” field is based on the package manager). Yet, there will be only one CPE in the NVD and it won’t refer to the package manager, since there’s no CPE field for that.

Thus, there can never be a one-to-one “mapping” of CPEs to purls (or vice versa), meaning harmonization of OSS identifiers in a single database would be impossible. Similar considerations make it impossible to have a single identifier for vulnerabilities, with all other identifiers mapped to it.
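As a concrete (and entirely hypothetical – the product and vendor names are made up) illustration of why the mapping fails: the same library version distributed through two package managers gets two distinct purls, while CPE, having no package-manager field, can only carry one entry:

```python
# Hypothetical example: one library version, two distribution channels.
# The purl "type" field names the package manager, so each channel
# gets its own purl.
purls = [
    "pkg:npm/example-lib@1.2.3",   # the npm distribution
    "pkg:pypi/example-lib@1.2.3",  # the PyPI distribution
]

# CPE 2.3 has fields for vendor, product and version, but none for
# the package manager, so a database would carry just one entry:
cpe = "cpe:2.3:a:example_vendor:example-lib:1.2.3:*:*:*:*:*:*:*"

# One CPE thus corresponds to two (or more) purls: there is no
# one-to-one mapping, so "harmonizing" on a single identifier would
# erase the distinction between the two distributions.
print(f"{len(purls)} purls correspond to 1 CPE")
```

If the npm and PyPI builds ever differ in their underlying code – which happens – a vulnerability might affect one distribution and not the other, and only the purls can express that.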

But I also knew that harmonization of identifiers – while probably an absolute requirement of database technology in, say, the 1970s – is hardly necessary today; the last time I checked, it was 2024. Today, a single database can very easily hold multiple identifiers for a single product, without breaking a sweat.

But that led to another idea: Why do we need a single database at all? After all, since the different identifiers are often used in specialized databases dedicated to particular types of vulnerabilities or software, and since these databases are often compiled by people with specialized knowledge of the contents, it would be a shame to try to homogenize those data types and especially those people. Instead of the richness and diversity available today, we would get a bland product produced by bland people, in which a lot of the detail had been flushed away in the name of harmonization.

However, even though we do have a rich variety of vulnerability databases available today, it’s hard for most people to understand how they can be used together (the SBOM Forum’s Vulnerability Database Project will be cataloging them and providing suggestions on how they can be used together, since in the near term there will be no single database available with the scope of the NVD; we may start discussing the GVD in a few months. If you’re interested in joining that group – which meets bi-weekly on Tuesdays at 11AM ET – drop me an email).

That’s why there needs to be a single intelligent front end that directs queries to the appropriate database(s). That front end will be called the GVD, but – and please keep this a close secret between you and me – it will in fact simply be like the Wizard of Oz: a single man standing behind the curtain, turning knobs and pulling levers to give the impression of a single, massively efficient database. Pretty slick, no?
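For what it’s worth, here’s a toy sketch of what that man behind the curtain might do. Every backend name below is a hypothetical placeholder, since no such front end exists today: the front end classifies the identifier in a query and forwards it to whichever specialized database understands it.

```python
# Toy sketch of a GVD "front end": classify the identifier in a
# query and route it to the database best suited to answer it.

def classify_identifier(ident: str) -> str:
    """Guess what kind of identifier a query contains."""
    if ident.startswith("pkg:"):
        return "purl"   # open source package: OSV, OSS Index, etc.
    if ident.startswith("cpe:"):
        return "cpe"    # proprietary product or device record
    if ident.upper().startswith("CVE-"):
        return "cve"    # vulnerability record: CVE.org
    return "unknown"

# Imaginary routing table: one entry per participating database.
ROUTES = {"purl": "osv", "cpe": "nvd_mirror", "cve": "cve_org"}

def route(ident: str) -> str:
    """Return the (hypothetical) backend that should handle the query."""
    return ROUTES.get(classify_identifier(ident), "full_text_search")

print(route("pkg:npm/lodash@4.17.21"))                       # osv
print(route("CVE-2021-44228"))                               # cve_org
print(route("cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"))  # nvd_mirror
```

The point of the sketch is that the "GVD" owns no vulnerability data at all; it only owns the routing table, which is a far smaller thing to fund and maintain than a database with the NVD’s scope.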

So, I wrote this post in November. I’m pleased to say that it was met with massive, universal…indifference. Of course, that happens to a lot of my posts. I didn’t expect anything different this time, since there was nothing on the horizon that might make people think we needed to start thinking about a different long-term database solution…

…Until February 12, when the NVD seems to have been swallowed by a black hole – and three months later, we still don’t even have a coherent explanation for the problem, let alone the beginning of a fix. Early this week, Brian Martin posted a long, thoughtful analysis of my November post on LinkedIn. He raised some very good questions. I promised to answer them in a few days, but then decided I’d first like to put up a post that explains how I and other members of the OWASP SBOM Forum came up with the idea for the GVD.

Here’s that post. I’ll put up a new post with the answers to Brian’s questions by early next week. Please contain your excitement until then.


 

Monday, May 6, 2024

It seems I was wrong about medical devices

In January, I pointed out that at least eight of the top ten medical device manufacturers worldwide have either never reported a vulnerability to CVE.org or have only reported a small number of them. I said this not as direct criticism of medical device makers, but as illustration of the fact that intelligent device makers of all types are mostly not reporting vulnerabilities at all. After all, if medical device manufacturers – which are subject to the most stringent cybersecurity regulations worldwide – aren’t reporting vulnerabilities, what hope is there that manufacturers of baby monitors will do so?

Why is it important that the developer of the software or the manufacturer of a device report the vulnerability themselves? The reason is simple: a very high percentage of vulnerabilities are reported by the developer or manufacturer. If they don’t report a vulnerability, in most cases nobody else will. This means that, when a user of the device searches a vulnerability database for vulnerabilities found in the device, they will never find any.

I want to note that nobody is saying that any software developer or device manufacturer should report a vulnerability for which a patch isn’t yet available, except in extraordinary cases like when a vulnerability is already being widely exploited. In those extraordinary cases, user organizations like hospitals should be able to learn that products they use have that vulnerability, so they can mitigate the threat using other measures (like removing the device from their network altogether), pending availability of the patch.

At least one major medical device maker (MDM) has told me they report vulnerabilities in their devices on their customer portal so their customers can learn about them, although they admit this isn’t a foolproof method to keep this information out of bad hands. They may correlate one of those vulnerabilities with a published CVE number, but if they don’t report the vulnerability to CVE.org, a search on a public vulnerability database will never yield the fact that their device is vulnerable to that CVE.

Of course, this means that nobody other than a customer of a medical device can learn of vulnerabilities in it, and nobody (whether a customer or otherwise) will be able to compare competing devices to learn whether they have the same vulnerabilities. But of course, this might be a good thing. After all, if none of your competitors are reporting vulnerabilities (and there’s no way in most databases to tell the difference between a device that’s never had a vulnerability and one that’s loaded with them, but has never reported a single one to CVE.org), who wants to stand out by reporting them?

At our most recent OWASP SBOM Forum meeting, we were discussing this problem, and I said I didn’t think there was a good excuse for the fact that the MDMs aren’t usually reporting vulnerabilities in their devices. At that point, the device security manager for one of the most prestigious hospital organizations in the US provided a very good reason why the MDMs don’t report them (and I’ll point out that I’ve known this individual through the NTIA and CISA SBOM efforts since 2020. In general, he doesn’t trust MDMs as far as he can throw them):

1. Hospitals, like many other organizations (although probably more so), are seriously backlogged in applying security patches. This is partly because it is very hard to bring a medical device down when it’s time to apply a patch, since devices are often hooked up to patients – and nobody wants to see a technician disconnect Grandma’s infusion pump to apply a patch!

2. If the MDM follows the usual practice of reporting a vulnerability only after they have released a patch for it, it’s likely there will be a significant time lag between the vulnerability report and when most devices are protected by the patch.

3. Of course, this would pose a serious risk to patients. And I'll point out that the same reasoning applies to electronic relays that control the power grid, devices in pipeline pumping stations, etc.

But another serious risk to patients is being hooked up to a device that carries vulnerabilities that have been there for a year or two, whose existence has almost certainly become known to the bad guys by now. There needs to be some deadline by which a hospital must either patch the device or take another mitigation step, like removing the device from its network altogether. Maybe that would be six months for vulnerabilities that aren’t being actively exploited, but three months for vulnerabilities that are on the CISA KEV (Known Exploited Vulnerabilities) list.

If the hospital can’t meet those deadlines, they’ll have to invest in enough extra devices that no vulnerable device sits on their network forever, without anyone outside of the manufacturer and the hospital knowing about it.
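A deadline scheme like the one I’m suggesting could be expressed as a trivially simple policy rule. The six- and three-month figures below are just my strawman numbers from this post, not anything in a regulation:

```python
from datetime import date, timedelta

def patch_deadline(reported: date, on_kev: bool) -> date:
    """Strawman deadline for patching (or otherwise mitigating) a
    device vulnerability: 3 months if the CVE is on the CISA KEV
    (Known Exploited Vulnerabilities) list, 6 months otherwise.
    Months are approximated as 30 days for simplicity."""
    months = 3 if on_kev else 6
    return reported + timedelta(days=30 * months)

print(patch_deadline(date(2024, 5, 6), on_kev=True))   # 2024-08-04
print(patch_deadline(date(2024, 5, 6), on_kev=False))  # 2024-11-02
```

The rule itself is the easy part; the hard part, as the hospital example shows, is having enough spare devices on hand that the deadline can actually be met.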
