Sunday, April 28, 2024

The NVD fades away


I didn’t know whether to laugh or cry when I saw the NVD’s most recent announcement (last week) about…what is this about, anyway? Here is what it said:

NIST maintains the National Vulnerability Database (NVD), a repository of information on software and hardware flaws that can compromise computer security. This is a key piece of the nation’s cybersecurity infrastructure.
There is a growing backlog of vulnerabilities submitted to the NVD and requiring analysis. This is based on a variety of factors, including an increase in software and, therefore, vulnerabilities, as well as a change in interagency support.
Currently, we are prioritizing analysis of the most significant vulnerabilities. In addition, we are working with our agency partners to bring on more support for analyzing vulnerabilities and have reassigned additional NIST staff to this task as well.
We are also looking into longer-term solutions to this challenge, including the establishment of a consortium of industry, government, and other stakeholder organizations that can collaborate on research to improve the NVD.
NIST is committed to its continued support and management of the NVD. Currently, we are focused on our immediate plans to address the CVE backlog, but plan to keep the community posted on potential plans for the consortium as they develop.

I ultimately decided that crying was the appropriate response. Because this announcement makes it clear that

1.      The NVD’s staff has no idea what the cause of their current problems is;

2.      They don’t have a plan for finding and fixing the problem; and

3.      They’re not at all concerned about the effect this is having on their remaining supporters (both of them), despite their proud assertion that the NVD is “a key piece of the nation’s cybersecurity infrastructure.”

You’ll notice the words “sorry” and “apologize” are nowhere to be found in this announcement. Don’t you think that the NVD staff should be concerned that this “key piece” they’ve been furnishing for so long is no longer working, and that a huge number of cybersecurity professionals worldwide, who had thought the NVD would always be around, are now officially SOL (that’s a technical acronym)? The absence of those two words with magical properties (at least according to my mother) tells the whole story.

Apologies are certainly due, especially for insulting our intelligence with these ridiculous assertions:

1. The problem is that there is “a growing backlog of vulnerabilities” – i.e., this problem has been gradually building over time. In fact, as Patrick Garrity’s analysis has shown, the NVD was “analyzing” CVE reports at roughly the same rate that new reports were appearing until February 12 of this year. On that day, the number of CVEs “awaiting analysis”, which had been effectively zero so far this year, started trending rapidly upward (the red line in the graph in this post). At the same time, the number of CVEs analyzed almost flatlined.

In other words, on February 12 the NVD’s analysis of CVE reports essentially stopped. Analysis is what adds the CPE names required to determine which product a CVE applies to; a CVE report without a machine-readable identifier for the affected products, like CPE, is literally like a car without a steering wheel. Yes, it will move, but you would be a fool to expect good results from driving it.

I checked back with Patrick last week and asked if there had been any significant change in the rate of new CVEs being analyzed (which had dropped to less than a tenth of the previous rate in February). Patrick had good news and bad news. The good news is that indeed, the rate at which new CVEs are being analyzed is not on a flat line anymore. The bad news is that the line is declining, not increasing – meaning that the NVD’s productivity has declined from the already low rate it’s been at since February. In fact, someone told me on Friday that the NVD had analyzed exactly one CVE during all of last week. I guess that alone is good news, since it means the rate can’t decline any further!

2. “This is based on a variety of factors…”. If it were really a variety of factors, the problem would have been building for some time, rather than occurring on one day in February.

3. “…including an increase in software and, therefore, vulnerabilities…” Of course! That must be the reason for this problem! After all, how could anyone have anticipated that the amount of software, and therefore the number of vulnerabilities, would increase - other than the fact that both have been increasing every year since programmable computers first appeared in the early 1950s?

4. “…as well as a change in interagency support.” Translation: NIST’s budget for the current fiscal year (which started last October) was cut by 12% in March, when the budget was finally approved by Congress. I agree it’s appalling that federal agencies like NIST don’t really know what their budget for the fiscal year is until the year is well underway. In fact, Tanya Brewer of NIST (the leader of the NVD project) told the OWASP SBOM Forum last spring that the NVD usually doesn’t get its share of the NIST budget until the summer (i.e., 8 or 9 months into the fiscal year), even when Congress has approved the budget by the end of the calendar year (i.e., about 3 months into the fiscal year), as it usually does. However, since this situation has recurred every year recently, does it really explain the sudden drop-off in February?

5. “In addition, we are working with our agency partners to bring on more support for analyzing vulnerabilities and have reassigned additional NIST staff to this task as well.” This would be good news if the rate of vulnerabilities analyzed were increasing – but it’s not. It seems that the more people NIST throws at this problem, the worse it becomes! What they’re really saying is, “You won’t see positive results from the additional staff for some time – but just bear with us. And cross your fingers that one of the CVEs that’s still awaiting analysis isn’t the next log4shell.”

6. “We are also looking into longer-term solutions to this challenge, including the establishment of a consortium of industry, government, and other stakeholder organizations that can collaborate on research to improve the NVD.” Ah, yes, this is my favorite part of the announcement – and it’s been in the announcement from when the first one was put up in late February (after the problem had been ongoing for a couple of weeks). “Don’t worry, the cavalry is on the way as we speak. They’ll be able to fix everything (even though we still don’t know what needs to be fixed).”

The idea of a Consortium came out of the SBOM Forum’s two discussions with Tanya last spring. In the first discussion (in April), we asked how we could help the NVD implement the recommendations in our 2022 white paper on the problems with CPE and how they can be solved. Her answer was that the best way would be through some sort of public-private partnership, but she needed to talk to the NIST lawyers to find out how that would work.

She talked to us again a month or two later and announced that the right way to do this would be to form a “Consortium” of organizations (including both private-sector and public-sector entities) interested in helping the NVD. She outlined a specific set of steps that would be required:

1.      She would post some videos in July, explaining her ideas for the Consortium and soliciting comments on those ideas.

2.      She would study these comments and draft her statement about the purpose of the Consortium, which would form the basis for the required posting in the Federal Register.

3.      She expected the notification to appear in the FR in August, at which point organizations could start contacting someone on the NVD team to discuss how to join the Consortium.

4.      There would be a number of steps required to join the Consortium, the first being NIST’s decision that they want your organization to participate (I guess you can’t take that for granted!).

5.      By far the most important of the steps was that the organization would have to negotiate a customized Cooperative Research and Development Agreement (CRADA) with NIST. Doing so would require agreeing on an area of expertise that the organization has that would benefit NIST, and on how NIST would be able to take advantage of that expertise.

Tanya told us (in June 2023) that she thought the Consortium would be up and running by the end of the year. We all thought that was wildly optimistic (the CRADA negotiation itself sounds like a six-month effort at least), but we appreciated that she wanted to do this.

Or at least, we thought she wanted to do this. I didn’t hear (or read) anything from Tanya on the Consortium until the NIST announcement in February that pointed to the Consortium as the solution to the NVD’s problems. A month after that announcement, Tanya stated at VulnCon (at the end of March, six weeks after the NVD’s problems started) that she expected an explanation for the NVD’s problems to be posted within ten days (she said they knew what they wanted to say but needed to get the required approvals from higher-ups). She also said she expected the announcement of the Consortium would appear in the Federal Register in two days.

Needless to say, neither of these promises was kept, any more than the ones Tanya made to the OWASP SBOM Forum last June. And there was no explanation or – heaven forbid! – apology for that fact.

People familiar with the workings of the federal government have told me that Tanya’s idea for a Consortium isn’t a bad one on its face, but they find it hard to believe the Consortium would be up and running in anything less than 2-3 years.

Moreover, what will the Consortium do when they meet? The current problem is almost certainly due to the fact that the NVD’s infrastructure is a couple of decades old and seems to be almost completely lacking in redundancy. What can be done to fix that, other than ripping it out and replacing it with something newer? And if that’s the answer, why wait for a Consortium to point out something that’s painfully obvious today?

Fortunately, I’m pleased to announce that the cavalry actually is on the way. They’re not flying the flag of the Consortium, but of CVE.org, the database in which most of the data in the NVD originates (it was formerly known simply as “MITRE”, and MITRE Corp. still operates it). CVE.org will put up a blog post within two weeks detailing what’s going on, but my last post should give you some idea of that (not that I’m privy to all of the details, of course).

Even better, you can join the SBOM Forum’s Vulnerability Database working group, which will hold its first bi-weekly meeting on Tuesday, April 30 at 11AM Eastern Time. We’ll be discussing (with at least one member of the CVE.org Board) what CVE.org offers today and what needs to be added, both to replace the functionality of the late, lamented NVD and to go beyond it. There are lots of things you can do if you’re not constrained by two decades of…stuff.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Friday, April 26, 2024

Maybe there’ll be a happy ending to the NVD story yet!

It seems almost normal that a French citizen would follow goings-on in the US government having to do with vulnerability management much better than US citizens – or at least the US citizen who writes this blog (and no, that US citizen’s name isn’t ChatGPT!). Since I connected with Jean-Baptiste Maillet (JB for short) on LinkedIn earlier this year, I’ve learned a lot of things from him about vulnerability management and the vagaries of CVE, CPE and other TLAs (three-letter acronyms).

Moreover, he has curious reading habits. Early this week, he put up a post on LinkedIn about the meeting minutes of the CVE.org board on April 3. He knew I’ve been speculating a lot recently that the CVE.org database (formerly called MITRE, and still operated by contractors from MITRE Corp.) would be a fairly easy substitute for the NVD. This is both because CVE.org is much more modern and fully redundant (neither of which adjectives applies to the NVD), and because it’s the original source of most of the data in the NVD.

Even given those facts, I’ve been cautious about predicting that CVE.org would replace the NVD as the US government’s go-to vulnerability database. I reasoned that, since the only “boss” over both the NVD and CVE is one Joseph Biden – and Mr. Biden seems to have more weighty issues on his mind nowadays than the travails of the software supply chain security industry – the likelihood that this switch would be made within, say, the next two years was quite low.

However, I was quite pleased by what JB reported from reading those minutes (which I’ve never even thought to read):

1. “The CVE Program will be reaching out to CNAs (the top 10 code-owning CNAs by number of publications) to make sure they are aware that they can submit enriched data (e.g., CPE, CWE, CVSS) directly to the CVE Program, rather than submitting it separately to the NVD.”

This is quite significant: The CVE Numbering Authorities (CNAs) create virtually all the CVE reports that go into CVE.org, the NVD and lots of other public and private databases worldwide. Until recently, the CNAs have been either not able or not allowed (depending on who you talk to) to create CPE names (CPE is the only way software can be identified in the NVD) and CVSS scores for their CVE reports.

NIST, which runs the NVD, for a long time discouraged the CNAs from creating CPEs. And if a CNA created a CVSS score, NIST would create its own score, almost always higher than the CNA’s. What’s odd about this is that CNAs are often large software developers (Red Hat, Oracle, Microsoft, HPE, Schneider Electric, etc.), and most of the CVE reports they create are for their own products. Why shouldn’t NIST have let CNAs name their own products? Some CNAs have complained that the NIST staff often make mistakes in creating CPE names; of course, that makes it difficult or impossible to find those products in the NVD (and the developer gets blamed when that happens).
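The consequence of a mistyped CPE name is easy to see with the NVD’s own REST API (version 2.0), which matches on the exact `cpeName` string you pass it. Here is a minimal sketch in Python; the CPE strings are illustrative, and I haven’t verified them against the live database:

```python
from urllib.parse import urlencode

NVD_CVE_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def build_nvd_query(cpe_name: str) -> str:
    """Build an NVD API 2.0 query URL that returns only the CVEs
    tagged with exactly this CPE name."""
    return NVD_CVE_API + "?" + urlencode({"cpeName": cpe_name})

right = build_nvd_query("cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*")
# One typo in the vendor field and the lookup silently matches nothing:
wrong = build_nvd_query("cpe:2.3:a:apache_software:log4j:2.14.1:*:*:*:*:*:*:*")
print(right)
```

The two URLs differ only in the vendor field, but they address entirely different (and for the second, almost certainly empty) result sets. That is what a bad CPE name costs: the product simply can’t be found.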

Of course, NIST can’t complain that CVE.org is abrogating their previous agreement allowing the NVD to create CPE names and CVSS scores, since for 9-10 weeks the NVD has pretty much flatlined when it comes to creating this data itself.

2. CVE.org is going on a PR offensive (my term) to explain these changes to their constituents. Meanwhile, the NVD still hasn’t provided a word of explanation regarding what happened on the fateful day of February 12, when it seems a black hole opened up under the NIST headquarters in Gaithersburg, Maryland, from which the NVD has yet to emerge (not even in the form of Hawking Radiation!).

3. “…CNAs may not realize they can submit their data to the CVE Program via JSON 5.1 and then that data will roll into the NVD.” My ears (OK, my eyes) really perked up when I saw the magic number 5.1; I certainly hope that wasn’t a typo. For background, the CNAs submit CVE reports to CVE.org using the JSON data representation language, in a particular schema. That schema has changed through the years. The current version 5.0 schema was adopted by CVE.org more than two years ago and was just recently implemented by the NVD.

The 5.1 version is much improved, but has (or will have, anyway) one very important feature that the OWASP SBOM Forum requested two years ago: the capability to convey purl along with CPE identifiers. This will be a big deal, since purl is far superior to CPE as an identifier for open source software; see this paper written by the SBOM Forum in 2022.

However, this doesn’t mean you’ll be able to look up vulnerabilities using purl in CVE.org soon. First, the CNAs will have to be trained on creating purls, and even when the CNAs start adding purls to the CVE reports, each vulnerability database will need to support searches on purls (CVE.org will almost certainly support purl searches much earlier than the NVD does – assuming the NVD even adopts the 5.1 spec). However, at least we know purls will be coming.
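To make the point concrete, here is a sketch of what a CVE record carrying both identifier types might look like, and how a database could pull the identifiers out. The field names below are illustrative only; they are not copied from the published CVE JSON 5.x schema, which nests affected products differently:

```python
import json

# Hypothetical record fragment: a CVE carrying both a CPE and a purl
# for the same affected product.
record = json.loads("""
{
  "cveMetadata": {"cveId": "CVE-2021-44228"},
  "affected": [
    {
      "vendor": "Apache",
      "product": "log4j-core",
      "cpes": ["cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"],
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
""")

def identifiers(rec):
    """Collect every machine-readable product identifier in the record."""
    ids = []
    for entry in rec.get("affected", []):
        ids.extend(entry.get("cpes", []))
        if "purl" in entry:
            ids.append(entry["purl"])
    return ids

print(identifiers(record))
```

A database that indexes both lists can answer a query phrased in either identifier, which is exactly why carrying purl alongside CPE matters for open source lookups.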

To summarize, I’m quite pleased that CVE.org seems to be moving ahead to at least fill the gap caused by the NVD’s still-unexplained work slowdown (although “stoppage” might be an even better description of it). Maybe we won’t need to request that President Biden drop everything else he’s doing and negotiate a treaty between the Department of Commerce (which operates NIST and the NVD) and DHS (which operates CVE.org and CISA). In fact, maybe we’ll have a fully-functioning (and improved!) free government-operated vulnerability database within say 3-6 months, without requiring any extraordinary actions by either Department.

And this reminds me: The first meeting of the OWASP SBOM Forum’s Vulnerability Database Working group will be next Tuesday (April 30) at 11AM Eastern Time (which we hope will be workable for people on the West Coast, in Europe, and even in Israel – who mostly haven’t been able to attend the regular Friday SBOM Forum meetings, since Friday is the beginning of their weekend). We already have a diverse group signed up for the meetings, which will be held biweekly. If you are interested in joining the group (and being able to suggest improvements in documents we create, even if you can’t attend the meetings), drop me an email.

Here is my tentative agenda for the group, but the group will be able to suggest changes to that. I know one of our first topics will be what improvements need to be made to CVE.org for it to become the US government-operated vulnerability database (as opposed to its current official role as an Alternate Data Provider, or ADP, to the NVD; instead, the NVD would become an ADP to CVE.org). For example, I don’t believe there’s currently the capability to run one-off queries against CVE.org; this is certainly important for raising understanding of vulnerability management among the general public.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Tuesday, April 23, 2024

NERC CIP: My podcast on CIP and the cloud


Industrial Defender recently contacted me about doing a second podcast (the first was a couple of years ago) on a NERC CIP topic of my choosing. I jumped at the chance, since I consider the fact that NERC entities with medium and high impact CIP environments are in essence “forbidden” to utilize the cloud for some of their most important reliability and security workloads to be the biggest NERC CIP-related problem facing the power industry today.

Moreover, I have heard from multiple knowledgeable people in the NERC Regions that this problem is rapidly getting worse and that, if nothing is done about it in the next 2-3 years, there will likely be negative impacts to the security and reliability of the Bulk Electric System – due to the increasing number of software and services vendors that have announced they will soon only support cloud users.

The podcast was just posted, including a complete (and accurate) transcription of my conversation with Ray Lapena of ID. I’d like to hear any comments you have about this podcast.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Friday, April 19, 2024

Would you like to help figure out the best path(s) forward on vulnerability databases?

Many organizations in the software supply chain security community have assumed for years that the National Vulnerability Database (NVD), despite its various problems, is the de facto international standard for vulnerability databases. They also believe it can be relied on going forward to be the “bread and butter” database that meets most of the needs for most of the organizations involved with the community. However, the seeming inability of the NVD to fulfill that role since mid-February 2024, and the fact that there hasn’t even been an attempt to explain what the problem is, have made it clear this is no longer a good assumption.

In the wake of this event, there are three questions that need to be answered. The Vulnerability Database Project of the OWASP SBOM Forum proposes to develop answers to these three questions, based on discussions that will be open to all parties concerned with software supply chain security in general and vulnerability management in particular. If you would like to participate in weekly discussions and help create a document addressing the first two of these questions, and/or if your organization can support this effort through a donation to OWASP (a 501(c)(3) non-profit corporation), please drop me an email at tom@tomalrich.com.

The first question is, “What options are available for NVD users, both to replace services they have been counting on from the NVD and to go beyond what the NVD has traditionally offered?” There are many other vulnerability databases available, both free and commercial. These provide one or more of the services the NVD has provided, but also go beyond the NVD in various ways. Questions include:

1.        What are these other databases?

2.        How do their offerings map to what the NVD has been offering?

3.        What are their offerings that go beyond the NVD?

4.        In what ways do they differ from the NVD, for example in vulnerability identifiers supported (CVE, OSV, GitHub Security Advisories, etc.), software identifiers supported (CPE, purl, or other), and types of products supported (open source software projects, proprietary products, intelligent devices – as well as sub-categories of these)?

5.        Given that most of these alternative databases do not cover the entire range of what the NVD covers, how can NVD users “mix and match” the different offerings so that, depending on their individual needs, they end up with at least the same level of functionality they previously received from the NVD and hopefully a lot more? And without ending up with a hopeless mishmash of incompatible vulnerability data?

6.        Given that the CVE.org database is the original source of most of the data in the NVD and that its infrastructure is much more robust and modern than the NVD’s, how hard would it be for current NVD users to switch over to using CVE.org as their primary vulnerability database – as one major NVD user has recently done? What could be added to CVE.org to facilitate this switch, such as a more end-user-friendly front end? What would be the advantages of using CVE.org over the NVD, including much-sooner support for purl and the fact that the originators of all CVE data – the 300+ CVE Numbering Authorities (CNAs) – are part of CVE.org? 

The second question is, “What steps should the US government take with respect to this problem?” These might include:

1.        Doing nothing and hoping the NVD has a miraculous recovery.

2.        Actively investing in the NVD’s infrastructure, which will probably require a complete rebuild from scratch.

3.        Reversing the current arrangement, in which the NVD is the primary vulnerability database and CVE.org is just an “alternate data provider (ADP)” to it, so that CVE.org becomes the primary database and the NVD an ADP.

4.        Getting out of the vulnerability database business and leaving that to the private sector, while maintaining CVE.org as by far the leading provider of vulnerability data – including investing heavily in the CNAs, given their irreplaceable role in the vulnerability identification process worldwide. 

The third question is, “What is the best long-term solution to the vulnerability database problem worldwide?" While there can be many views, Tom believes the following are “self-evident truths” (with apologies to Thomas Jefferson):

1.        Requiring a single uber vulnerability database (“One Database to Rule Them All”) that will somehow gather, harmonize and synchronize data from all other databases is a concept whose time has come and gone. There are many vulnerability databases operated in different ways by different organizations. Let them all continue to operate as they always have. Instead, there needs to be an AI-powered central “switching hub”, which might be called the Global Vulnerability Database (GVD). Queries to the GVD could use any major type of software and vulnerability identifier; the hub would route each query to the most appropriate database or databases and route the response(s) back to the end user. It would also harmonize the responses when needed.

2.        Of course, the GVD needs to be a truly global effort. It cannot be under the control of any single government or private sector organization, although all governments and organizations will be welcome to contribute to it (Tom believes that raising funds to create and maintain this “database” won’t be hard at all, given that nobody but US taxpayers is currently allowed to contribute to the NVD. It isn’t at all surprising that the NVD is chronically underfunded, despite being used worldwide).

3.        Developing the GVD will require a nonprofit organization to manage the process. When (and if) the GVD is running smoothly, operation of the database might be turned over to an organization like the Internet Assigned Numbers Authority (IANA), which manages IP addresses and DNS. Otherwise, the nonprofit organization would continue to operate the GVD in perpetuity.
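The “switching hub” in point 1 above can be sketched in a few lines: route each query according to the scheme of the identifier it uses. Everything here (the routing rules and the backend names) is hypothetical, purely to illustrate the concept:

```python
def route_query(identifier: str) -> str:
    """Toy routing rule for the hypothetical GVD switching hub.
    Backend names are placeholders, not real services."""
    if identifier.startswith("pkg:"):          # purl: open source packages
        return "oss-package-databases"
    if identifier.startswith("cpe:2.3:"):      # CPE: proprietary products/devices
        return "cpe-keyed-databases"
    if identifier.upper().startswith("CVE-"):  # vulnerability ID: fan out broadly
        return "all-databases"
    return "unrecognized-identifier"

print(route_query("pkg:npm/left-pad@1.3.0"))
```

A real hub would obviously need far more than prefix matching (harmonizing the responses is the hard part), but the dispatch idea itself is this simple.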

The SBOM Forum Vulnerability Database Project will initially focus on the first two questions. The group will collaborate on one or more documents to answer these questions. Rather than wait until the documents are complete, the group will publish a current draft every two months, to maintain interest in the project among the software security community and to invite feedback on the work so far.

When the first two questions are answered and the results have been published, the group can start work on the third question. Since the end result of that effort might be a workable design for the GVD, that effort could easily take multiple years.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


Wednesday, April 17, 2024

Everything you always wanted to know about VEX (and TEA), but were afraid to ask


Two weeks ago, Steve Springett (leader of the OWASP CycloneDX and Dependency Track projects, and recently elected OWASP board member) and I recorded a podcast with Deb Radcliff, whose podcasts are widely followed in the software development community and are sponsored by CodeSecure. The podcast is called “VEXing SBOMs”, and you can find it here. Briefly, here are the main topics that we covered:

1.      We discussed use cases for SBOM and VEX.

2.      Steve discussed how SBOMs have become a natural part of the build pipeline.

3.      I pointed out that IMHO the number one reason why SBOMs are not being distributed to and used by software end users (i.e., the 99.9% - or so - of public and private organizations worldwide whose primary business is not software development) is the fact that there are currently no strict specifications for VEX on the two original VEX “platforms”: Common Security Advisory Framework (CSAF) and CycloneDX.

4.      I also noted that Anthony Harrison of the OWASP SBOM Forum has recently remedied that problem. This is a key step toward the goal that the SBOM Forum hopes to achieve before the end of 2024: starting a proof of concept in which end users benefit from the “full stack” of software component vulnerability management, namely utilization of SBOM and VEX to allow end users to learn about exploitable component vulnerabilities in their software, and ultimately to be able to quickly answer the question, “Where on our network are we vulnerable to (insert name of “celebrity vulnerability” du jour)?” You can read more about the proof of concept in Part 3 of my book (see below).

5.      Steve described the OWASP Transparency Exchange API project, which is described in this draft document. In my opinion, this will be the key enabler of distribution and use of SBOMs and VEX documents.
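The “full stack” described in point 4 boils down to a simple filter: cross the component list from an SBOM with the statements in a VEX document, and keep only the vulnerabilities marked exploitable. A toy sketch follows; the data shapes are illustrative, not the actual CycloneDX or CSAF formats:

```python
# Components listed in the product's SBOM, identified by purl.
sbom_components = {
    "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
    "pkg:npm/lodash@4.17.21",
}

# Simplified VEX statements from the supplier.
vex_statements = [
    {"vuln": "CVE-2021-44228",
     "component": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
     "status": "affected"},
    {"vuln": "CVE-2020-8203",
     "component": "pkg:npm/lodash@4.17.21",
     "status": "not_affected"},   # e.g., the vulnerable code isn't reachable
]

# Keep only vulnerabilities the supplier says are exploitable AND that
# apply to a component actually present in this product.
exploitable = [s["vuln"] for s in vex_statements
               if s["status"] == "affected" and s["component"] in sbom_components]
print(exploitable)
```

Only the log4shell CVE survives the VEX filter, which is the whole value proposition: end users spend their patching time on exploitable vulnerabilities, not on every component vulnerability ever reported.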

Thanks for inviting us, Deb!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

Monday, April 15, 2024

Two months and counting

I’ve written a number of posts lately on the problems with the National Vulnerability Database (NVD); this one was the first. Briefly speaking, around the middle of February, the NVD greatly slowed the rate at which it incorporated new CVEs into the database (CVEs originate in the CVE.org database, which is run by the Department of Homeland Security. The NVD is run by NIST, which is part of the Department of Commerce).

In addition, the small number of new CVEs that have appeared in the NVD since mid-February don’t have CPE names with them (CPE is the only software identifier supported by the NVD). A CVE report without a CPE name is about as useful as a car without a steering wheel, since the whole point of a CVE report is to identify the product(s) affected by the vulnerability (i.e., the CVE). While CPE has a fixed specification and CPE names could in theory be generated automatically, the NIST staff members that run the NVD feel compelled to create each CPE name manually.
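For what it’s worth, a CPE 2.3 “formatted string” has a fixed 13-field layout, which is why generation could in principle be automated. A minimal parser looks like this (it ignores the spec’s escaping rules for colons inside field values, which a real implementation would have to handle):

```python
# The 13 fields of a CPE 2.3 formatted string, in order.
FIELDS = ["cpe", "cpe_version", "part", "vendor", "product", "version",
          "update", "edition", "language", "sw_edition", "target_sw",
          "target_hw", "other"]

def parse_cpe(name: str) -> dict:
    """Split a well-formed CPE 2.3 formatted string into named fields.
    Naive: does not handle colons escaped inside field values."""
    parts = name.split(":")
    if len(parts) != len(FIELDS):
        raise ValueError("not a well-formed CPE 2.3 formatted string")
    return dict(zip(FIELDS, parts))

cpe = parse_cpe("cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*")
print(cpe["vendor"], cpe["product"], cpe["version"])
```

The hard part of CPE creation was never the syntax; it’s choosing vendor and product values consistent with the names already in the NVD’s dictionary, which is why NIST does it by hand.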

However, it seems they’re not doing that very well, either. See the graph below, which was created last week by Patrick Garrity of VulnCheck. The X axis labels are very small, but each day of 2024 is a datapoint. On February 12, the “(CVEs) Analyzed” line (in green) flatlined. It has remained at an almost constant value since then, meaning almost no new CVEs have been analyzed in two months; since the NVD staff members only create a CPE name to go with a CVE when they “analyze” the CVE, this means that virtually no useful CVE reports (i.e., reports that link a CVE with one or more CPE names) have been added to the NVD since February 12.



Of course, this has not been due to a lack of new CVE reports coming from CVE.org. The red “(CVEs) Awaiting Analysis” line has steadily climbed since February 12. In other words, since February 12, new CVEs have appeared at their normal pace, but almost no new CVE reports have been analyzed by the NVD staff, meaning they still do not have CPE names.

What happened to cause this problem? NIST has put up four or five notices since late February, the latest of which is this one. It offers no explanation, of course, even though one has been promised a couple of times. However, sometimes actions (or in this case, inactions) speak much louder than words. Here is what I think NIST is really telling us:

1.      We still don’t fully understand what happened on Feb. 12. However, it wasn’t any sudden increase in new CVEs to analyze, any sudden decrease in staff, any sudden loss of funding, etc. The NVD has always been understaffed and underfunded, and new CVEs have increased most years.

2.      No matter what the cause of the problem (other than a direct nuclear strike), we would have been up and running within minutes of the event – if our infrastructure weren’t two decades old. Any important modern database is fully redundant, but we have always had single points of failure. Clearly one or more of these failed.

3.      Ironically, all of the data in the NVD is also in CVE.org, which utilizes a modern, fully-redundant database infrastructure. Why don’t we switch all queries to CVE.org, you ask? We refer you to Tom’s earlier statement: CVE.org is part of DHS, while we are part of the Department of Commerce. Maybe the two Secretaries will meet to work this out. And maybe Israel will sit down and have a good talk with Iran. But don’t count on either of these happening anytime soon.

4.      We would like to tell you that we’re working on the problem, but how can we do that when we still don’t understand it? Instead, we’re going to tell you about an idea we discussed with the OWASP SBOM Forum a year ago, but never followed up on: a “consortium” of private companies that will help us fix our problems. That will take 9-12 months at a minimum to put into place, and even then, it’s not clear what this group could do to fix our ancient infrastructure. But we have to point to something we’re going to do, rather than just admit we’ll continue to run from crisis to crisis - which is the most likely outcome.

5.      Have a nice day!

To sum up, we’re two months into the NVD’s problem, and we still don’t have even a partial explanation of the problem, let alone a full one. And we definitely don’t have a solution!

What’s the next step, both for your organization and the US government? First, we need to figure out what the options are. The OWASP SBOM Forum is assembling a group to do exactly that, and expects the group to start meeting soon. Let me know if you’d like to participate, whether by contributing your time, your organization’s money, or both (participation does not require a contribution).

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

Thursday, April 11, 2024

It’s time to figure out this whole vulnerability database problem

Tom’s note: I sent out the notice below to the members of the OWASP SBOM Forum and it’s generated a lot of interest. It seems people agree with me (there’s a first for everything, I suppose!) that there are just too many threads to be pulled for one person, or just a small group of people, to figure out the best strategy (or strategies, more likely) for moving forward on this issue. If you’re interested in this, please let me know.

OWASP Foundation Vulnerability Database Project

Since mid-February, the amount of usable vulnerability data added to the National Vulnerability Database (NVD) has significantly declined compared to its previous average levels. This occurred without prior warning and has not yet been explained. While the level of production seems to be gradually increasing, NIST (which operates the NVD) has not estimated when it will return to normal levels.

Moreover, NIST has not announced any measures to prevent whatever caused the problem from recurring, other than describing their desire to form a “Consortium” of private sector organizations willing to help NIST fix the NVD’s problems. Since the Consortium will take a minimum of 6-9 months to put in place (and someone with more government experience than I have told me this week that, given what has to be done, three years might be a better estimate), and since it is unclear how the Consortium might be able to assist the NVD once it is in place, the Consortium is unlikely to have much impact on the NVD’s problems in the short or intermediate term.

While everyone in the software security community hopes the NVD will be able to fix its problems, it is evident the community cannot count on the NVD being a dependable source of new vulnerability data going forward - although there is no danger that the data currently in the NVD will ever become unavailable. It is time to explore all options for providing dependable and comprehensive vulnerability data to users in the US and worldwide in the short, intermediate, and long terms.   

Fortunately, today there are multiple good vulnerability database options available in both the private and public sectors. These include CVE.org, the database operated by the Department of Homeland Security. This is the source of all CVE data (used in the NVD and other databases), and is based on a modern, fully redundant infrastructure, unlike the NVD’s – although it currently lacks the user interface of the NVD. At least one large software developer has switched to using CVE.org for most of the data it previously retrieved from the NVD.

The options also include multiple privately-run databases of open source software vulnerabilities, as well as databases that include all NVD data, along with enhancements not found in the NVD.

However, the wealth of vulnerability database options also poses a challenge, since it is hard to compare the databases. This is because they address different types of software (and devices), use different identifiers for that software, list different identifiers for vulnerabilities, are at different levels of maturity, have different relationships with data sources, etc.

Even more importantly, the characteristics of the different databases are not set in stone, and some are more adaptable than others. For example, both the NVD and CVE.org databases currently identify all software and intelligent devices using “CPE names”. In 2022, the OWASP SBOM Forum described the many problems with CPE names, and the superiority of purl (package URL) identifiers for open source software, in this white paper. CVE.org currently accepts (on a trial basis) the “CVE JSON 5.1 specification”. That spec (thanks to a pull request submitted in early 2022 by Tony Turner of the SBOM Forum) makes it possible for CVE.org to utilize purl identifiers, once those are added to CVE reports by the CVE Numbering Authorities (CNAs). However, it will be at least 2-3 years before the NVD supports the 5.1 spec and is thus able to accept purl.
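The practical difference between the two identifiers is easy to see side by side: a CPE name relies on vendor and product strings that the NVD assigns manually, while a purl is built from the coordinates the package already has in its own ecosystem (`pkg:type/namespace/name@version`), so it can be generated mechanically. A minimal sketch; the specific identifier values here are illustrative, not authoritative NVD or ecosystem entries:

```python
# Sketch: the same open source package expressed as a CPE 2.3 name
# and as a purl (package URL). Values are illustrative examples,
# not authoritative NVD/ecosystem entries.

def make_purl(pkg_type: str, name: str, version: str, namespace: str = "") -> str:
    """Assemble a minimal purl: pkg:type/namespace/name@version."""
    middle = f"{namespace}/{name}" if namespace else name
    return f"pkg:{pkg_type}/{middle}@{version}"

# CPE: vendor/product strings assigned by the NVD's manual analysis.
cpe = "cpe:2.3:a:lodash:lodash:4.17.20:*:*:*:*:node.js:*:*"

# purl: derived directly from the package's own npm coordinates.
purl = make_purl("npm", "lodash", "4.17.20")
print(purl)  # pkg:npm/lodash@4.17.20
```

This is the core of the SBOM Forum’s argument: the purl can be produced by anyone who knows where the package was downloaded from, with no central naming authority in the loop.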

The OWASP SBOM Forum believes it is important now to examine the near- and intermediate-term vulnerability database options that are available, both to end user organizations (anywhere in the world) and to government agencies. There are two main reasons for doing this:

1.      Users of software vulnerability data need to determine their best options for obtaining the current vulnerability data they need, and the advantages and disadvantages of each option. Some users may decide to utilize multiple vulnerability databases rather than just one, and thus will want to know what each option provides them.

2.      The US government needs to decide its best options for allocating investments in vulnerability reporting - if there will even be further investment. Investing heavily in a database with an out-of-date infrastructure would not be a good idea, assuming there are more up-to-date options.

Rather than simply having a small number of people write a white paper, the SBOM Forum wishes to establish a working group that is open to all interested parties in any country. It can include end user organizations, software developers, operators of public and private vulnerability databases, individuals who work for government agencies, vendors of vulnerability management tools, and more. The group will probably hold bi-weekly meetings in the morning US time, to allow as much European participation as possible. However, since the document(s) will be developed cooperatively, anyone will be able to participate in drafting them, regardless of their time zone.

Because OWASP SBOM Forum members will need to devote a significant amount of time to this project, they will need to receive some compensation. Since the OWASP Foundation is a 501(c)(3) nonprofit corporation (a type of nonprofit to which donations are often tax deductible), and since donations to the OWASP Foundation that are over $1,000 can be directed to an OWASP project such as the SBOM Forum, we are requesting that organizations or individuals that are concerned about having a robust software vulnerability management ecosystem contribute to this effort.[i]

You are free to donate any amount you would like, or not to donate at all. Any donation of $5,000 or more will be acknowledged with your logo on our website, assuming you would like to do that. Note that participation in this project does not require any donation.

If we receive sufficient donations and there is interest, the SBOM Forum will extend the project to consider the longer term. In this extension, the question will change from “What are the options in the near and intermediate terms?” to “What is the optimal global vulnerability database structure long term?”

It is close to certain that the optimal long term vulnerability database option is an international one, funded by both public and private organizations but not operated by a single government or for-profit organization. One model (or even a final home) for that option might be the Internet Assigned Numbers Authority (IANA), which manages the DNS root zone and performs other functions that support the global internet.

IANA (now part of ICANN) was originally operated by the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce, but is now internationally governed. The global vulnerability database would need to be “incubated” initially by a consortium of private- and public-sector organizations, just as DNS was incubated by the NTIA.

Should the long term project move forward, it is likely the group will consider the idea of not having a single database at all. Instead, there could simply be a federation of existing vulnerability databases, linked by an AI-powered “switching hub”. That hub would route user inquiries to the appropriate database or databases and return the results to the user. Using this approach would of course eliminate the substantial costs required to integrate multiple databases into one, and to maintain that structure. It would also probably eliminate any need to “choose” between different vulnerability identifiers (e.g., CVE vs. OSV vs. GHSA, etc.) or different software identifiers (CPE vs. purl).
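Even without the AI part, the switching-hub idea can be sketched as a simple dispatcher that routes a query to a backend database based on the identifier’s syntax alone. The routing table below is hypothetical; a real hub would be far richer, might fan a query out to several databases, and could use AI to rank or merge the results:

```python
# Sketch of the "switching hub" concept: route a vulnerability or package
# identifier to an appropriate backend database by its syntax alone.
# The routing table is hypothetical, not a proposal for actual routing rules.
import re

ROUTES = [
    (re.compile(r"^CVE-\d{4}-\d{4,}$"), "cve.org"),
    (re.compile(r"^GHSA-[0-9a-z-]+$"), "GitHub Advisory Database"),
    (re.compile(r"^pkg:"), "OSV (query by purl)"),
    (re.compile(r"^cpe:2\.3:"), "NVD (query by CPE)"),
]

def route(identifier: str) -> str:
    """Return the name of the database that should answer this query."""
    for pattern, database in ROUTES:
        if pattern.match(identifier):
            return database
    return "unknown identifier type"

print(route("CVE-2021-44228"))          # cve.org
print(route("pkg:npm/lodash@4.17.20"))  # OSV (query by purl)
```

The point of the sketch is that federation sidesteps the identifier wars: each identifier type simply lands at a database that natively understands it.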

We hope your organization will decide to participate in this important project and will also consider donating to it. Please contact Tom Alrich at tom@tomalrich.com with any questions.



[i] Donations by credit card can be made online and directed to the OWASP SBOM Forum, by going to our OWASP site. While the process is straightforward, we request that you email Tom Alrich before donating. For non-credit card donations, please email Tom. Note that OWASP retains 10% of the donation for administrative purposes (although any tax deduction will apply to the entire donation). Given the amount of work that SBOM Forum members would have to do if we were running our own nonprofit organization, we consider this to be quite acceptable.

Wednesday, April 10, 2024

Are you sure this is “critical” infrastructure?

 

My friend Mike Barlow put up a great post on LinkedIn this week, which points out a huge irony regarding critical infrastructure (including most devices that run power substations, gas pipelines, oil refineries, etc.): While CISA and others are constantly advocating for use of “memory safe” programming languages for new software and firmware, most legacy devices (whether or not they’re for critical infrastructure) operate on definitely-non-memory-safe languages like C and C++.

Mike summarizes this situation quite succinctly: "…your exercise app is probably more secure than the code running at your local electric power station." Does that make you feel safe?

What’s there to be done about this? I dunno. Replacing all that equipment will be tremendously expensive, although obviously any replacement efforts should start with the most critical equipment. Perhaps baby monitors can be left ‘til the end, although I imagine that, being much newer than for example some electronic relays deployed in power substations, the baby monitors have much safer code than the relays.

This is a good example of “technical debt”. We – and probably the rest of the world, except countries with much newer infrastructure, perhaps due to having just come through a war – have a lot of such debt to pay. Of course, I doubt there’s a line anywhere in the federal budget about paying technical debt. As often happens, we’ll wait ‘til things start breaking down. 


 

Saturday, April 6, 2024

NERC CIP: Is there a shortcut to the cloud?


As I pointed out in this post in January (and have many times in previous years), NERC entities with medium and/or high impact BES Cyber Systems, Electronic Access Control or Monitoring Systems (EACMS), and Physical Access Control Systems (PACS) can’t currently make full use of the cloud, unless they want to risk violating a number of CIP requirements literally every day of the year.

As more and more software and service providers (including security service providers) announce their software or services will only be delivered in the cloud in 1-2 years, there is real concern (including among NERC and Regional Entity staff members) that there could soon be significant impacts on both grid reliability and grid security. One major ISO has said they will need to lower their security rating in two years, due to their security service providers ceasing to offer an on-premises option.

While there will soon be a process underway to make all the changes to the CIP standards (and the NERC Rules of Procedure) that are needed to make use of the cloud fully “legal”, that process will take probably six years, and maybe longer than that. That process needs to continue, but it clearly won’t finish before the reliability and security impacts begin to be felt.

Why do we have this problem? It isn’t because the language of the current CIP requirements prohibits use of the cloud. Those requirements say nothing at all about the cloud. This is because they were originally drafted starting in 2008, when use of the cloud was considered quite risky by most NERC entities (and certainly by FERC). Of course, even today it’s doubtful that many organizations of any type think of the cloud as risk-free. However, the huge number of successful attacks against on-premises systems shows on-prem isn’t risk-free, either.

Instead, the reason why we say the current CIP standards prevent use of the cloud for medium and high impact systems is that the cloud service provider would never be able to provide the evidence the NERC entity needs to prove compliance with CIP-005 R1, CIP-007 R2, CIP-010 R1 and other current requirements. As everyone who is involved with NERC compliance knows, “If you don’t have evidence that you did it, you didn’t do it.”

Why is this the case? I’m going to let you in on a dirty little secret about CIP: There are more “implicit requirements” than “explicit requirements”. Explicit requirements are the ones listed in the standards, while implicit requirements are unwritten, but implied by explicit requirements. In other words, while a NERC entity can’t be cited for violating an implicit requirement, often performing an implicit requirement is a prerequisite for complying with an explicit requirement. If you don’t “comply” with the implicit requirement, you’ll be in violation of the explicit one.

One of the most important implicit requirements in CIP has to do with the fact that BES Cyber Systems (BCS) are defined simply as collections of one or more BES Cyber Assets (BCA). BCAs are physical devices you can point to, while BCS are just a function performed cooperatively by multiple devices. You can’t point to a BCS.

The problem arises because none of the CIP requirements today mentions BCAs, only BCS. For some of the requirements, like the ones for training, policies, etc., this is fine. However, think about CIP-007 R2 for patching. It applies to BCS, but do you apply a patch to a system or to a device? Of course, you patch the device. So, complying with CIP-007 R2 in fact requires that you first comply with the implicit requirement that says something like, “Everything required by each of the parts of CIP-007 R2 needs to be repeated for every device included in the BCS.”

Now, think of what the CSP would have to do to provide evidence of compliance with just CIP-007-6 R2.2, if you put a medium or high impact BCS in the cloud today. Every 35 days, they would need to:

1.      Inventory all the physical devices on which any portion of that BCS has resided during the last 35 days (of course, systems in the cloud are always spread across many servers and data centers, and they are moved around all the time. That’s how the cloud works). Each of those physical devices needs to be included in the inventory, even if just a small part of the BCS resided on it for just a few seconds.

2.      Inventory every piece of software that is installed on any of those devices (no matter which cloud customer the software belongs to) and inquire with the developer of the software whether they have released any patches in the past 35 days.

3.      For each of those patches, evaluate it for applicability to the system it’s part of (which will be impossible, since those systems are unlikely to be ones that your organization has anything to do with. They may be owned by entities in completely different industries, foreign nationals, etc.).

And remember, the CSP will need to perform all these actions and document that they did so, despite the fact that their normal patching procedures (or perhaps those of their customers, since most CSPs follow a “shared responsibility” model) may have already patched all of the software products identified in step 2. For prescriptive CIP requirements like CIP-007 R2, CIP-005 R1 and CIP-010 R1, the only evidence of compliance is evidence that the exact steps mandated by the requirement (and by any implicit requirements, as described above) were followed.

Do I need to go on? I didn’t think so. Remember, these three steps (which might need to be performed thousands or even tens of thousands of times) are just for one part of one requirement. Think of what the CSP would need to do, to provide evidence to a NERC entity to prove compliance with every part of every requirement for say 200 BCS for a three-year audit period! The CSP would have to provide many millions of pieces of evidence to the entity.

Thus, if you signed a contract to put just one BCS in the cloud and then got on a call with the CSP to explain what they need to do to gather the evidence you need, you would probably lose them as soon as you described the first step: I strongly doubt any CSP maintains a log of every device on which a piece of a single BCS might have resided over one week, let alone three years. The CSP would apologize for not properly understanding what you needed when you first signed the contract with them, but they would have to tell you that neither they nor any other CSP could ever provide you with compliance evidence for even a single BCS for one week. There’s simply no way they could do that, without completely breaking their business model.

However, recently it occurred to me that perhaps CIP-013-2 might offer a way to include cloud services within the scope of compliance, even for NERC entities with medium or high impact systems. Consider this:

1.      A NERC entity decides to implement a single medium impact BCS in the cloud. They do this by signing a contract with the CSP.

2.      CIP-013-2 R1.1 says the scope of R1 is “the procurement of BES Cyber Systems and their associated EACMS and PACS to identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services.” (my emphasis) In signing the contract, the entity isn’t procuring any hardware or software products from the CSP, but they’re certainly procuring services.

3.      Therefore, the relationship between the entity and the CSP falls into the scope of CIP-013. The entity should treat the CSP the same as they would treat any other service provider in scope for CIP-013.

Because the cloud service is one of the products and services that needs to be addressed in the NERC entity’s supply chain cybersecurity risk management plan, the entity will need to include it as one of the procured items in the plan. Just as they must do for all other procured products and services in scope, the entity needs to describe in the plan how they will “identify and assess cyber security risk(s) to the Bulk Electric System” arising from the cloud service.

What are these risks? At a minimum, they need to include the six risks described in R1.2.1 through R1.2.6.[i] But of course, there are lots of other risks that apply to cloud service providers. Rather than leave it up to each NERC entity to decide what those risks are, NERC will need to provide a list of types of cloud risks that must be addressed in the plan.

Since the CSPs will never permit every NERC entity to audit them, NERC could do an “audit” themselves. An important component of the audit might be reviewing the CSP’s ISO 27001 certification documentation to determine whether it meets a certain set of criteria established in advance. It would be up to the entity to decide whether to accept NERC’s audit in whole, in part, or not at all. Of course, there might be other steps in the CIP-013 cloud compliance process as well.

By following CIP-013, the NERC entity no longer needs evidence that the CSP has complied with requirements like CIP-007 R2, any more than they need evidence that other vendors addressed in CIP-013 (e.g. the vendor of relays used in substations) have complied with CIP-007 R2. However, the utility will need to provide evidence of the vendor’s compliance with CIP-004-7 Requirement Parts R3.4, R6.1, R6.2, and R6.3 (and perhaps one or two other Requirement Parts in CIP-004-7). Some CSPs may balk at providing this documentation, but given that the alternative (being required to provide reams of documentation that is literally impossible to produce) is much worse, they will hopefully agree this isn’t such a terrible fate.

Where’s the catch? For one thing, I’ll admit I’ve taken some liberties with the term “services”. There’s not much doubt that the CIP-013 drafting team never intended “services” to include cloud services – but they never defined the term, so there’s no way to know what they intended (moreover, the Rules of Procedure don’t provide any mechanism for considering the drafting team’s intentions as part of a CIP audit). I’ll also admit that the NERC “audit” of the CSPs, which I described above, would require changes to the Rules of Procedure, or perhaps some sort of temporary waiver. There will need to be some sort of intervention by someone at NERC (most likely the Board of Trustees) to smooth the path for this change.

But it’s important to keep the big picture in mind: Like it or not, if no change is made within two or three years, the choice will be between (a) accepting a lower level of security and (perhaps) reliability for the power grid, and (b) allowing NERC entities with high and/or medium impact environments to utilize cloud-based software and services while being in technical violation of a host of CIP requirements. Moreover, the latter option would be rushed into place during a grid emergency, as opposed to having plenty of time now to carefully plan and implement the CIP-013 option.

You pays your money and you takes your choice.



[i] These six items are included in the standard because FERC mentioned each of them at various places (and in varying contexts) in Order 829 in 2016. They’re not there because the Standards Drafting Team (or FERC itself) considered them to be the most serious supply chain cybersecurity risks. The Responsible Entity is still required to examine risks and determine for themselves which ones are important enough to mitigate and which ones are not.