Friday, May 31, 2024

The NVD continues to underwhelm


Yesterday, the NVD put up the latest episode of their ongoing soap opera, “As the NVD declines”, in the form of this announcement on their website:

May 29, 2024: NIST has awarded a contract for additional processing support for incoming Common Vulnerabilities and Exposures (CVEs) for inclusion in the National Vulnerability Database. We are confident that this additional support will allow us to return to the processing rates we maintained prior to February 2024 within the next few months.

In addition, a backlog of unprocessed CVEs has developed since February. NIST is working with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) to facilitate the addition of these unprocessed CVEs to the NVD. We anticipate that this backlog will be cleared by the end of the fiscal year.

As we shared earlier, NIST is also working on ways to address the increasing volume of vulnerabilities through technology and process updates. Our goal is to build a program that is sustainable for the long term and to support the automation of vulnerability management, security measurement and compliance.

With a 25-year history of providing this database of vulnerabilities to users around the world and given that we do not play an enforcement or oversight role, NIST is uniquely suited to manage the NVD. NIST is fully committed to maintaining and modernizing this important national resource that is vital to building and maintaining trust in information technology and fostering innovation. 

Moving forward, we will keep the community informed of our progress toward normal operational levels and our future modernization plans.

This announcement was loudly trumpeted by an article in Cybersecurity Dive today. The headline made me open the article, where I was immediately disappointed by the first sentence: “The National Institute of Standards and Technology expects to clear the towering backlog of unanalyzed vulnerabilities in the National Vulnerability Database by the end of September, the agency said in a Wednesday update.”

Why was this disappointing? To understand why, you need to understand the two most important activities performed by the NIST NVD staff:

1.      Importing CVE reports produced by CVE.org and integrating them into the NVD database.

2.      “Analyzing” the reports, which primarily consists of a) creating and adding a CVSS score (if one isn’t already present), b) adding CWEs, and c) adding CPE names. CPE names are by far the most important of these items. Without them, a CVE report is the rough equivalent of a car without a steering wheel: you know there’s a new vulnerability out there, but you have no idea which product(s) are vulnerable to it unless you read the text of the report. And even then, text isn’t enough: the CPE name of a vulnerable product needs to be in the report, since without it, nothing in the NVD links the vulnerability to the product (a minimal illustration of this role follows).
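To make that role concrete, here is a minimal sketch in Python. The CVE ID, vendor, and product strings are invented for illustration, and the parser ignores CPE escaping rules; the point is simply that the CPE names attached to an enriched record are what let a user tie a CVE to a product they can actually search for.

```python
def parse_cpe23(cpe: str) -> dict:
    """Split a CPE 2.3 name into its named components (simplified: ignores escaping)."""
    fields = ["part", "vendor", "product", "version", "update", "edition",
              "language", "sw_edition", "target_sw", "target_hw", "other"]
    parts = cpe.split(":")
    if parts[:2] != ["cpe", "2.3"] or len(parts) != 13:
        raise ValueError(f"Not a well-formed CPE 2.3 name: {cpe}")
    return dict(zip(fields, parts[2:]))

# A hypothetical CVE record after enrichment; without the "cpes" list, nothing
# in the database ties the CVE to a searchable product.
enriched_cve = {
    "id": "CVE-2024-00000",   # placeholder ID
    "cpes": ["cpe:2.3:a:examplevendor:exampleproduct:2.1:*:*:*:*:*:*:*"],
}

for cpe in enriched_cve["cpes"]:
    c = parse_cpe23(cpe)
    print(f'{enriched_cve["id"]} affects {c["vendor"]} {c["product"]} {c["version"]}')
```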

However, the NVD didn’t lie when they said in their announcement, “NIST has awarded a contract for additional processing support for incoming Common Vulnerabilities and Exposures (CVEs) for inclusion in the National Vulnerability Database”, right before they said, “In addition, a backlog of unprocessed CVEs has developed since February.” The first quote refers to item 1 above: the “additional processing support” in the new contract is to help the NVD ingest CVEs into the database. The second quote refers to the enrichment of those CVEs. That is the backlog they haven’t even begun to address, let alone found the funds to reduce (of course, reducing that backlog will require many more hours of effort, although it will not be technically difficult). CISA is trying to reduce it on their own, but only for a small percentage of the backlogged CVEs.

This is more than passing strange. After all, the NVD has been processing CVE reports since the early years of this century. Since that processing doesn’t add anything to a report beyond what the CNAs (who work on behalf of CVE.org) have already included, and since by now parsing the reports and populating the appropriate NVD fields should happen almost as soon as a report is received, why does the NVD even have a backlog of CVEs to process, let alone need to pay a contractor $865,000 to reduce it?

I’m sure the reason is the same one that probably explains the collapse of the enrichment function on a single day (February 12): the NVD’s hardware and software infrastructure was created two decades ago. Presumably, whoever developed it has long since departed the NVD (and perhaps this world), without leaving behind what would be considered top-notch documentation today.

Contrary to what the article says, CISA’s funding cutback doesn’t explain the sudden collapse of the database on February 12. Nor does the fact that they received a larger-than-normal number of new CVE reports then. No modern database should choke and be down for 3 ½ months (and counting) due to a sudden increase in workload. In fact, no modern database should be down for even a day due to any technical problem, let alone 3 ½ months. But we’ve known for a long time that there are multiple single points of failure in the NVD’s infrastructure.

By the way, has anyone heard the NVD’s explanation for what happened in February, or even an apology?...I didn’t think so, since they still haven’t provided one (note they didn’t do that in their announcement yesterday, either). This must mean one of two things, neither of which is good:

1.      They still haven’t figured out what happened; or

2.      They know what happened, but don’t think their worldwide users deserve to hear that.

I’m not sure which is worse.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Wednesday, May 29, 2024

The Road to Cloud CIP, part 2: the CIP-013/ISO 27001 solution


In my first post in this series, I pointed out what I think is an essential element of new CIP standards that will accommodate cloud use for NERC entities with medium and/or high impact environments: having separate “tracks” for cloud vs. on-premises systems (with the requirements for the latter being almost exactly the same as the current CIP requirements).

However, as I pointed out in this post, the full NERC process of developing the new standards and getting them approved by NERC and FERC – which started three weeks ago - will likely take at least five years. This isn’t good, since more and more software and services vendors (and especially security services vendors) are moving to a cloud-only model, even though they know some of their electric utility customers won’t be able to follow them there. It is likely that the security and reliability of the North American Bulk Electric System (BES) will soon be negatively impacted because of this situation.

In this more recent post, I pointed out three things. First, I noted that the big problem inhibiting cloud use for CIP systems is the fact that the CIP requirements are based on the idea of protecting the physical device (e.g. a server) on which a system in scope is housed – yet, the software that comprises those systems moves rapidly from server to server all the time. Strict application of the CIP requirements (which were mostly developed before cloud use became widespread, and which never mention the cloud once) mandates first that every cloud server that holds even a small piece of a system in scope for CIP - say, an EACMS - comply with all the CIP requirements that apply to that asset type, no matter how inapplicable they may be to the cloud.

Second, those requirements also implicitly mandate that every server that might in the future hold part of one of those systems be continually protected during an entire three-year audit cycle. Given that there are over 40 applicable CIP requirements and over 100 applicable requirement parts (sub-requirements), that would require an unbelievably huge amount of work for the CSP (applicable systems are medium or high impact BES Cyber Systems or BCS, Electronic Access Control or Monitoring Systems or EACMS, and Physical Access Control Systems or PACS). Moreover, the CSP would need to provide literal mountains (well, hills) of documentation, individualized for each NERC entity that is a customer.

Finally, I noted that CIP-013-2, the supply chain security standard, may furnish a way out of this mess in the near term, while still allowing the full 5+-year process of drafting new CIP standards (and probably changes to the NERC Rules of Procedure) to proceed in the background.

However, it turns out I made a mistake in both statements. In the first statement (really, a group of statements), I made the implicit assumption that a cloud service provider (CSP) would never “break the cloud model” by installing systems in scope for NERC CIP on dedicated, unchanging servers, even though the requirements were all written for such servers. But I’ve since learned that this might indeed be an option for CSPs. If BCS, EACMS and PACS never jump from physical device to physical device, CIP compliance suddenly becomes much easier.

In the second statement (about CIP-013), I followed the same implicit assumption – that systems in scope for CIP compliance would continually jump from server to server in the cloud, meaning there would be an unbelievably large number of servers in scope for CIP-013 compliance. However, if the servers are unchanging (and locked within a single Physical Security Perimeter subject to the requirements of CIP-006), this suddenly makes what I proposed in the post – that the NERC entity declare the CSP to be a vendor under CIP-013-2, and that the method for assessing their cybersecurity be examining their ISO 27001 certification evidence – much more realistic.

In other words, I now like my suggestion about CIP-013 even more than I did in April. It seems there could be a fairly easy way for usage of the cloud by medium and high impact BCS, EACMS and PACS to be “legal” relatively soon - maybe in two years, vs. the 5+ years that the full standards development process will take. Of course, the full process still needs to take its course, since it is very likely that the new Standards Drafting Team will want to see more evidence from the CSP than just their ISO 27001 certification, and since changes will be needed in CIP-002 to enable the “two-track” CIP compliance process I described in the first post in this series.

What will it take for this to happen? I think the NERC Board of Trustees will need to take some action to get this all moving. It would be nice to be able to say there is currently motion in that direction. However, I don't see that now. 

I do want to point out that the NERC Cloud Technical Advisory Group (CTAG) and SANS are going to sponsor a set of 6 or 7 webinars on use of the cloud by NERC entities, starting at the end of June and running biweekly through the end of August. That may raise some awareness.

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for the next few years, as well as beyond that? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Saturday, May 25, 2024

NERC CIP: The road to “Cloud CIP”, part 1


Last December, the NERC Standards Committee approved a Standards Authorization Request (SAR) that set in motion the process of making revisions to the NERC CIP Standards (and perhaps the NERC Rules of Procedure as well) that will finally allow NERC entities with high and/or medium impact BES environments to make full use of cloud services for those environments.

However, when I say “set in motion” I’m using that phrase loosely, since the committee assigned the project medium priority - meaning it would not even start until the third quarter of this year. I pointed out in this post that, because of all the cats that need to be herded for this project to succeed, it will probably take 5-6 years (at least) between the day the project starts and the day the barriers to full use of the cloud by NERC entities are finally removed.

I also pointed out there is growing concern among NERC and Regional Entity staff members about the steadily increasing numbers of software and service providers who are telling their NERC entity customers they no longer have the option of providing a totally on-premises solution. Those NERC entities will soon face the choice (or have already faced it) between doing without those software products and services, and being in violation of a slew of CIP requirements if they don’t move away from them.

These staff members fear that within 2-3 years there may be real damage to the reliability – and especially the security – of the Bulk Electric System. This is because some important NERC entities will no longer be able to utilize software and services (especially security services) that they depend upon today to keep the lights on. I speculated there might need to be some sort of “break glass” measure that would allow at least some NERC entities to utilize the cloud for high and medium impact BES environments, while still allowing the standards development process to proceed at its accustomed geologic pace (in fact, I suggested one such measure, which I think is still an option that needs to be discussed; it would not break much glass at all).

However, it seems the Standards Committee has been hearing about these problems from other sources as well, since last week there was an unexpected announcement that Project 2023-09: Risk Management for Third-Party Cloud Services has been set up and is now soliciting comments on the SAR. Of course, there’s a huge journey ahead, but it’s nice to see that the first step is being taken earlier than originally planned.

In October, I was invited to present for a monthly webinar (called a Tech talk) presented by the RF NERC Region; I chose as my topic the question of how I would rewrite the NERC CIP standards to “pave the road” to full use of the cloud by NERC entities. Lew Folkerth of RF – a good friend who has made regular appearances in this blog for almost ten years – interviewed me for the webinar.

As the basis for the webinar, I put together a lengthy article describing in some detail the changes I would make; I published it in this post (I also published a PDF of the article, which I’ll be glad to provide to anyone who emails me for it).

Of course, now that the standards drafting process is finally starting, it’s more important than ever to get ideas on the table for what the new standards should look like. The ideas in my article haven’t changed hugely since I wrote it, but I would like to make them more accessible by discussing them in a set of short posts; this is the first of those. Since I’m sure my ideas will evolve as the new Standards Drafting Team (SDT) meets and starts having substantive discussions, this might be a series that goes on for years.

Something like this is needed since, unlike almost every other NERC CIP standards drafting process since the CIP v1 drafting team started meeting in 2006, this process is not driven by a FERC order. Even though FERC staff members understand that the changes I’m hereby naming “Cloud CIP” are sorely needed, and even though they are providing assistance when they can, the drafting team doesn’t have an official FERC “blueprint” to follow. Instead, it is up to the drafting team to figure out what it wants to be when it grows up (the team hasn’t been constituted yet. If you work for a NERC entity, you might consider getting nominated to the team. Having been an active observer of several previous standards drafting efforts, I can promise it will take a lot of your time, but I can also promise it will probably be one of the most interesting efforts you’ve ever participated in).

I certainly can’t say I know exactly what is needed to solve the problem of CIP in the cloud, but at least the posts I write will help clarify people’s ideas. It’s almost impossible to get very far if you start with a completely blank slate, which is essentially what the drafting team has been presented with (the SAR rightfully doesn’t try to prescribe what the team needs to do). It’s better to start from what later proves to be a dead-end position than to start from no position at all.

My first topic in this series is an idea that I definitely didn’t originate, but which I now realize is probably the key to a successful Cloud CIP effort. This is an idea that the CIP Modifications drafting team learned the hard way in 2018. I hope to describe that bit of history in another post soon, but to summarize, that drafting team proposed a thoroughgoing change to CIP that in retrospect was exactly what’s needed to fix the cloud problem (it was actually intended to be a framework for integrating virtualization support into the CIP standards). However, the SDT’s proposal was going to require that every NERC entity throw away most of their existing CIP program (including documentation, training, software, etc.) and start with a brand new one.

The new CIP program that the SDT outlined (which I discussed in this and two subsequent posts) would have rewritten many of the CIP requirements so they were all risk-based. It was certainly the right overall approach, but a lot of big utilities, who had millions of dollars invested in their existing CIP programs and neither the budget nor the inclination to throw all of that away and start over, made it clear they would never do that. The drafting team realized they’d been beaten and dropped the whole idea.

I had been a big supporter of the drafting team’s ideas in 2018, but after they went down in flames, I decided there’s no fighting City Hall; I stopped advocating for those changes. About once a year, I put out a post stating that I saw no prospect for the cloud becoming completely “legal” for NERC entities until the NERC community had a change of heart and decided that the long-term benefit of having CIP requirements that would allow full use of the cloud was worth the short-term hassle of having to throw away their existing processes and start over.

However, early last year a new SAR was developed that was quite short on details but threw in one new concept which turned out to be the key to making Cloud CIP a real possibility. This SAR (which developed into the one that was adopted in December) raised the idea of two CIP “forks” for two different groups.

One group is the set of NERC entities (which might even be the majority, although I have no way to know if that’s the case now) that is perfectly fine with the existing CIP standards, and more importantly doesn’t want to make a radical change to what they’re doing now. They don’t particularly care about making full use of something they don’t think they need anyway: use of the cloud by medium and high impact BES Cyber Systems, Electronic Access Control or Monitoring Systems (EACMS), etc. The other group is NERC entities that are painfully aware of how much not being able to make full use of the cloud is hurting both their organization’s bottom line and increasingly their levels of reliability and security, as their most important vendors start to tell them they are moving to the cloud – and by the way, will they join them there?

For the first group, the solution is simple: They can keep doing exactly what they’re doing now. The CIP requirements they comply with won’t change at all, except for changes already proceeding that have nothing to do with the cloud. For the second group, the CIP changes will be big (including completely risk-based requirements), but only for systems they wish to “outsource” to the cloud – either by use of SaaS offerings or by actually transferring existing on-premises systems to the cloud. For their on-premises systems, there will be no change at all in the CIP requirements.

Does this two-track system sound like a big mess to you? I thought that might be the case, but when I looked at how it could be accomplished, I realized that in principle it’s not that hard. The principal changes required are a) defining new types of assets with “Cloud” in the name (e.g., “Cloud BES Cyber System”) and b) making some surprisingly minor changes to wording in CIP-002 R1 and Attachment 1. Almost no changes are required in the other CIP standards, since they will henceforth just apply to on-premises systems (i.e., what they apply to now). The requirements that apply to cloud systems will be found in new CIP standards that apply only to cloud-based systems.[i]

There’s a reason why the changes to the existing CIP standards to accommodate the two-track Cloud CIP system turn out to be so easy to describe. That’s a subject for one of the next posts in this series. I’m giving you something to look forward to.

Are you a vendor of cloud-based services or software (or services or software you would like to be cloud-based, were it not for problems like those discussed above), that would like to figure out an appropriate strategy for the next few years, as well as beyond that? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] NERC entities that choose to put systems in the cloud under Cloud CIP will still need to follow the “classic” CIP standards for their on-premises systems.

Sunday, May 19, 2024

Clarifying the Global Vulnerability Database

Brian Martin recently put up a long, thoughtful post on LinkedIn that critiqued my post from last November about what I called (and still call) the Global Vulnerability Database (GVD). That post was one of many I’ve written that started by being about one thing (in this case, a GVD that would be a single database) but evolved as I was writing the post into something else by the end (something that isn’t a single database at all, but more of an “intelligent switching hub” for evaluating vulnerability database queries and routing them among the most appropriate existing databases).

This evolution didn’t bother me, since one of the advantages of calling something you write a blog post, rather than an essay or a white paper, is that you don’t have to go back and rewrite the whole post when such an evolution occurs – people expect inconsistency (and I deliver it, I’m proud to say). I figured that it wasn’t worth rewriting the post unless it drew a lot of interest, so I let it stand as I’d written it.

It didn’t draw much interest until Brian wrote his post a couple of weeks ago. Brian has a lot of experience with vulnerability databases and has a good following for his thoughts, so his post drew a lot of comments.

When I read Brian’s post, I realized that a number of his objections wouldn’t have been valid if I’d rewritten the post so that it focused on the single idea of a switching hub, not an actual database – although I presume I won’t be thrown in jail if users think of it as a single database. The point is that it should be possible in 2024 to field diverse queries – covering different types of products (open source software, proprietary or “closed source” software, and intelligent devices), different vulnerability identifiers (CVE, OSV, GitHub Security Advisories, etc.), and different product identifiers (CPE and purl) – and have an intelligent engine that decides, for each query, which database or combination of databases is best placed to resolve it.

Once that decision has been made, the appropriate queries will go out to the different individual databases (CVE.org, the NVD if it still exists, OSV, OSS Index, VulnCheck, VulnDB, etc.). Then, the results will be processed and returned to the user as an answer to their question. There would need to be a lot of intelligence behind both of these steps, since they won’t be easy at all (and they will require quite a lot of prior knowledge, such as whether a report in OSS Index that a particular software product – identified with a purl – is affected by a CVE has the same status as a report in CVE.org that the same purl is affected by the same CVE, since they will have been derived very differently).
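To make the routing step a little more concrete, here is a minimal sketch in Python. The database names are real, but the routing rules, the query structure, and the function names are my own assumptions for illustration; nobody has agreed on a design like this yet.

```python
from dataclasses import dataclass

@dataclass
class Query:
    identifier: str      # e.g. a purl ("pkg:npm/lodash@4.17.21") or a CPE name
    product_type: str    # "oss", "proprietary", or "device"

def route(query: Query) -> list[str]:
    """Decide which back-end databases should receive this query.
    These rules are illustrative guesses, not the hub's actual logic."""
    targets: list[str] = []
    if query.identifier.startswith("pkg:"):      # purl -> OSS-oriented databases
        targets += ["OSV", "OSS Index"]
    if query.identifier.startswith("cpe:"):      # CPE -> CVE-oriented databases
        targets += ["CVE.org", "NVD"]
    if query.product_type == "device" and "CVE.org" not in targets:
        targets.append("CVE.org")                # device vulnerabilities are mostly CVEs
    return targets or ["CVE.org"]                # fall back to the broadest source

# The hub would then send the query to each target, reconcile the answers (e.g.,
# decide whether an OSS Index hit and a CVE.org hit describe the same finding),
# and return one consolidated result to the user.
print(route(Query("pkg:npm/lodash@4.17.21", "oss")))   # ['OSV', 'OSS Index']
```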

To rectify my sin of last November in not rewriting my post before I put it up, I put up a new post on May 9. This made a single, coherent statement, but still doesn’t include all the detail (such as what’s in the paragraphs above) that I would include if I had the time to write a white paper. I think this answers many of Brian’s questions (such as whether the GVD would be hugely expensive and require legions of volunteers. That would be the case if we tried to put up a single “harmonized” database that maintains data from all existing vulnerability databases, and that’s the reason why last November I switched in mid-post to the idea of a switching hub).

However, Brian brought up one important issue that I want to address now (he brought up others, which I hope were mostly addressed in my May 9 post. If there are other issues that you still want me to address, Brian, please let me know).

Is CPE a dead end?

Brian repeated a sentence from my November post, “Specifically, a new identifier is needed for proprietary software, since I (and others) regard CPE as a dead end, even though it was pioneering in its time.” Brian ended up basically agreeing with that statement, but his reasoning isn’t mine, and I’d like to describe that.

Pages 4-6 of the OWASP SBOM Forum’s 2022 paper on how to fix the naming problem in the NVD (a paper that is still valid today, even though the NVD now seems to be on its way to extinction or, worse, irrelevance) describe some serious problems with CPE. However, they don’t address what I consider the most serious problem: there will never be a way to populate fields like vendor and product name that everyone agrees on, without resorting to additional databases – which would themselves need to be constructed, maintained, etc.

For example, a CPE listing “Microsoft” as the vendor will be different from one listing “Microsoft, Inc”, which in turn will be different from one listing “Microsoft, Inc.” with a period, etc. Because a CPE won’t be found unless it exactly matches the CPE string being searched for, searching for a particular product or vendor always involves guessing about the choices made by the person (usually an NVD staff member, until recently) who created the CPE. The NVD may have criteria they follow (e.g., “Always put a comma before ‘Inc’ and a period after it”), but these are clearly just rough rules of thumb if they exist at all, since CPE names vary for seemingly random reasons.
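Here is a toy illustration of the exact-match problem (the CPE strings are invented; real CPE values are lower-cased and escaped, which only partly helps):

```python
# Three CPE names that a human reader would consider the same vendor and product,
# but that an exact-match search treats as three different things. All strings
# are invented for the example.
cpes_in_database = [
    "cpe:2.3:a:examplecorp:widget:1.0:*:*:*:*:*:*:*",
    "cpe:2.3:a:examplecorp_inc:widget:1.0:*:*:*:*:*:*:*",
    "cpe:2.3:a:examplecorp\\,_inc.:widget:1.0:*:*:*:*:*:*:*",
]

search = "cpe:2.3:a:examplecorp:widget:1.0:*:*:*:*:*:*:*"
matches = [c for c in cpes_in_database if c == search]
print(len(matches))   # 1 -- the other two spellings of the same vendor are missed
```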

Because, as Brian points out, the CNAs will probably be creating most CPE names from now on, and the CNA is often the developer of the product being named, this is in theory an improvement. Yet how is an end user supposed to find out whether the product I’m using to write this post is named “Microsoft Word”, “Word”, “Word 365”, or “Microsoft Office Word” in the CPEs that Microsoft (which is a CNA) creates? Even worse, the name might vary by the division within Microsoft that creates the CPE, etc.

You might say something like, “What does Microsoft call the product on their web site?” And I ask, which of the Microsoft web sites are you referring to? Is Microsoft going to enforce standard naming across all web sites worldwide? And what about blog posts on the Microsoft sites? Will they follow some sort of internal Microsoft standard? Etc., etc.

What some people, including some who should know better, have suggested is that there should be a centralized database of product names, company names, version strings (since versions can be identified in many ways), etc. Then “all you have to do” to find the correct CPE is look up the company name, product name, and version string (which also varies a lot) in the directory. The company will hopefully rigorously enforce use of their chosen names, and the CNAs will be severely disciplined if they use any other name for their products in a CVE report…And while we’re at it, the lion will lie down with the lamb and I will study war no more and people will stop having loud cell phone conversations on trains; that is, all the world’s problems will be solved…

By the way, who will pay for that inordinately expensive database of product and company names? It will cost a huge amount of money, both to put together and to maintain – much more than the cost of the NVD and CVE.org databases combined. Face it: an identifier that requires an expensive auxiliary database to make it work is a dead end. Even if all the other problems with CPE didn’t exist, this alone would ultimately sink it.

This is why the OWASP SBOM Forum recommended purl as the replacement for CPE in our 2022 paper. The paper goes to great lengths to explain why purl is better, but the main reason is that no lookup is required. As long as you know the package manager (or source repository) you downloaded an open source component from, as well as its name and version string in that package manager, you can create a purl that will always locate the exact component in a vulnerability database. This is why purl has won the battle to be the number one software identifier in vulnerability databases worldwide, and is for practical purposes the only alternative to CPE.
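To illustrate “no lookup required”: the facts you already have from installing a component are enough to assemble its purl. Below is a minimal sketch; the packages shown are just examples, and real code would normally use the packageurl-python library, which handles encoding and qualifiers properly.

```python
def make_purl(pkg_type: str, name: str, version: str, namespace: str = "") -> str:
    """Assemble a purl from facts the user already has: the package manager
    (the purl "type"), the package name, and the version. No central registry
    lookup is needed. Simplified sketch: percent-encoding and qualifiers are
    omitted; the packageurl-python library handles those details."""
    ns = f"{namespace}/" if namespace else ""
    return f"pkg:{pkg_type}/{ns}{name}@{version}"

# The same facts you used to install a package are enough to name it:
print(make_purl("npm", "lodash", "4.17.21"))
# pkg:npm/lodash@4.17.21
print(make_purl("maven", "log4j-core", "2.17.1", namespace="org.apache.logging.log4j"))
# pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1
```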

Currently, there are no purls in CVE.org. However, the fact that CVE now supports purl in CVE Record Format 5.1 (formerly the “CVE JSON spec v5.1”) – a change requested by the SBOM Forum two years ago – means there will be purls once the CNAs start adding them to their CVE reports (which unfortunately will probably not be soon, given the substantial training that will need to be conducted).

However, there is one big fly in the purl ointment: It currently doesn’t support proprietary (or “closed source”) software. Our 2022 paper did suggest a solution for that problem (proposed by Steve Springett, who is a purl maintainer, among many other things): There should be a new purl type called SWID, which will be based on the contents of a SWID tag created by the supplier. Anybody with the SWID tag for the product they want to inquire about (and for at least a few years, some big software suppliers like Microsoft included a SWID tag with the binaries for all of their products) will be able to create exactly the same purl that the supplier used to report the vulnerability. In fact, Steve got the SWID type added to purl.

What’s preventing this from being the solution for naming proprietary software is that there’s no good way for an end user, who might not have access to the binaries of a product they’re using – or who is using a legacy product that doesn’t have a SWID tag – to find the tag, if there is one.

I think this is a solvable problem, but it will depend – as a lot of worthwhile practices do – on a lot of people taking a little time every day to solve a problem for everybody. In this case, software suppliers will need to create a SWID tag for every product and version that they produce or that they still support. They might put all of these in a file called SWID.txt at a well-known location on their web site. An API in a user tool, when prompted with the name and version number of the product (which the user presumably has), would go to the site and download the SWID tag – then create the purl based on the contents (there are only about four fields needed for the purl, not the 80 or so in the original SWID spec).
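Here is a minimal sketch of what such a user-side tool might do, assuming the supplier publishes its SWID tags at a well-known URL. The URL pattern, the JSON layout of SWID.txt, and the field names are my assumptions for illustration; only the purl “swid” type itself is real, and its exact syntax is defined in the purl spec.

```python
# Schematic sketch only: the SWID.txt location, its JSON layout, and the tag field
# names are assumed for illustration. Consult the purl spec for the exact "swid"
# type syntax.
import json
import urllib.request

def fetch_swid_tags(supplier_domain: str) -> list:
    """Download the supplier's (hypothetical) SWID.txt from a well-known path."""
    url = f"https://{supplier_domain}/.well-known/SWID.txt"   # assumed location
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())   # assume a JSON list, one entry per product/version

def purl_from_swid(tag: dict) -> str:
    """Build a purl of the "swid" type from the handful of tag fields needed
    (simplified; a real implementation must encode each component properly)."""
    return f"pkg:swid/{tag['vendor']}/{tag['name']}@{tag['version']}?tag_id={tag['tagId']}"

def purl_for_product(supplier_domain: str, product: str, version: str):
    """Return the purl for a product/version the user already knows, or None."""
    for tag in fetch_swid_tags(supplier_domain):
        if tag["name"] == product and tag["version"] == version:
            return purl_from_swid(tag)
    return None   # the supplier hasn't published a tag for this product/version
```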

There can be other solutions like this as well, and they don’t even have to be based on SWID tags (as long as they’re based on purl). The point is that we should no longer have to rely on a software identifier like CPE, that requires a separate database (or databases) to work. Of course, since there are so many CVE reports that have only CPEs on them (in fact, I think they all do today), it will be years (if not decades) before we can finally be done with CPE. But we should try to move to purls as soon as possible, so we can at least stop the bleeding.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

Monday, May 13, 2024

How do we replace the NVD? Should we replace it?

At the OWASP SBOM Forum’s weekly meeting on Friday, we heard what the NVD’s big problem is: it no longer has funding. It turned out that the collapse on February 12 really was due to a funding shortfall – a massive shortfall, although perhaps not all the way to zero. Some people had suggested that the problem was cutbacks by NIST. I pooh-poohed that idea, since budgets are set before the beginning of the government’s fiscal year, which starts on October 1.

However, it turns out I didn’t know where the lion’s share of the NVD’s funding comes from. And guess what? It doesn’t come from NIST. It was that other source that abruptly cut off the NVD’s funding in February. At that point, the NVD probably had to release some staff members to join other projects at NIST (as I’m sure you know, NIST does lots of work in cybersecurity. There probably are always openings on other projects. I doubt anyone from the NVD was put on the street, unless they decided to quit). The NVD kept “enriching” a small number of CVEs, but at the meeting on Friday, someone (I think Andrey Lukashenkov of Vulners) said they hadn’t done any enriching for ten days.

This means that they (i.e., whoever is still working for the NVD) are now just trying to maintain the database they have. So, if you search the NVD for vulnerabilities applicable to a product you use, don’t expect to find any recent ones. Of course, those are the vulnerabilities most users are concerned about. The NVD without data on recent vulnerabilities is about as useful as a car without a steering wheel.

Why did that other source cut off their funding? I don’t know, and finding out isn’t a big priority for me. What is a priority is deciding where the software security community goes from here, as well as what options are available to organizations concerned about identifying vulnerabilities in the software they use.

As luck would have it, I’d realized a couple of months ago, after the NVD’s problems became apparent, that there were too many individual threads that needed to be pulled for there to be succinct answers to these questions; therefore, there needed to be a group effort to pull the threads and put together a coherent picture. This is why I formed the Vulnerability Database Working Group of the OWASP SBOM Forum. I summarized my idea of the group’s agenda in this post. I still think it’s a good agenda, although the group will decide what we want to discuss and what document(s) we want to produce – and those ideas are likely to change as we go along.

However, there has been one significant change since I wrote that post. Then, it was unclear what (if anything) CVE.org would do to step up to replace the NVD. It’s now clear that they are doing a lot. There have been two important changes:

The first change is that CVE Numbering Authorities (aka CNAs. They include groups like GitHub and the Linux kernel, as well as software developers like Red Hat and Microsoft, who report vulnerabilities in their own products as well as other products that are in their scope. The CNAs report to CVE.org) will now be encouraged to include in their CVE reports the CPE name for the product, the CVSS score and the CWEs. As the announcement by CVE.org points out, “This means the authoritative source (within their CNA scope) of vulnerability information — those closest to the products themselves — can accurately report enriched data to CVE directly and contribute more substantially to the vulnerability management process.” It never made sense for the NVD to create these items – or override what the CNA had created, which often happened.

The other significant change is that CVE.org now supports what was previously called v5.1 of the CVE JSON specification, but is now called the “CVE Record Format v5.1”. In early 2022, Steve Springett and Tony Turner of the OWASP SBOM Forum submitted a pull request to CVE.org to get purl identifiers included in the CVE spec. We missed the v5.0 spec, but made it into v5.1. This means that, with some training, CNAs will be able to include purls in their CVE reports (although they may still have to include CPEs), and software users (as well as developers) will be able to find vulnerabilities in open source products using purls (this will also be a boon to open source software vulnerability databases like OSS Index and OSV).

Our reasons for advocating purl were described in this white paper, which we published in September 2022 (see especially pages 4-6, where we describe some of the many problems with CPE). In that paper, we also described a way to handle proprietary software in purl (since purl currently applies exclusively to open source software), based on use of SWID tags – but the mechanism for making the SWID tags known to software users was left open. This isn’t a technical problem but an organizational one, since it might require creating a spec for software developers to follow when they want to advertise SWID tags for their products. It would be nice to see that done sometime, but I don’t have the time to lead it now.

The third topic in that paper was an identifier for intelligent devices, since CPE is deficient in that area as well. We suggested the GTIN and GMN identifiers, now used widely in global trade. I didn’t think we were even going to push those identifiers at the time, yet it seems that Steve and Tony must have included them in the pull request, because CVE says they’re supported in 5.1 as well! If CNAs start including these in CVE reports for devices, that might significantly change the whole “discipline” of vulnerability management for devices, which is frankly not making much progress now. This is because only a very small percentage of device makers report vulnerabilities in their devices to CVE.org today.

I’m encouraged that CVE.org is picking up their game. However, an organization that currently uses the NVD exclusively for vulnerability data would not be well advised to simply switch their total dependence over to CVE.org, since the “front end” capabilities needed for those organizations to make use of the data are less robust in CVE than they are (or were) in the NVD.

We will be discussing these and similar issues in the OWASP Vulnerability Database Working Group in our meetings tomorrow and every two weeks thereafter. If you would like to join the group, drop me an email.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Saturday, May 11, 2024

NERC CIP and the cloud: The Regions clearly state the problem

I and other consultants have been pointing out the serious problems being caused by the fact that the current CIP standards effectively prohibit use of the cloud for medium and high impact systems. This is not because the requirements explicitly forbid cloud use – indeed, the current requirements say nothing at all about the cloud - but because cloud service providers would never be able to provide the evidence required for a NERC entity to prove compliance with prescriptive CIP requirements like CIP-007 R2 and CIP-010 R1.

Now, two highly respected staff members from the Regional Entities, Lew Folkerth of RF and Chris Holmquest of SERC, have written an excellent document that states the problem very clearly. It was published in the newsletters of both Regions: SERC (https://serc1.org/docs/default-source/outreach/communications/serc-transmission-newsletter/2024-newsletters/2024-05-newsletter/n-the-emerging-risk-of-not-using-cloud-services_2024-05.pdf?sfvrsn=e6923606_2 ) and RF (https://www.rfirst.org/resource-center/the-emerging-risk-of-not-using-cloud-services/ ).

I am reproducing the article below with permission from both Regions. I have highlighted two paragraphs I think are especially important. The one problem I have with this document is that it doesn’t emphasize the most serious issue here: it will likely take a minimum of 5-6 years for the new standard(s) to be developed and approved, along with whatever changes to the NERC Rules of Procedure will probably be required, and for all of this to take effect.

Yet, given the accelerating pace at which software and service providers are moving to the cloud, it is likely there will be negative impacts to grid security and reliability within a couple of years. There will probably need to be a shortcut available soon that allows some NERC entities to continue using software and services (especially cybersecurity services) they would otherwise have to stop using. We don’t want the NERC CIP standards, which were implemented to improve grid reliability and security (and have done so, to be sure), to become the biggest impediment to that improvement.

If you have questions for either Region, you can get them answered here: SERC (https://www.serc1.org/contact-us) and RF (https://www.rfirst.org/contact-us/). Also note that the project page for the new NERC “Risk Management for Third Party Cloud Services” standards development project – which will begin operations in Q3 – has been created. You should bookmark this and check it regularly.  

 

THE EMERGING RISK OF NOT USING CLOUD SERVICES

By: Chris Holmquest, Senior Reliability and Security Advisor, SERC, and Lew Folkerth, Principal Reliability Consultant, RF

In the ERO, we are seeing forces that foretell an inevitable move to cloud-based services for many operational technology (OT) applications and services. Cloud technology has been advancing for many years, and software and service vendors are now migrating their products to take advantage of this new technology. Even when our industry addresses the security concerns of this migration, there will still be compliance concerns. We will share the efforts underway to identify the risks to reliability, security, and compliance that our industry must address before we can move forward in this area.

Security challenges for on-premises OT systems

Vendors of security monitoring, asset management, work management, and other essential services are moving toward cloud-based services at a very rapid pace with their applications and infrastructure. This brings a new risk to light: soon we may be seeing end-of-life notices for on-premises systems, which translates to lessened or non-existing support, including security patches. Some members of our industry have already observed that new and important features are being implemented only in the cloud-based offerings.

Entities are looking at the potential benefits that cloud-based software and services can bring. As entities in our industry are challenged to acquire sufficient resources to manage their reliability, security, and compliance risks, cloud services can offer attractive solutions to manage these risks while lowering costs in capital investment and support.

Moving to the cloud presents risks as well, not the least of which is being confident that your systems and data are secure. Even when you are confident in the security of your systems and data, you will still face compliance risks. 

Compliance challenges for OT cloud services

The use of cloud services will not be possible for high and medium impact BES Cyber Systems under the present CIP Standards because compliance risk will be increased beyond an acceptable level, except for BES Cyber System Information in the cloud. New Reliability Standards will be required, and those standards will need to be risk-based. There are too many variables in cloud environments to be able to write prescriptive standards for these cases.

Your compliance processes will need to be very mature and integrated with operational processes and procedures. Internal controls will become even more important.

Auditing processes will need to be adapted to cloud environments to determine the type, quality and quantity of evidence that will be needed to provide reasonable assurance of compliance. 

The path forward

There are efforts underway to help with this complex dilemma. We are looking at these various issues and have formed an ad-hoc team of Electric Reliability Organization and Federal Energy Regulatory Commission staff, cloud service provider vendors, industry consultants, training experts, and electric industry security, compliance, and management personnel. This team is providing ad-hoc support to other existing groups working to advance the use of cloud technologies. So far, these efforts include work on a series of industry webinars to address issues with using cloud in our OT and CIP environments. Awareness of cloud technologies for our systems is crucially important, and these webinars will be designed for a broad audience. Efforts also include a field test of a cloud-based system and investigating third-party assessments, which may be essential to accommodate the CIP Standards with a cloud system.

There is a formal NERC subcommittee under the Reliability and Security Technical Committee called the Security Integration and Technology Enablement Subcommittee (SITES). Registered entity staff and vendors are members of this group, and they have published a white paper called “BES Operations in the Cloud” that we recommend.

A SITES sub-team, New Technology Enablement (NTE), is in the process of creating a series of white papers to help move the standards development effort from a stance that follows technology developments after the fact, to a leading process where standards development is part of early adoption of applicable technologies. The goal of NTE is to enable use of the best available tools and techniques in our most critical systems. Their first effort will be a paper titled “New Technology Enablement and Field Testing.” 

Getting involved

The ability to use cloud services to reduce security risk and to improve reliability and resilience is important to the future of our industry.

We suggest that you read the SITES white paper and consider volunteering to participate in the SITES and/or NTE groups if you would like to contribute.

SANS, the well-known security training organization, will be hosting the series of webinars mentioned above. Please watch for the announcements for these webinars. Also, there is a recorded SANS Summit Panel discussion (link below) of this risk and possible directions forward.

A new standards development project, Risk Management for Third-Party Cloud Services, has been established (see link below). This project is scheduled to become active in the third quarter of 2024.

Please stay abreast of these developments and consider how your knowledge and industry experience can contribute to these efforts. 

References

• Security Integration and Technology Enablement Subcommittee (SITES)

• White paper: BES Operations in the Cloud

• SANS Summit Panel – We Hear You Cloud and Clear

• 2023-09 Project – Risk Management for Third-Party Cloud Services

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Thursday, May 9, 2024

Is it time to seriously discuss the Global Vulnerability Database?


Last August, I wrote a post called “A Global Vulnerability Database”. In the post, I wrote about a disappointing experience the SBOM Forum (now the OWASP SBOM Forum) had recently had with Tanya Brewer of NIST, the leader of the NVD. We had reached out to her in early 2023 and offered to help the NVD, especially regarding implementing the recommendations in the white paper we published in 2022 on how to fix the NVD’s problems with software naming.

We first met with Tanya in April 2023. She said then that she’d like to put together a “public/private partnership” so that we and one or two private corporations – who had also offered to help them – would have a structured way to do that. When she talked to us again in June, she had worked out with NIST’s lawyers the idea for a “Consortium” which organizations could join to help the NVD – although it seemed the main help she wanted was for companies to provide her warm bodies, which were attached to minds that knew something about coding.

Those bodies would need to be ensconced at the NVD for at least six months. The first couple of months would be spent learning some obscure programming language I’d never heard of (which says something, since I wrote my first couple of programs on punch cards). Offhand, suggesting that young, ambitious coders spend six months immersed in a decades-old language, probably used nowhere but the NVD, didn’t seem to me to be wise career advice. However, we all appreciated her enthusiasm.

She described in detail her plans to announce the Consortium in June, have it published in the Federal Register in August, and get it up and running by the end of the year. Those plans seemed very unrealistic but again, we appreciated her spirit.

However, that spirit seemed to dissipate quickly; in fact, the next we heard about the Consortium was in NIST’s announcements about the NVD in late February, when NIST pointed to the Consortium as the cavalry that was at that minute galloping through the sagebrush on their way to rescue the NVD from its problems. However, it seems the cavalry lost its way, because – despite Tanya’s promise at VulnCon in late March that the Consortium would be announced in the Federal Register imminently – nothing more has appeared on that front (I think the cavalry got enticed by Las Vegas when they galloped through it. They haven’t been heard from since).

By the time I wrote the post last August, I was already disappointed that we hadn’t heard back at all from Tanya, and I began to wonder if just maybe the NVD wasn’t quite the pillar of the cybersecurity community that we believed it was.

This was why I started thinking more about an idea we’d talked about a few times in the SBOM Forum: the need for a vulnerability database that wouldn’t be subject to the vagaries of government funding, in an era when a bill to name a post office after George Washington might prove too controversial to get through Congress and the US has to decide every couple of years whether it wants to pay its debts at all.

In our discussions, we had agreed:

1.      This should be designed as a global database from the start. Governments and private sector organizations worldwide would be welcome to contribute to it, but we didn’t want the database to be dependent on one primary source of funding, especially a government one. Been there, done that, got the T-shirt.

2.      The database should follow the naming conventions we suggested in our 2022 white paper, or something close to them. Of course, there are still a lot of details to be filled out regarding those suggestions.

3.      Of course, putting all this data in one database, and maintaining it over time, would require a huge effort and a lot of support (including financial support). However, given the fact that so many organizations worldwide have been using the NVD for free for many years, and had an overall good experience, without once being asked to contribute a dime, I was – and remain – sure that the support will be there when we ask for it. My guess is there’s a lot of pent-up “contributors’ demand” that needs to be satisfied.

4.      The database would need to be developed and implemented by a separate non-profit corporation, but once it was up and running smoothly, it could be turned over to an international organization like IANA, which assigns IP addresses and runs DNS (how many hundreds or thousands of times do you use IANA’s services every day, without once thinking about who enables all of that?).

While I didn’t write about the GVD again until November of last year, I kept thinking about it. One thing I realized was that the idea of a “harmonized” database – with a single type of identifier for software and devices, as well as a single identifier for vulnerabilities (many people don’t realize there is any other vulnerability identifier besides CVE, but there are many of them, albeit most not widely used) – was not only very difficult to implement but also completely missed the point: there are diverse identifiers because their subject matter (in this case, software products and intelligent devices, as well as vulnerabilities) is diverse.

A good example of this is purl vs. CPE as an identifier for open source software (currently, purl only works with OSS, although it has literally taken over the OSS world). A single OSS product and version may be available in different package managers or code repositories, sometimes with slight differences in the underlying code. The purl for each package manager and repository will be different from the others (since the mandatory purl “Type” field is based on the package manager). Yet, there will be only one CPE in the NVD and it won’t refer to the package manager, since there’s no CPE field for that.
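A small illustration of that point (the package, vendor, and version strings are invented; only the purl and CPE formats are real):

```python
# The same nominal product and version, distributed through two ecosystems, gets
# two different purls, because the purl "type" component names the package manager.
# The single CPE has no field that records the ecosystem at all.
# (All names and versions below are invented for illustration.)
purls = [
    "pkg:pypi/exampletool@3.2.0",   # the PyPI distribution
    "pkg:npm/exampletool@3.2.0",    # the npm distribution, possibly slightly different code
]
cpe = "cpe:2.3:a:examplevendor:exampletool:3.2.0:*:*:*:*:*:*:*"

# Any attempt to map the one CPE to a purl has to pick an ecosystem the CPE never
# recorded, which is why a clean one-to-one mapping is impossible.
for purl in purls:
    print(f"{cpe}  <->  {purl}")
```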

Thus, there can never be a one-to-one “mapping” of CPEs to purls (or vice versa), meaning harmonization of OSS identifiers in a single database would be impossible. Similar considerations make it impossible to have a single identifier for vulnerabilities, with all other identifiers mapped to it.

But I also knew that harmonization of identifiers – while probably an absolute requirement of database technology in, say, the 1970s – is hardly necessary today, since the last time I checked it was 2024. Today, a single database can easily carry multiple identifiers for a single product without breaking a sweat.

But that led to another idea: Why do we need a single database at all? After all, since the different identifiers are often used in specialized databases dedicated to particular types of vulnerabilities or software, and since these databases are often compiled by people with specialized knowledge of the contents, it would be a shame to try to homogenize those data types and especially those people. Instead of the richness and diversity available today, we would get a bland product produced by bland people, in which a lot of the detail had been flushed away in the name of harmonization.

However, even though we have a rich variety of vulnerability databases available today, it’s hard for most people to understand how they can be used together. (The SBOM Forum’s Vulnerability Database Project will be cataloging them and suggesting how they can be used in combination, since in the near term there will be no single database with the scope of the NVD; we may start discussing the GVD in a few months. If you’re interested in joining that group – which meets bi-weekly on Tuesdays at 11AM ET – drop me an email.)

That’s why there needs to be a single intelligent front end that directs queries to the appropriate database(s). That front end will be called the GVD, but – and please keep this a close secret between you and me – it will in fact simply be like the Wizard of Oz: a single man standing behind the curtain, turning knobs and pulling levers to give the impression of a single, massively efficient database. Pretty slick, no?
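For the curious, here is a minimal sketch of that front-end idea. Nothing in it is a real GVD design; the backend names are placeholders I invented, and a real implementation would obviously need to consult several databases and merge the results:

# The caller sees one interface; the "wizard" behind the curtain decides
# which specialized database should answer each query.
def route(identifier: str) -> str:
    """Pick a backend based on the identifier scheme."""
    if identifier.startswith("pkg:"):     # a purl: send it to an OSS-oriented database
        return "oss_vuln_db"
    if identifier.startswith("cpe:"):     # a CPE: send it to an NVD-style database
        return "nvd_style_db"
    return "fallback_db"                  # anything else

for ident in ["pkg:pypi/examplelib@2.1.0",
              "cpe:2.3:a:exampleorg:examplelib:2.1.0:*:*:*:*:*:*:*"]:
    print(ident, "->", route(ident))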

So, I wrote this post in November. I’m pleased to say that it was met with massive, universal…indifference. Of course, that happens to a lot of my posts. I didn’t expect anything different this time, since there was nothing on the horizon that might make people think we needed to start thinking about a different long-term database solution…

…Until February 12, when the NVD seems to have been swallowed by a black hole – and three months later, we still don’t even have a coherent explanation for the problem, let alone the beginning of a fix. Early this week, Brian Martin posted a long, thoughtful analysis of my November post on LinkedIn. He raised some very good questions. I promised to answer them in a few days, but I then decided I’d first like to put up a post that explains how I and other members of the OWASP SBOM Forum came up with the idea for the GVD.

Here’s that post. I’ll put up a new post with the answers to Brian’s questions by early next week. Please contain your excitement until then.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Monday, May 6, 2024

It seems I was wrong about medical devices

In January, I pointed out that at least eight of the top ten medical device manufacturers worldwide have either never reported a vulnerability to CVE.org or have reported only a small number of them. I said this not as direct criticism of medical device makers, but as an illustration of the fact that intelligent device makers of all types are mostly not reporting vulnerabilities at all. After all, if medical device manufacturers – which are subject to the most stringent cybersecurity regulations worldwide – aren’t reporting vulnerabilities, what hope is there that manufacturers of baby monitors will do so?

Why is it important that the developer of the software or the manufacturer of a device report the vulnerability themselves? The reason is simple: a very high percentage of vulnerabilities are reported by the developer or manufacturer; if they don’t report one, in most cases nobody else will. This means that when a user of the device searches a vulnerability database to learn about vulnerabilities found in the device, they will never find any.

I want to note that nobody is saying that any software developer or device manufacturer should report a vulnerability for which a patch isn’t yet available, except in extraordinary cases like when a vulnerability is already being widely exploited. In those extraordinary cases, user organizations like hospitals should be able to learn that products they use have that vulnerability, so they can mitigate the threat using other measures (like removing the device from their network altogether), pending availability of the patch.

At least one major medical device maker (MDM) has told me they report vulnerabilities in their devices on their customer portal so their customers can learn about them, although they admit this isn’t a foolproof method to keep this information out of bad hands. They may correlate one of those vulnerabilities with a published CVE number, but if they don’t report the vulnerability to CVE.org, a search on a public vulnerability database will never yield the fact that their device is vulnerable to that CVE.

Of course, this means that nobody other than a customer of a medical device can learn of vulnerabilities in it, and nobody (whether a customer or otherwise) will be able to compare competing devices to learn whether they have the same vulnerabilities. But of course, this might be a good thing. After all, if none of your competitors are reporting vulnerabilities (and in most databases there’s no way to tell the difference between a device that has never had a vulnerability and one that’s loaded with them but whose manufacturer has never reported a single one to CVE.org), who wants to stand out by reporting them?

At our most recent OWASP SBOM Forum meeting, we were discussing this problem, and I said I didn’t think there was a good excuse for the fact that the MDMs usually aren’t reporting vulnerabilities in their devices. At that point, the device security manager for one of the most prestigious hospital organizations in the US provided a very good reason why the MDMs don’t report them (I’ll point out that I’ve known this individual through the NTIA and CISA SBOM efforts since 2020; in general, he doesn’t trust MDMs as far as he can throw them):

1.      Hospitals, like many other organizations (although probably more so), are seriously backlogged in applying security patches. This is partly because, unlike in a lot of organizations, it is very hard to take a device down when it’s time to apply a patch, since the devices are often hooked up to patients – and nobody wants to see a technician disconnect Grandma’s infusion pump to apply a patch!

2.      If the MDM follows the usual practice of reporting a vulnerability only after they have released a patch for it, it’s likely there will be a significant time lag between the vulnerability report and when most devices are protected by the patch.

3.      Of course, this would pose a serious risk to patients. And I'll point out that the same reasoning applies to electronic relays that control the power grid, devices in pipeline pumping stations, etc.

But another serious risk to patients is being hooked up to a device that carries vulnerabilities that have been there for a year or two, and whose existence has almost certainly become known to the bad guys by now. There needs to be some deadline by which the hospitals will either have to patch the device or take another mitigation step, like removing the device from their network altogether. Maybe that would be six months for vulnerabilities that aren’t being actively exploited, but three months for vulnerabilities that are on the CISA KEV (Known Exploited Vulnerabilities) list.

If the hospital can’t meet those deadlines, it will have to invest in enough extra devices that it can take the vulnerable ones off its network, so it doesn’t have a vulnerable device sitting there forever without anyone outside of the manufacturer and the hospital knowing about it.
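As a practical aside, checking whether a given CVE is on the KEV list is easy to automate. Here is a minimal sketch; the feed URL and the field names reflect CISA’s public KEV JSON feed as I understand it today, and could of course change:

import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def on_kev_list(cve_id: str) -> bool:
    # Download the current KEV catalog and look for the CVE ID in it.
    with urllib.request.urlopen(KEV_URL) as resp:
        kev = json.load(resp)
    return any(v.get("cveID") == cve_id for v in kev.get("vulnerabilities", []))

# Hypothetical usage, applying the two deadlines suggested above.
cve = "CVE-2021-44228"     # Log4Shell, which is on the KEV list
deadline_months = 3 if on_kev_list(cve) else 6
print(f"{cve}: patch or otherwise mitigate within {deadline_months} months")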

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.