Sunday, October 30, 2022

Where’s them SBOMs, now that we really need them?

 

Practitioners of the black arts of software security are collectively sighing and saying, “Here we go again”, after the OpenSSL Project announced[i] late last week that they will release a new version of the almost ubiquitous OpenSSL cryptographic library on Tuesday. It will patch a “critical” new vulnerability in that library, about which they won’t release any information until then.

Since there’s no patch available and the Project didn’t describe any temporary mitigation (presumably because even that might have provided enough information to the bad guys that they could have figured out how to exploit the vulnerability – although this is just my speculation), the question naturally arises, “What was the point of announcing it now? Was it so security professionals could quickly submit their resignations and move on to their new career in fast food delivery?”

No, the reason was that the OpenSSL Project knew that organizations that use OpenSSL would experience the same problem they had with earlier vulnerabilities like Heartbleed (the 2014 OpenSSL vulnerability that led to a massive effort to find and patch it – although there are probably still a lot of unpatched instances of the vulnerable versions out there), the Apache Struts vulnerability (which Equifax unfortunately didn’t get the memo on, resulting in the compromise of private information of more than ¾ of adults in the US, mine included), Ripple20, and of course the beloved log4j.

That problem is that OpenSSL is a component used in other software products, or in components of products, components of components of products, etc. – yea, unto the 20th generation. No organization has an invoice from a software supplier that identifies OpenSSL as something the organization bought and installed. How does the organization figure out how many instances of OpenSSL they’re running and where they’re running?

The answer is simple: they look at the most recent (hopefully quite recent) software bill of materials, or SBOM, that they received from each software or intelligent device supplier whose product(s) they use. These will tell them what the components of each product are (although probably only the first-level ones). So, while you’re thinking of it, go find the SBOM for every software product or device that your organization operates. This should only take you five minutes, right?

Of course, this was a joke. I doubt there are many organizations outside of the military and intelligence agencies, which have virtually unlimited budgets, who have an SBOM even for the majority of software products they operate – and if they do, they probably created the SBOMs themselves (or a consulting organization they hired created them). In fact, I’m sure there are many large organizations that don’t have an SBOM for any of the software they operate.
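To be clear, if an organization did have current SBOMs on hand, the mechanical part of the search would be the easy part. Here is a minimal sketch of it, assuming (and it’s a big assumption) that you’ve collected CycloneDX JSON SBOMs for your products in one directory; the directory name and the name-matching heuristic are mine, and real-world component names for OpenSSL vary quite a bit.

```python
# Minimal sketch: scan a directory of CycloneDX JSON SBOMs for OpenSSL components.
# The directory name ("sboms/") and the name-matching heuristic are assumptions;
# real SBOMs may identify the library via purl or CPE, or spell it "libssl", etc.
import json
from pathlib import Path

def find_openssl(sbom_dir: str = "sboms"):
    hits = []
    for path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(path.read_text())
        product = sbom.get("metadata", {}).get("component", {}).get("name", path.stem)
        for comp in sbom.get("components", []):
            name = (comp.get("name") or "").lower()
            if "openssl" in name or "libssl" in name:
                hits.append((product, comp.get("name"), comp.get("version")))
    return hits

if __name__ == "__main__":
    for product, comp, version in find_openssl():
        print(f"{product}: contains {comp} {version}")
```

The point, of course, is that the ten lines of code aren’t the obstacle; having the SBOMs in the first place is.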

If you were around for the log4j effort at the end of last year, you’ll know that almost nobody had SBOMs then. But you probably heard assurances from a number of people (which could have included me, although I don’t recall giving such assurances) that by now SBOMs would be much more widely available than they were then. There was a specific reason why we said this: we knew that the SBOM provision in Executive Order 14028 was going to come into effect in August. It required federal agencies to start asking for an SBOM from each supplier of “critical software” by this past August 10.

Of course, the EO only applied to federal agencies. But we all supposed (or I did, anyway) that software suppliers weren’t going to start providing SBOMs to one large group of their customers (the agencies), but not to an even larger group (private sector organizations). Surely, we reasoned, this would be the beginning of a “beautiful friendship” (to quote Humphrey Bogart at the end of Casablanca) between software suppliers and end users – that is, an area of close cooperation between them, to address the common problem of software component security.

It turns out that we reasoned wrong. While I’m sure that at least some suppliers did provide an SBOM on request, I know of no supplier of software or intelligent devices anywhere on this planet (I’ll admit I haven’t checked with other planets) that’s regularly providing SBOMs to their users. It’s great that some of them provided one SBOM when asked.  However, SBOMs age like milk, not wine.

The official guidance is that an SBOM should be re-issued whenever the software changes, including a new patch, new build, etc. In practice, a software user should consider themselves quite lucky if one of their suppliers provides a new SBOM with every new major version release. Of course, if the only SBOM you have for a product is an old one, don’t throw it away; it’s still worth looking at. But before you take action (or don’t take action) based on something you find in an old SBOM (like OpenSSL), you’ll need to check with each of your software suppliers to find out whether OpenSSL 3.0 is present in their product, and whether the vulnerability (to be announced on Tuesday) is exploitable in their product. After all, in over 90% of cases a vulnerability in a component isn’t exploitable in the product itself.

Why aren’t SBOMs being released? It’s not because suppliers don’t have them; they’re using them very heavily now to manage component vulnerabilities in the products they develop. See this post, which describes how just one open source tool, Dependency-Track, is being used 270 million times a month to look up around 27 billion open source software components in the OSS Index vulnerability database; almost all of that use is by software developers.

Rather, the main reason SBOMs aren’t being released is that end users aren’t asking for them. Why aren’t they doing that? There are a number of reasons, which I divide into “show stoppers” (problems that need to be addressed before SBOMs are widely distributed and used by non-developers) and others. Here are some of the show stoppers (I think there are around 6 or 7 in all, but I might be neglecting one or two). Note that not all of these problems need to be solved before SBOMs can start being used with regularity; however, users and suppliers will need to be confident that a solution to each is at least in the works:

1.      There are no tools or subscription-based third-party services that import SBOMs and VEX documents and output a continually updated list of exploitable component vulnerabilities. If such a tool were available, a user could at any time obtain an up-to-date list of all known exploitable component vulnerabilities in a particular version of a product (a rough sketch of the core logic such a tool would need appears after this list). Dependency-Track comes the closest to doing this, but even that tool doesn’t provide a continually updated list.

2.      Even if the user has the above tool or service, they can never be sure that the list of exploitable vulnerabilities for a product/version they use doesn’t contain a number of CVEs that aren’t in fact exploitable in the product, which the supplier has already identified but which they haven’t yet communicated to customers. I have proposed “real-time VEX” as a solution to this problem. I’m happy to say that the idea will most likely be submitted to the IETF as a proposed standard – part of a larger submission – this year.

3.      When an SBOM is created, fewer than 10% (and maybe fewer than 5%) of components listed will be found – without a lot of manual effort – in the National Vulnerability Database (NVD); this is according to a few major software/device suppliers, but it is widely echoed throughout the software security industry. This is the problem that an informal group that I started, the SBOM Forum, addressed in this paper. That document is being considered by appropriate authorities now; we are quite optimistic that at least the main part of our proposal will be implemented in a year or a little more than that.

4.      There are many software products that have had multiple names over their lifetime, due to mergers, acquisitions, rebranding, etc. – or simply due to an open source product being made available in multiple package managers, with a different name in each. A search on one of those names won’t find CVEs that have been reported for any of the other names. The SBOM Forum has just decided to take this problem up next.

5.      There are no coherent “playbooks” describing how a software user can utilize SBOMs and VEX notifications to manage risks due to software component vulnerabilities in the products they operate. This makes users reluctant to forge ahead with SBOMs, and it makes tool providers reluctant to develop SBOM/VEX consumption tools, because they can’t currently be sure that what a tool is required to do won’t change (this is more of a problem for VEXes than it is for SBOMs).

6.      None of the major (or minor, as far as I know) vulnerability, asset or configuration management tools can ingest SBOMs or VEX documents, in order to learn about exploitable component vulnerabilities in products installed on an organization’s networks. Once the first problem on this list is solved, it will be easy to solve this problem, since all of these tools presumably can ingest a list of CVEs linked to products.
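Regarding the first problem above, here is a rough sketch (my own, not a description of any existing tool) of the core logic such a tool or service would need: take the component vulnerabilities found for the products in an SBOM, subtract the ones the supplier’s VEX says are not affected, and report the remainder as exploitable. The dictionary layouts are simplified stand-ins for real SBOM and VEX documents, and the vulnerability lookup itself is assumed to have already happened.

```python
# Rough sketch of the core logic for problem #1: SBOM + VEX in, exploitable CVEs out.
# The data layouts are simplified stand-ins for real SBOM and VEX documents, and
# component_vulns stands in for an NVD or OSS Index lookup that has already been done.
def exploitable_vulnerabilities(component_vulns: dict[str, set[str]],
                                vex_statements: list[dict]) -> dict[str, set[str]]:
    """component_vulns maps a component name to the CVEs found for it in a vulnerability
    database; vex_statements are simplified VEX records of the form
    {"cve": ..., "component": ..., "status": ...}. Returns only the CVEs the supplier
    has not marked "not_affected"."""
    not_affected = {(v["component"], v["cve"])
                    for v in vex_statements if v["status"] == "not_affected"}
    exploitable = {}
    for component, cves in component_vulns.items():
        remaining = {c for c in cves if (component, c) not in not_affected}
        if remaining:
            exploitable[component] = remaining
    return exploitable

# Toy usage with hypothetical CVE IDs: one CVE ruled out by VEX, one still exploitable.
vulns = {"openssl": {"CVE-0000-0001", "CVE-0000-0002"}}
vex = [{"cve": "CVE-0000-0001", "component": "openssl", "status": "not_affected"}]
print(exploitable_vulnerabilities(vulns, vex))   # {'openssl': {'CVE-0000-0002'}}
```

The hard part isn’t this logic, of course; it’s keeping the vulnerability lookups and the VEX statements continually up to date, which is exactly what no tool or service does for end users today.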

As I stated above, these are far from the only problems holding up widespread distribution and use of SBOMs (I have a spreadsheet with about 40 problems, including the above, but I won’t say that’s complete, either). However, these are the most serious problems, and I believe they will all have to be on their way to solution before we can say we’re making real progress on the road to ubiquitous SBOMs. As you can see, two or three of them are already on the way to being solved (at least for the most part), and none of the others looks insoluble.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] My thanks to Kevin Perry, who was the first to call this to my attention.

Thursday, October 27, 2022

The White House gets into the labeling business

Last week, the White House held a workshop to discuss developing a program for securing IoT devices, scheduled to start in 2023 and to apply at first to two “particularly vulnerable” types of consumer devices: WiFi routers and home security cameras. What’s most important is how the device manufacturers will be motivated to comply with the program. Instead of threatening them with terrible consequences unless they meet certain cybersecurity requirements, suppliers will be “persuaded” to meet the requirements by the fact that, if they don’t do that, they won’t receive the device label that consumers will be trained to look for when they’re buying an IoT device.

In other words, the government will in effect warn the manufacturers, “If you don’t want to make any changes at all to your current security measures (or lack thereof), you’re free to follow that course. However, assuming we’re successful in making the public aware of the importance of looking for the cybersecurity label on any device they buy, you may find you don’t have as many customers as you might have been expecting.”

The news articles on the workshop made clear that the explicit model for this program is the Energy Star program, which uses a label to let consumers know which appliances meet certain energy efficiency standards. That program has been very successful.

Of course, a cybersecurity device labeling program is a fairly new idea. The most successful implementations of that idea so far have been in Singapore and Finland. Both of these are very small markets, so it’s not possible to draw real conclusions about what the program’s success means for the US. That being said, both programs have been successful, and Singapore’s has recently been extended to medical devices. Both programs require third-party testing in order to obtain the label.

Another country that has implemented a device labeling program is Germany. However, that label is an informational one. It indicates that the manufacturer attests they meet about five security requirements. If the manufacturer is willing to make these attestations, they will receive a label (the program is voluntary, so no manufacturer has to participate at all).

Admittedly, relying solely on attestations isn’t ideal, since the manufacturer could always lie. However, if the manufacturer suffers a breach and it becomes apparent that they had lied in one or more of the attestations, their label can be revoked. Since having attested falsely about their cybersecurity would undoubtedly reflect very negatively on the manufacturer, it’s reasonable to assume that most attestations will be truthful. If a manufacturer has terrible security, they won’t bother to apply for the label at all, but I strongly doubt they’ll lie to get it.

Note that the meeting last week wasn’t the first time the White House has talked about an IoT device security labeling program. In Executive Order 14028 of May 2021, the White House ordered NIST to “…identify IoT cybersecurity criteria for a consumer labeling program…” within 270 days. Almost exactly on the 270th day, NIST published this document. It had two parts.

The first part was a set of “criteria” (i.e. requirements) for the device labeling program. In my review in December of a predecessor to that document (the criteria remained mostly unchanged in the final version), I said I thought they were exactly what was required: NIST called them “outcomes-based” criteria, and I would call them “risk-based”.

But the two terms are synonymous: The manufacturer is required to achieve a general outcome, but the exact steps by which the manufacturer does that are up to the manufacturer, and need to consider the level of risk posed by the device. That is, the steps a manufacturer needs to take for a security camera at a bank are more rigorous than what is required for a baby monitor, although the outcome might be considered the same. The post I just referenced discusses this idea in more depth.

The manufacturer also needs to consider the environment in which the device will be used. A nuclear power plant is inherently much riskier than somebody’s back porch, even though the same security camera might be used in both locations. Obviously, the security measures taken will be much more severe at the nuclear plant, even though the device being protected is exactly the same as the one on the back porch.

The second part of NIST’s February document discussed how the device labeling program would work. It listed three possible types of labels:

1.      Informational, which isn’t based on an assessment, but simply provides information on security measures taken for the device (e.g., the German label referenced above)

2.      Tiered, in which there are multiple levels at which the product can be evaluated. The level attained by the product is shown on the label.

3.      Binary, which is essentially a “pass/fail” designation. NIST indicated in the document that they preferred this label type. Energy Star provides a binary label.

In my December post I noted that, while I don’t have a problem with a binary label per se, I do have a problem with trying to combine a binary label with outcomes-based criteria. The reason is simple: Outcomes-based criteria require the organization to tailor how they comply with the criteria according to the degree of risk posed by the device (also, by the environment). It will be up to the assessor to determine whether the manufacturer’s compliance actions were appropriate for the risk posed by the device and its location.

On the other hand, a binary label doesn’t allow for any considerations of risk or anything else, in determining whether the device deserves the label or not. The assessor needs to be able to make an up-or-down decision, period. That’s only possible with prescriptive requirements, not outcomes-based ones (the December post provides examples of both types of requirements, to illustrate this point).

How did I recommend that NIST resolve this contradiction? I didn’t. I said NIST had to choose outcomes-based criteria or a binary label, but they couldn’t have both. Since I strongly favor outcomes-based (risk-based) requirements in general (and I’ve probably written 50 posts about this idea, with reference to different aspects of the NERC CIP standards), I didn’t want NIST to sacrifice those. So I suggested that NIST use an informational label, not a binary one.

And what did NIST do (drumroll, please)?...Last month, they published an IoT security framework called NIST.IR 8425, which is very close to the set of criteria in the February document (the categories of criteria are exactly the same, while the criteria in each category differ slightly). It’s safe to say that NIST decided to stick with outcomes-based criteria, which is good. But what happened to the labeling program in the February document? Did NIST stick with the binary label?

When I published my most recent post (on IoT security certification) in LinkedIn, Dale Peterson asked in a comment why I hadn’t mentioned “the very recent USG announcement that they will define IoT security labeling.” I hadn’t seen the story on the White House conference yet, so I thought he was referring to the EO. In my reply, I pointed to the February document and the fact that NIST seemed to have been taken out of the device labeling business, since the criteria from the February document had been made into their own NISTIR, with no mention of labeling.

But I now realize Dale was referring to the meeting last week. My response should have been:

What was announced by the White House last week seems to be the end of the idea that NIST can run a certification program (my last post was about the ioXt device certification program, which is of course not a government effort). NIST is quite good at writing nonprescriptive security frameworks, but they showed in December and February that they’re not good at all at developing up-or-down certification programs.

However, that doesn’t mean the White House will be good at developing a device certification program, either. They may very well make the same mistakes that NIST did – although they may boldly break new ground and make completely new mistakes!

My advice to the White House (not that I’ve been asked, of course) is the same as what I gave to NIST (and that was in response to a request for comments in December, although it wasn’t an official comment period): If you try to marry a label that’s really a certification (as NIST wanted to do with their “binary” label) with a set of non-prescriptive guidelines (like NIST.IR 8425), you’re trying to do the impossible. You might as well try to square the circle or invent a perpetual motion machine.  

Make the label an informational one, and make sure the label provides real information (perhaps attestations, like the German label). Then let the consumers make up their own minds about whether the product is secure, based on what they read on the label.

Some consumers won’t look at the label at all, of course. That’s too bad, but there’s no point in even pretending that cybersecurity is anything but a risk management exercise (it’s definitely not a matter of scientific calculations, like Energy Star, although it’s amazing the number of people who think there’s some sort of formula that will make you secure). A risk-based decision has to be made by the individual, period.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Monday, October 24, 2022

How can you make sure your connected devices are secure?


One of my recent posts pointed out that many (and perhaps most) connected devices (aka IoT devices) are far from secure. Furthermore, it pointed out that because these devices are inherently “black boxes”, meant to be managed by the manufacturer and the manufacturer alone, it’s difficult for a user organization to learn what controls might or might not be in place in the device. While a user can easily scan “standalone” software (i.e. software the user installs on standard hardware and O/S platforms) and learn a lot about vulnerabilities in it, it’s very hard to do that for a sealed device, especially if the user doesn’t even know what software is installed inside it.

This is why, even more so than with standalone software, it’s important for the user organization to learn what’s inside the box they’re buying – both the good stuff (the capabilities included in the device, including what is shown in a software bill of materials, or SBOM) and the bad stuff (the software vulnerabilities that were also “included” in the device); it’s also important for the user organization to learn about the manufacturer’s security practices.

The more the user can learn about the technical security controls and manufacturer practices before they purchase the device, the easier it will be to decide a) whether or not to purchase a particular device in the first place, and b) after they’ve decided they will purchase it, what controls – both technical and procedural – they will need to put in place, to compensate for security weaknesses that may be found in the software or firmware installed in the product (or in the manufacturer’s practices).

Questions about technical security controls include:

1.      Does the device allow a universal password? Of course, if it does, that’s not a good thing.

2.      Are security updates applied automatically when possible?

3.      Is standard cryptography utilized?

Questions about manufacturer security practices include:

1.      Does the manufacturer disable unused services?

2.      Has the manufacturer published an end-of-life notification policy?

3.      Is there a “bug bounty” program for security researchers who find vulnerabilities in the product?

The best way for a user organization to learn what technical controls and manufacturer practices are in place for a connected device is to look for a certification from an independent third party. Because certification requires testing the device, not just asking the manufacturer about their practices, it should be performed by a testing laboratory. Americans aren’t too familiar with cybersecurity testing laboratories, but the labs are more common in Europe, where they’ve been testing connected devices for years.

For more than a year, I’ve had the pleasure of working with Red Alert Labs (RAL), a leading European cyber testing lab based in Paris. They provide assessments and certifications of connected devices based on multiple standards, including IEC 62443, Common Criteria and ETSI 303 645. RAL is also working with the European Union Agency for Cybersecurity (ENISA) to develop the EUCC scheme for ICT products and the EUCS scheme for cloud services in the context of the Cybersecurity Act.

Recently, RAL became one of only 8 organizations worldwide that have been selected to provide certifications based on the standards developed by the ioXt Alliance, the global standard for IoT security. Backed by the biggest names in technology and device manufacturing, including Google, Amazon, T-Mobile, Comcast and more, the ioXt Alliance is the only industry-led, global IoT device security and certification program in the world. Devices with the ioXt SmartCert give consumers and retailers greater confidence in a highly connected world.

Besides assessing and certifying connected devices and their manufacturers, Red Alert Labs helps organizations that use these devices to assess the cybersecurity risks they face from devices they are considering for procurement. After procurement, RAL helps those organizations assess and mitigate vulnerabilities identified in software and firmware installed in the devices they use.

RAL will soon be helping user organizations assess the IoT devices they use, based on the recently released NIST.IR 8425, “Profile of the IoT Core Baseline”. This document was explicitly developed in response to Executive Order (EO) 14028 of May 2021, and draws on a number of earlier IoT security documents – including the NIST.IR 8259 series. It also draws on the many comments received by NIST last year in response to their request to the public after the EO charged NIST with putting in place a program for IoT device cybersecurity.

NIST.IR 8425 is clearly meant to be NIST’s main IoT cybersecurity guidance document for the foreseeable future, as well as perhaps their answer to the IoT Cybersecurity Improvement Act of 2020. I believe that US-based public and private sector organizations will standardize on this document as their primary means of assessing IoT device risk for the foreseeable future.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Thursday, October 20, 2022

What should be in an SBOM for cloud services?

Since the beginning of the NTIA Software Transparency Initiative, there has always been an understanding (sometimes explicitly articulated in meetings) that, even though everyone knows computing is moving more and more to the cloud, it was too early to start discussing what an SBOM for a cloud service should contain; first we needed to figure out what an SBOM for on-premises (“on-prem”) software should contain.

When the NTIA Initiative ended last year and “moved” to CISA, there was general agreement that we needed to start thinking about SBOMs for cloud services. Thus, one of the five current CISA SBOM/VEX working groups is dedicated to that topic. I’ll admit that I wasn’t too excited about this topic, mainly because I thought we might end up having long discussions which in the end wouldn’t lead anywhere.

However, I’ve been surprised by how productive the group has been and how quickly we’ve come to a rough consensus (although it’s subject to change at any meeting) on what should be in an SBOM for cloud services (aka SaaSBOM, where SaaS means “software as a service”), even though we’re very far from writing a spec at this time. Here are what I believe to be the general points of our rough consensus (most of these were originally brought up by Steve Springett, the co-leader of the CycloneDX BOM format – which has supported SaaSBOMs for at least a couple of years):

First, a BOM – whatever flavor it is – needs to address the risks that are inherent to the environment in which it’s used, and specifically risks due to third parties. A software BOM is designed to address the risk that a vulnerability in a third party component of a software product running on an organization’s premises will be exploited by an attacker who will use this “foothold” to attack software and/or data found on the network, causing harm to the organization.

However, software running in the cloud doesn’t normally have a direct impact on the organization’s network (the exception brought up by one of the members of the SBOM Cloud working group is the case where an app might drop tainted JavaScript code on a user’s desktop). Therefore, it’s not very important that the user organization receive an SBOM listing the components of a cloud app that they use. This is for the better, since cloud apps are constantly changing. Because an SBOM needs to be re-issued whenever there’s been any change in the code of an app, the user would need to receive and analyze many SBOMs every day for the same cloud app; obviously, they couldn’t do their day job if they had to do this.

But it is important that the provider of the cloud service itself examines up-to-date SBOMs for whatever apps they offer in the cloud. The cloud service provider (CSP) should provide their customers an attestation describing various cybersecurity measures they’re taking, including generating and analyzing SBOMs for their apps.

Thus, a BOM distributed to users of a SaaS product doesn’t need to list actual software components, because they pose minimal risk to the user organization. What does pose a risk to the organization? Steve Springett pointed out early on that the main risk to the user organization from cloud software that they utilize runs through the third party services called out by the SaaS product.

What is the nature of this risk? Steve additionally pointed out that the risk to the organization, due to use of a SaaS product in the cloud, relates to data: both a) the risk that the organization’s data will be misused by a third party service called by the SaaS product, and b) the risk that false or misleading data will be provided by a third party service to the SaaS product, with some sort of deleterious consequences for the organization.

An example of a) is a hospital providing personal health information (PHI) to a third party service, which then makes that information available to criminals who sell it. An example of b) is a SaaS SCADA system for a natural gas pipeline being “informed” by a third party service that one of the pipeline’s lines is seriously underpressured, leading the operator to greatly increase pressure in the line; this might cause an explosion (somewhat like what happened in the San Bruno explosion in 2010, which caused eight deaths – although that wasn’t due to an online SCADA system, of course).

Given this, what should the SaaSBOM look like? It will obviously need to list every service that’s called out by the app itself. (Steve Springett provided the workgroup this “services BOM” for Dependency-Track, the open source product he developed in 2010, which is now used over 250 million times a month to identify vulnerabilities for components listed in an SBOM. Dependency-Track isn’t SaaS, but it does illustrate what a BOM of services looks like. Of course, I’m sure Steve regularly makes SBOMs for D-T as well. In fact, both a software BOM and a services BOM are needed for any on-prem product that makes calls to external services.)
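For readers who haven’t seen one, here is approximately what the “services” section of a CycloneDX BOM contains, rendered as a Python dictionary for compactness. The service names, endpoints and data classifications below are invented for illustration; consult the CycloneDX specification (or Steve’s Dependency-Track example) for the exact field names and the many optional properties I’ve left out.

```python
# Approximate shape of the "services" section of a CycloneDX BOM, shown as a Python
# dict for compactness. Service names, endpoints and classifications are invented;
# see the CycloneDX specification for the exact schema.
services_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "services": [
        {
            "name": "example-billing-api",          # hypothetical third party service
            "endpoints": ["https://billing.example.com/v1"],
            "authenticated": True,
            "data": [{"flow": "outbound", "classification": "PII"}],
        },
        {
            "name": "example-geocoding-api",        # hypothetical third party service
            "endpoints": ["https://geo.example.com/lookup"],
            "authenticated": False,
            "data": [{"flow": "bi-directional", "classification": "public"}],
        },
    ],
}
```

Even this toy example shows what a services BOM gives the user organization: a list of which external parties the app talks to, and what kind of data flows to and from each of them.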

However, just having a list of the services called out by a SaaS app doesn’t by itself help identify risks, any more than having an SBOM listing components in a software product by itself identifies risks. Just as vulnerabilities applicable to software components listed in an SBOM need to be identified in the NVD or another vulnerability database like OSS Index, so risks applicable to services need to be identified.

But how does a user identify services risks? Are there “service vulnerability databases”? I had never heard of one, but in yesterday’s workgroup meeting, someone posted exactly that in the chat. I’m sure there’s a lot that could be put into this database, especially regarding risks that aren’t exactly vulnerabilities. For example, I would argue that, if a service is operated out of North Korea, that in itself poses a risk, which could be included in this database as well.

My guess is that whatever the workgroup finally decides should be the format for a SaaSBOM (and that may not be the name for it, either. I’d be happy with “Cloud SBOM”) will include more than just a list of services. For example, Steve suggested that there should be a data flow diagram, listing what types of data (HTTP, PHI, etc.) flow into and out of the SaaS app. Obviously, that would also be important for an end user that wants to know what risks they face.

The bottom line is I’m now optimistic that there might be a SaaSBOM format available next year. What’s important now is to populate the Cloud Vulnerability Database (and similar databases, if any appear), so that, as SaaSBOMs appear, cloud users will be able to learn about the risks inherent in the cloud services that they use.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Monday, October 17, 2022

...and you thought securing software was hard…


In case you haven’t noticed, I’ve written a lot about software supply chain security in the past couple of years. I’ve focused on this topic because that’s where some of the biggest cybersecurity attacks have appeared. SolarWinds, Kaseya, Log4j, ransomware in general, etc. were (and are, since none of them have ended yet) all software supply chain attacks, or at least incidents.

However, even though software currently poses the biggest supply chain security threat (and certainly one of the two or three biggest cybersecurity threats in general), I think there’s ultimately a much bigger threat which we’re just beginning to deal with now: the supply chain of intelligent devices. This includes devices that have been sold for specialized purposes – electronic relays in electric power substations, infusion pumps and heart monitors in hospitals, controllers of all sorts in factories, networking equipment like firewalls and switches, etc. But it also includes a lot of devices used for much more pedestrian purposes, including security cameras and even baby monitors.

I first realized how much greater the risks are with devices through a presentation (and follow up discussions) by Tom Pace of NetRise to the SBOM Forum this spring. I wrote about the presentation itself in this post; I followed it up with 3-4 further posts. The most relevant of those further posts to our current discussion is this one.

NetRise specializes in firmware security, so obviously devices are their main concern. Tom’s initial presentation discussed a network device that, believe it or not, is used in critical infrastructure environments, including the US military. In his original presentation, he noted that he had identified, through analyzing the components of firmware in the device, at least 1,237 unpatched vulnerabilities; however, he was sure there were more. In the second post linked above, I noted he had later told me he was now sure there were at least 40,000 unpatched software and firmware vulnerabilities in that one device, which (using a general guesstimate of 5% of component vulnerabilities being exploitable) probably includes 2,000 exploitable vulnerabilities.

Of course, I doubt you’ve ever heard of a software product with 2,000 unpatched exploitable vulnerabilities; I certainly haven’t. How could there be so many in one device? For one thing, the device contains lots of software and firmware products. But I’m sure we’re talking about 50-100 products, not 1,000. This would still mean there are at least 20 exploitable vulnerabilities per product. A software product with 20 unpatched vulnerabilities would meet a lot of resistance. How is it that a device can have 2,000 vulnerabilities and still be on US military purchase lists?

The clue to this miracle is revealed in both of the posts: The manufacturer of this device has never reported a single vulnerability for it. In fact, the manufacturer has never reported a vulnerability for any of the 50-odd devices it sells. If this were a software product, anyone could scan it and discover at least some of these vulnerabilities. The software supplier would never get away with not reporting any vulnerabilities at all.

But the situation is quite different for a device. It’s a sealed box. The end user can’t even find out what software is in the box, let alone scan it for vulnerabilities. It would certainly help to have a software bill of materials (SBOM) for the device, but – other than a small number of medical device makers, who are facing a mandatory requirement to provide SBOMs to their customers later this year – I have never heard of a device manufacturer that regularly distributes SBOMs to users.

In fact, I know of one key device manufacturer, that sells into an important critical infrastructure industry and has a huge market share, that seems to be actively campaigning against SBOMs. They’ve pulled out the old chestnuts about “roadmap for the attacker”, etc. I always used to have a high opinion of this company for their cybersecurity posture (which is otherwise exemplary), but having seen – and experienced in online and phone conversations – their opposition to SBOMs, I honestly have to wonder: What are they hiding? I sure hope it’s not 40,000 unpatched vulnerabilities in one product.

A few months ago, Isaac Dangana of Red Alert Labs and I published an article in the Journal of Innovation. It noted that, when it comes to vulnerability management for intelligent devices, the device manufacturers hold all the cards. The user usually can’t even learn what software is installed in the device. The manufacturer normally distributes all updates, for all software and firmware in the device, in a single “lump”. When the update gets applied (often “over the air”, without the user being involved at all), the user has no control over which products are patched or not patched; whatever is in the lump gets applied, or else nothing gets applied.

In our recommendations at the end of the article, Isaac and I said that device manufacturers should distribute an SBOM whenever they update the software and firmware in their device. But that wasn’t the most important recommendation. The most important one was that the manufacturer needs to register the device as a single product and report any vulnerability found in any of the software in the device against that single product.

The way that would work today is:

1.      The manufacturer obtains a CPE name for the device from CVE.org (aka MITRE Corp.). This would normally be done when reporting a CVE (vulnerability) as applicable to the device, but I believe it can be done even before a CVE is reported.

2.      Whenever the manufacturer learns that a CVE is applicable to (i.e., exploitable in) any of the software or firmware products included in the device, they will report this to CVE.org. If the supplier of the individual software product hasn’t reported themselves that their software product has this vulnerability (meaning that an NVD search on the product name and version would uncover the CVE), then the device manufacturer might do that themselves.

3.      However, it’s not too important whether or not the manufacturer reports that the individual software product is vulnerable. What is important is that they report the vulnerability as applicable to the device – i.e. the device’s CPE name. That way, a user of the device will simply have to enter the CPE name in the NVD in order to see all reported CVEs for any of the software or firmware in the device (a sketch of what that lookup could look like appears after this list).

4.      As an aside, I’d like to note that, when the proposal regarding product naming in the NVD that the SBOM Forum released recently is implemented, there will be no need to use CPE names with the NVD (although that will still be an option). Instead, hardware devices will be identifiable using their GTIN (Global Trade Item Number), a family of international standards for identifying products. One of the GTIN formats corresponds to the familiar UPC barcode, meaning that, in order to search for a device in the NVD, the user will just need to scan the UPC barcode on their device. Any applicable CVEs should appear immediately (or the user could enter the UPC code manually, which wouldn’t be too hard. The point is that there should be no need to search for the product in the hit-or-miss fashion that NVD users know and love today[i]).
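To make step 3 a little more concrete, here is a minimal sketch of that lookup as it could be done today, using the NVD’s public REST API (API version 2.0). The CPE name below is invented for illustration; a real query needs the device’s actual CPE name, and anything beyond casual use requires an NVD API key.

```python
# Minimal sketch of step 3: retrieve all CVEs reported against a device's CPE name
# from the NVD's public REST API. The CPE name below is a made-up example, and this
# ignores paging, rate limits and API keys, which a real implementation would need.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cves_for_device(cpe_name: str) -> list[str]:
    resp = requests.get(NVD_URL, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

# Hypothetical device CPE, for illustration only:
print(cves_for_device("cpe:2.3:h:examplevendor:exampledevice:1.0:*:*:*:*:*:*:*"))
```

Of course, this only works if the manufacturer has actually registered the device and reported CVEs against it; with the manufacturer described above, the query would come back empty (or the CPE wouldn’t exist at all), which is precisely the problem.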

In other words, in order for intelligent device users to realistically be able to manage vulnerabilities in devices, the device needs to be listed in the NVD and any CVEs that are applicable to any software or firmware product included in the device need to be listed as applicable to the device itself. I’ll admit that this won’t be an easy policy to sell to the manufacturers. And while, as a former student of Milton Friedman, I don’t like to see government regulation used in cases where it isn’t absolutely necessary, in this case it may be. Which do you want with your IoT device: 40,000 unpatched vulnerabilities or government regulations?

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] The hardware portion of the SBOM Forum’s proposal will probably take more than a year to be implemented, once the proposal itself (or at least its main points) is approved by CISA. The software portion of the proposal will be implemented much quicker, since no linking of databases is required.

Tuesday, October 11, 2022

How do you prioritize vulnerabilities to patch?

My most recent post quoted Tony Turner saying (in response to a previous post of mine) that SBOMs aren’t being used much by non-developers (that is, organizations whose primary business isn’t developing software), because non-developers are already overwhelmed with vulnerabilities that they’ve learned about without having SBOMs. If they can’t deal with those vulnerabilities, what good does it do them to learn about a potentially much larger set of vulnerabilities that are due to components of the products they use – which they’ll only learn about if they have SBOMs?

I concluded that post by writing:

What’s the solution to this problem? Obviously, it’s certainly not a great situation if software users ignore an entire class of vulnerabilities – those that are due to third-party components included in software products they use – simply because they’re too busy handling another (perhaps smaller) class of vulnerabilities, namely those that are due to the “first-party” code written by the suppliers of the software.

The best situation would be if users could take a holistic view of all vulnerabilities, including both those due to first-party code (which they learn about through vulnerability notifications from the supplier or from looking up the product in the NVD) and those due to third-party components (which they learn about from receiving SBOMs and finding component vulnerabilities in the NVD). They would then allocate their limited time and resources to identifying the most important vulnerabilities of either type and mitigating those. They wouldn’t feel they have to ignore half of the vulnerabilities in their software, because they don’t even have time to learn about them.

So, the question becomes how software users can prioritize their time addressing vulnerabilities in order that they mitigate the maximum amount of software risk possible. The answer needs to take into account the fact that they don’t have unlimited time or unlimited funds available for that effort.

I know some people will answer this by saying, “That’s simple. The user should just find the CVSS score for each exploitable vulnerability and rank the vulnerabilities based on their scores. Then they should start by mitigating the vulnerabilities with the highest scores. When they have exhausted their vulnerability management budget for the year, they should stop and do something else.”

But I also know other people who will say that CVSS score is an almost meaningless number, so it should never be used to prioritize vulnerabilities to mitigate. If so, what’s the solution? Is it another score like EPSS? Is it no score at all, but a different way of ranking software vulnerability risk?

I honestly don’t know. I’d love to hear your ideas.

To summarize what I said, it’s a shame if an organization decides to shut out a potential source of vulnerability information (software bills of materials, or SBOMs) simply because they already know about too many vulnerabilities. This is something like an army deciding they don’t need to conduct intelligence activities anymore, since they already face too many threats for them to deal with easily.

What both the army and the organization need to do is learn about all the vulnerabilities and threats that they face, then figure out how to prioritize their response to them. In responding, they need to use their limited resources in a way that will result in the maximum possible amount of risk being mitigated, given their available resources for responding to the vulnerabilities and threats. It’s likely that the majority of organizations, or at least the majority of organizations who try to prioritize their responses to vulnerabilities in a rational way, will base that response in whole or in part on CVSS score.

Partly in response to my request for ideas, Walter Haydock, who has written a lot about how to prioritize vulnerabilities for patching in his excellent blog, put up this post on LinkedIn. The post began with an assertion frequently made by Walter: “CVSS is dead”. I agree with him that CVSS isn’t the be-all-and-end-all vulnerability measure it was originally purported to be.

Why do I agree with him about CVSS? Let’s think about what we’re trying to measure here: It’s the degree of risk posed to an organization by a particular software vulnerability, usually designated with a CVE number. Risk is composed of likelihood and impact. CVSS is calculated using four “exploitability metrics”: Attack vector, Attack complexity, Privileges required, and User interaction required, along with three “impact metrics”: Confidentiality impact, Integrity impact and Availability impact.

However, all of these seven metrics, to varying degrees, will differ depending on the cyber asset that is attacked. For example, if the asset is an essential part of the energy management system (EMS) in the control center of an electric utility that runs the power grid for a major city, any of the three impacts of its being compromised will be huge. On the other hand, if the cyber asset is used to store recipes for use in the organization’s kitchen, the impact of being compromised will be much less.

There are similar considerations for the exploitability metrics: the EMS asset will be protected with a formidable array of measures (many due to the NERC CIP requirements for High impact systems that run the power grid) that are unlikely to be in place for the recipe server. So, even though across organizations in general one vulnerability might be more exploitable than another, and would thus have a higher exploitability metric, within most individual organizations the fact that assets of different importance have different levels of protection means those differences will probably swamp the generic differences in exploitability that form the basis for the metrics.

In other words, I agree that the CVSS score isn’t very helpful when it comes to prioritizing vulnerability remediation work in an individual organization; and since the EPSS score, which focuses solely on exploitability, is unlikely to be any more helpful (for reasons already provided above), this means I now doubt there is any single metric – or combination of metrics – that can provide a single organization with definitive guidance on which vulnerabilities to prioritize in their remediation efforts.

At this point, Dale Peterson, founder of the S4 conference and without doubt the dean of industrial control system security, jumped in to point out that vulnerability management efforts should be prioritized based on “Impact to the cyber asset...” That is, there’s no good way to prioritize vulnerability remediation efforts based on any consideration other than the impact on each cyber asset if the vulnerability were successfully exploited. I asked,

Dale, do you have an idea of how an organization could assess vulnerabilities across the whole organization (or maybe separately in IT and OT, since they have different purposes) and prioritize what they need to address on an organization-wide basis?...

It has to be a risk-based approach, but CVSS and EPSS obviously aren't adequate in themselves. Like a lot of people, I've always assumed this is a solvable problem. However, I'm no longer so sure of that. This would change how I've been thinking about software risk management.

I was asking Dale how a vulnerability management approach based on assets would work. He replied:

Yes for the ICS world. ICS-Patch decision tree approach, see https://dale-peterson.com/wp-content/uploads/2020/10/ICS-Patch-0_1.pdf

I highly recommend you read the post Dale linked. It provides a detailed methodology for deciding when a patch should be applied to a cyber asset in an OT environment. Dale also points out that his methodology derives inspiration from the Software Engineering Institute/CERT paper called “Prioritizing Vulnerability Response: A Stakeholder-Specific Vulnerability Categorization”, which deals with IT assets (he points out how differences between OT and IT account for the differences between the two methodologies).

Of course, neither Dale’s methodology for OT nor CERT’s methodology for IT is easy to implement; both cry out for automation in any but the simplest environments. But both can clearly be automated, since they’re based on definite, measurable criteria.
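To illustrate the automation point (and to be clear, this is a toy of my own, not ICS-Patch and not the CERT/SSVC decision tree): once an organization has recorded a few definite values per asset (say, criticality and network exposure), a prioritization rule can be computed mechanically. The weighting below is invented purely for illustration.

```python
# Toy sketch (not ICS-Patch or SSVC): once asset-context criteria are recorded as
# definite values, ranking vulnerability remediation work is straightforward to automate.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    asset: str
    cvss_base: float        # generic severity, as published
    asset_criticality: int  # 1 (recipe server) .. 5 (EMS in a control center)
    exposed: bool           # reachable from outside its protected zone?

def priority(f: Finding) -> float:
    # Invented weighting, purely illustrative: asset context dominates generic severity.
    exposure_factor = 1.0 if f.exposed else 0.5
    return f.asset_criticality * exposure_factor * (f.cvss_base / 10)

findings = [
    Finding("CVE-0000-0001", "EMS server", cvss_base=6.5, asset_criticality=5, exposed=False),
    Finding("CVE-0000-0002", "recipe server", cvss_base=9.8, asset_criticality=1, exposed=True),
]
# The medium-severity CVE on the EMS outranks the critical-severity CVE on the recipe server.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.asset}: {f.cve} (priority {priority(f):.2f})")
```

Neither Dale’s nor CERT’s methodology is anywhere near this simple, but they share the property that makes this toy automatable: every input is a definite value the organization either already tracks or can assign once per asset.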

Of course, it would be nice if there were a single measure that would solve everybody’s vulnerability prioritization problems. It would also be nice if every child in the world had their own pony. However, neither is a likely outcome in the world we live in. It’s time to focus on a real-world solution to this problem. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Tuesday, October 4, 2022

Here’s the REAL reason why software users aren’t requesting SBOMs

My last post discussed the fact that software users clearly aren’t demanding SBOMs from their suppliers now; the post described my idea for why that’s the case. While I still agree with that post, I have to admit that Tony Turner stated in four sentences (which Tony placed as a comment on my post in LinkedIn) a much more compelling reason than mine – and he did it using about 100 fewer sentences than I used!

I had stated that the reason users aren’t requesting SBOMs is that they don’t see much chance of success when they get one, because of a combination of three low probabilities:

1.      The probability that they’ll be able to find any particular component in the NVD when they look for vulnerabilities. This is currently very low, although many software suppliers and service providers have developed various ad hoc tools to improve their odds in this regard;

2.      The probability that any given component vulnerability is exploitable in the product itself (probably around 5%); and

3.      The probability that there will be VEX documents quickly available from the supplier of the software described in an SBOM, to let the user know of all unexploitable component vulnerabilities. This is currently 0%, since no suppliers are producing VEXes now. It presumably won’t always be zero, but the VEX model depends on suppliers regularly producing lots of VEXes, perhaps hundreds for every SBOM released; I now have strong doubts that will ever happen. My proposal for real-time VEX should substantially increase this probability by making VEX notifications almost trivially easy for suppliers, and not requiring blizzards of VEX documents to be distributed to hundreds of thousands of software-using organizations every day.

Of course, probabilities are combined by multiplying them, and multiplying three low percentages results in an even lower total percentage; the combination of these percentages won’t be far from zero. In other words, I’m saying that software users aren’t demanding SBOMs because they think that having them won’t improve their security posture enough to justify the time and effort that will be required to analyze them.

While Tony didn’t disagree with what I said, he doesn’t think that’s the main reason. And to be honest, I think he’s right. Tony said:

The biggest issue is they are still struggling to handle all the scanner vulnerabilities and SBOMs just pour salt into the wound. Just operationalizing all of this is a huge undertaking. So yes, it’s lack of SBOM tools, but it’s not just that. It’s a core vulnerability management pain point.

What’s the solution to this problem? Obviously, it’s certainly not a great situation if software users ignore an entire class of vulnerabilities – those that are due to third-party components included in software products they use – simply because they’re too busy handling another (perhaps smaller) class of vulnerabilities, namely those that are due to the “first-party” code written by the suppliers of the software.

The best situation would be if users could take a holistic view of all vulnerabilities, including both those due to first-party code (which they learn about through vulnerability notifications from the supplier or from looking up the product in the NVD) and those due to third-party components (which they learn about from receiving SBOMs and finding component vulnerabilities in the NVD). They would then allocate their limited time and resources to identifying the most important vulnerabilities of either type and mitigating those. They wouldn’t feel they have to ignore half of the vulnerabilities in their software, because they don’t even have time to learn about them.

So, the question becomes how software users can prioritize their time addressing vulnerabilities in order that they mitigate the maximum amount of software risk possible. The answer needs to take into account the fact that they don’t have unlimited time or unlimited funds available for that effort.

I know some people will answer this by saying, “That’s simple. The user should just find the CVSS score for each exploitable vulnerability and rank the vulnerabilities based on their scores. Then they should start by mitigating the vulnerabilities with the highest scores. When they have exhausted their vulnerability management budget for the year, they should stop and do something else.”
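For what it’s worth, the approach those people describe fits in a few lines of code; here is a minimal sketch, with the CVE IDs, scores, effort estimates and budget all invented for illustration.

```python
# Minimal sketch of the naive approach: rank exploitable vulnerabilities by CVSS base
# score and work down the list until the remediation budget runs out. All values below
# (CVE IDs, scores, effort estimates, budget) are invented for illustration.
def plan_remediation(vulns, budget_hours):
    """vulns: list of (cve_id, cvss_score, estimated_hours). Returns the CVEs to fix."""
    plan, spent = [], 0
    for cve, score, hours in sorted(vulns, key=lambda v: v[1], reverse=True):
        if spent + hours > budget_hours:
            break
        plan.append(cve)
        spent += hours
    return plan

vulns = [("CVE-0000-0001", 9.8, 40), ("CVE-0000-0002", 7.5, 10), ("CVE-0000-0003", 5.3, 5)]
print(plan_remediation(vulns, budget_hours=50))   # ['CVE-0000-0001', 'CVE-0000-0002']
```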

But I also know other people who will say that CVSS score is an almost meaningless number, so it should never be used to prioritize vulnerabilities to mitigate. If so, what’s the solution? Is it another score like EPSS? Is it no score at all, but a different way of ranking software vulnerability risk?

I honestly don’t know. I’d love to hear your ideas.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.