Friday, September 22, 2023

An opportunity to get involved with fixing VEX

Regular readers of this blog (both of you!) know that for more than a year I have been concerned about the “showstopper” problems that are inhibiting wide (or even narrow) distribution and utilization of SBOMs by organizations whose primary function is not software development (you know, banks, restaurant chains, baseball teams, police departments, architecture firms…the rest of us).

I say “non-developers”, because there are some developers who are using SBOMs heavily now to manage risks in the products they’re developing. However, what is strange is that I honestly can’t identify a single developer that is regularly providing SBOMs to their customers (since an SBOM can no longer be trusted once the software has been upgraded or even patched, the developer needs to provide a new SBOM with at least every major and minor update of their product) – although I’ll admit I have no visibility into what military and intelligence agency suppliers are doing.

What’s holding SBOMs back? One big obstacle is the naming problem. An informal group I lead, the SBOM Forum – which is, I’m happy to say, now the OWASP SBOM Forum[i]! – developed a paper almost exactly a year ago that describes how to significantly mitigate this problem in the NVD. The NIST team that runs the NVD has told us they would like to implement what we describe in the paper, but they obviously have a lot on their plate nowadays, such as wondering whether they’ll have a paycheck next week (for the second time this year, I might add).

Thus, while we will certainly engage with the NVD whenever they’re ready and able to do that, the OWASP SBOM Forum isn’t losing sight of the real goal: The US and the rest of the world need a truly global vulnerability database, which is maintained and funded internationally.

We’re now talking with people at ENISA (the EU cyber agency) about their potentially using our recommendations for NIST as the basic design for the vulnerability database they’re developing from scratch – and which they hope to have up and running in two years (this is eminently doable, in my opinion). The project is already funded (although probably not on the scale required for a global database) by Article 12 of the EU NIS 2 legislation, which came into effect last year.

Thus, we’re making progress on the naming problem. But, until this week, we hadn’t even started to address the second serious showstopper problem for SBOMs: the fact that, three years after the VEX idea was first discussed in the NTIA SBOM initiative, there is still no clarity on what VEX is and how tools for VEX production and consumption should work. Therefore, VEX is literally going nowhere at the moment.

However, there is clarity that SBOMs will never go very far until VEX is made to work. This is because suppliers have seen the statistics stating that over 90% of component vulnerabilities in a software product aren’t exploitable in the product itself. The suppliers are worried that, the day after they release their first SBOM, their help desk will be overwhelmed with calls and emails from angry customers demanding to know when they will patch CVE-2023-00666, which has a CVSS score of 10.0 and will probably get them fired if they don’t get it fixed by…how about 2PM this afternoon?

Of course, the help desk people will patiently explain to caller after caller that they don’t need to worry about this vulnerability because the vulnerable module in the library that contains the vulnerability was never used in their product. But maybe, after they reach the 70th such caller in one morning, they will begin to lose their patience…and by the end of the day they’ll submit their resignations.

Yet, I don’t know of a single supplier that has started to produce VEXes on anything but an experimental basis. Cisco recently announced they will start producing VEXes for customers (not for public viewing), but didn’t say how often they will be updated.

While it’s nice to have one-off experiments, VEXes will be needed even more frequently than SBOMs, since new vulnerabilities appear all the time and customers need to know about them in as close to real time as possible. This is why I’ve proposed a VEX server and API, which would eliminate, or at least greatly reduce, the need for VEX documents, as well as take a huge burden off the suppliers. It’s not at all unreasonable to expect VEXes to be updated or issued multiple times a day.
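To make the idea of a VEX server concrete: a user would ask, in effect, “Is CVE X exploitable in product Y, version Z?” and get a machine-readable answer back. Here is a purely hypothetical Python sketch – the function, the product name and the data are all invented for illustration, since no such server exists today:

```python
# Purely hypothetical sketch of a VEX server lookup. The product,
# version, CVE and return fields are invented; a real server would
# consult the supplier's continually updated VEX data.
def query_vex_status(product: str, version: str, cve: str) -> dict:
    """Answer the question: is this CVE exploitable in this product/version?"""
    demo_data = {
        ("ExampleProduct", "4.2", "CVE-2023-00666"): {
            "status": "not_affected",
            "justification": "vulnerable_code_not_present",
        }
    }
    # Anything the supplier hasn't analyzed yet is "under investigation".
    return demo_data.get((product, version, cve), {"status": "under_investigation"})

print(query_vex_status("ExampleProduct", "4.2", "CVE-2023-00666"))
```

The point of the sketch is that a query/response interface like this removes the need to push a new document to every customer each time one vulnerability's status changes.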

But even among the very small number of experimental VEX documents that have been produced and published, there is no consistency regarding what is covered in a VEX, or even the format for it. This isn’t surprising, given that there are two “platforms” on which VEX can be produced now - CycloneDX (CDX) and CSAF – and in neither one of them has a format for VEX been specified. This has led to wildly differing documents, all bearing the name “VEX”. Given that there is no consistency in the VEX documents being produced, is it any surprise that there are literally no tools available to ingest and utilize VEXes? Yes, there are plenty of tools that can read the JSON of a VEX and make it look a little prettier for the reader, but why should a supplier produce a machine-readable VEX document if it’s just going to be read by humans? They could save a lot of time (and some money) by putting what they have in a PDF and emailing that to their customers.

Last week, the OWASP SBOM Forum decided it was time for us to lead development of VEX “playbooks” – one for CycloneDX VEX and the other for CSAF VEX (the two platforms are so different that there could never be a single VEX playbook that will encompass both platforms). Each platform will have a Producer’s Playbook and a Consumer’s Playbook.

These playbooks will be intended mainly to describe to the toolmaker how VEX documents will be produced and utilized. They will be constructed with enough rigor and detail that there should be no question about what needs to be in the VEX and how it should be presented. Since we will be creating the producer and consumer playbooks at the same time, we will hopefully avoid the common pitfall of not having the production and consumption tool specs exactly in sync.

The two platforms will require very different levels of effort. CycloneDX VEX is much easier to understand than CSAF VEX, since it is built on the same platform that CDX SBOMs utilize (as well as CDX HBOMs, SaaSBOMs, MLBOMs, OBOMs, etc. See the CDX site for the full list of document types that are built on the same framework). There are already plenty of tools that can both create and read CDX documents; the problem is those tools need to be constrained so they only produce and consume VEX documents, rather than making the supplier figure out for itself how to create a VEX in CDX format.[ii]

However, understanding CSAF is much harder, since the spec is about 80-100 pages long and published in small type. If you look at the VEX profile in CSAF, it seems simple, since it just lists a small number of fields. However, it leaves out two mandatory fields (required in all CSAF documents) that are by far the hardest to understand in CSAF: “product tree” and “branches”. The description of these goes on for about 10-15 pages in the spec, and even then, it’s not likely that organizations with no experience in CSAF will be able to understand them quickly.

Fortunately, the group working on the playbooks will include staff members with long CSAF experience from Red Hat, Oracle and Schneider Electric, as well as perhaps staff members from Microsoft, Google and Cisco. Our challenge won’t be to understand CSAF, but to determine the optimal configuration of the “product tree” and “branches” fields (these seem to be related to other fields as well, although my understanding of CSAF is very limited). We may decide that we need to constrain those fields very tightly, perhaps by constraining the VEX document to only addressing one product (although it will have to address multiple versions of the product – and being able to represent and interpret version ranges automatically would be very desirable). Again, our goal is to have a VEX spec that won’t need much if any interpretation on the consumption side.

Why am I so worried about constraining the spec? It’s simple math: a consumption tool will need to be able to interpret every combination of options that might be thrown at it – and the number of combinations is the factorial of the number of independent options. If there are three options, there are 3X2X1=6 possible combinations. If there are five options, there are 5X4X3X2X1=120 combinations.

How about if there are 10 options? There it gets trickier: there are 3.6 million possible combinations. And how about 20 options (which may well be the case if you consider the entire CSAF 2 spec)? Not a big deal…then there are only 2.4 quintillion possible combinations! My guess is most developers aren’t going to feel like devoting the next couple of millennia to writing a CSAF consumption tool unless the options are severely constrained.
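The arithmetic is easy to verify with a few lines of Python:

```python
import math

# The number of option combinations a consumption tool would have to
# handle, if (as argued above) it grows as the factorial of the number
# of independent options.
for n in (3, 5, 10, 20):
    print(f"{n} options -> {math.factorial(n):,} combinations")
```

Running this shows that 10 options yield 3,628,800 combinations and 20 options yield 2,432,902,008,176,640,000 – about 2.4 quintillion.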

I’m telling you this because this will be a very interesting exercise as well as a very important one, since as I’ve said many times, “No VEX, no SBOMs.” VEX has to be fixed if we are ever to have widespread distribution and use of SBOMs. Next week our group will organize itself, and we’ll aim for two weeks from now for our first meeting (I’ll send out a Doodle poll next week to gauge a proper time). I think biweekly meetings will be fine. In fact, the documents we create – starting with a document on VEX use cases – will be available on Google Docs for comments and suggested edits at any time. If you want to join us or at least be on the mailing list, send me an email.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] However, please give us another two weeks to get our pages set up on both the OWASP and GitHub sites. 

[ii] Currently, the best way to determine how to create a VEX document on either platform is to go through the 10 or so examples, for each platform, found in the VEX Use Cases document from last year.

Wednesday, September 20, 2023

Two good resources on SBOMs from Fortress


Fortress Information Security, an organization I have worked for as a consultant for over three years, has recently developed two great resources for anyone interested in or involved with software bills of materials: a webinar and an e-book. I recommend both of these.

WEBINAR: Software Risks: Understanding your Software Supply Chain Security

The webinar takes place tomorrow at noon ET. The link for both registration and attendance is here. If you can’t attend tomorrow, you’ll be able to view the recording later.

The speakers are both very knowledgeable about SBOMs: one is Tom Pace, Co-founder and CEO of NetRise (and a familiar name to readers of this blog; my most recent post featuring him is here); the other is Bryan Cowan, Product Owner for SBOMs at Fortress.

The topics to be covered in the webinar “include SBOM adoption drivers, SBOM risk insights, example use cases, and a business case for managing risk with SBOMs.” The webinar will last between 40 and 60 minutes, although the last 20 minutes or so will be reserved for questions.

Whitepaper: SBOM Use Cases for Asset Owners

Bryan has been busy recently, since he co-authored (with Ty Short) the above white paper, which is available here. This takes a very different approach to SBOMs than just about anything I’ve seen in writing so far (including my posts): instead of focusing on the use cases of licensing (the original SBOM use case) and software vulnerability management (the use case behind Executive Order 14028 and most articles on SBOMs), the paper is clearly based on real-world research into the possible uses of SBOMs by public and private organizations. The result is quite good and very readable.

I recommend you look at both of these! And I promise that neither one will require an excessive amount of your time.  

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, September 14, 2023

Cisco’s important VEX announcement

Warning: The post below starts on an optimistic note but ends on a pessimistic one. Parental discretion is advised.

Yesterday, Cisco made an important announcement, although people who aren’t VEX addicts probably didn’t see it and if they did, it didn’t register as important. I had been looking for the short announcement about VEX that Cisco made a few weeks ago and found it odd that it didn’t appear in searches on Google or the Cisco web site. However, I see now that yesterday’s announcement filled in the details of the previous announcement, which was just a placeholder for the real thing.

The announcement was written by Omar Santos of Cisco, who is also the leader of the OASIS CSAF project. CSAF is the leading general vulnerability reporting format, and the VEX “profile” in CSAF adapts that format to the more limited needs of VEX (mainly, it limits the large number of status designations in the full CSAF format to four: “affected” (i.e., exploitable), “not affected”, “fixed” and “under investigation”).
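For readers who haven’t seen one, here is a rough sketch in Python of what a minimal CSAF VEX document might look like, based on my (limited) reading of the CSAF 2.0 spec. The product, product ID and CVE here are invented for illustration, and a real document would also need complete “publisher” and “tracking” metadata:

```python
import json

# Sketch of a minimal CSAF 2.0 VEX document, built as a Python dict.
# The product name, product_id and CVE are invented; a real document
# requires more "document" metadata (publisher, tracking, etc.).
vex = {
    "document": {
        "category": "csaf_vex",     # the VEX profile of CSAF
        "csaf_version": "2.0",
        "title": "Example VEX statement",
    },
    # "product_tree" and "branches" are the mandatory fields discussed
    # later in this post -- and the hardest part of CSAF to master.
    "product_tree": {
        "branches": [
            {
                "category": "product_name",
                "name": "ExampleProduct",
                "product": {"name": "ExampleProduct 4.2", "product_id": "PROD-1"},
            }
        ]
    },
    "vulnerabilities": [
        {
            "cve": "CVE-2023-00666",
            # One of the four VEX statuses: known_affected,
            # known_not_affected, fixed, under_investigation.
            "product_status": {"known_not_affected": ["PROD-1"]},
            # A "not affected" status carries a status justification.
            "flags": [
                {"label": "vulnerable_code_not_present", "product_ids": ["PROD-1"]}
            ],
        }
    ],
}

print(json.dumps(vex, indent=2))
```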

The announcement is of the “Cisco Vulnerability Repository” (CVR), although I want to point out that name is a mistake. The software world has enough vulnerabilities that we don’t need to have Cisco create a repository to protect them 😊! The best name would be “Cisco VEX Repository” (which doesn’t require a change of acronym, BTW), since that’s what it is. A second-best name would be “Cisco Vulnerability Advisory Repository” or just “Vulnerability Advisory Repository”.

In any case, why do I think this is a significant announcement? For one thing, this is just the second announcement by a major software or device supplier (Cisco is both, of course) that they will be producing VEX documents for customers regularly. The first announcement was by Red Hat in February; however, within the last two months Red Hat changed their mind about VEX (nothing wrong with that, of course!) and renamed the documents they were calling VEX back to their previous name: security advisories.

However, I believe that RH will soon announce VEX documents that are a lot like the ones Cisco just announced – since they are big supporters of CSAF themselves and they have a seat on the CSAF board.

The announcement lists three types of vulnerabilities that will trigger VEX notifications:

  • Vulnerabilities found in the Cybersecurity and Infrastructure Security Agency (CISA)’s Known Exploited Vulnerabilities (KEV) Catalog
  • Vulnerabilities that Cisco has determined to be high-risk
  • Vulnerabilities whose status has been requested using CVR 

This clarifies what Cisco's VEX strategy is (and I think it's a good one): instead of putting out VEX documents that discuss the status of multiple vulnerabilities in a particular product (which is the strategy a lot of people have been thinking of), each document lists the status (exploitable or not) of one significant vulnerability in one or more products. The example in the announcement utilizes the "product family" capability of CSAF, since it applies to the entire Catalyst 9800 Series family of wireless controllers.

Perhaps the biggest problem with VEX is that user organizations already face a huge backlog of unpatched software vulnerabilities. If they are to learn about new ones, they need to know just the ones that pose a significant risk to them. Meanwhile, software suppliers who have looked at creating VEXes (and very few have even looked at it, I’m sure) have realized that determining whether a particular component vulnerability is exploitable in a particular product/version is hard and time-intensive (unless they want to adopt the “shortcut” I described in my most recent post, which will require them to patch a lot more component vulnerabilities than otherwise. I don’t know whether that’s a good trade-off or not, of course; that will depend on lots of different factors).

Cisco is clearly quite aware of this problem, which is why they have stated up front that they will only consider certain component vulnerabilities for remediation, meaning they won’t even bother to determine whether any others are exploitable or not. I think other suppliers should emulate what Cisco has done here: state very clearly which vulnerabilities they will consider for remediation and which they won’t. Their customers should appreciate this, since they won’t be burdened with a lot of patches that mitigate little risk (of course, the risk to Cisco in this strategy is that a high-risk vulnerability like log4shell will slip through their net and not be patched. But given the level of awareness of vulnerabilities in general nowadays, it’s likely that any such vulnerability will quickly be pointed out by their customers).

Regarding Cisco’s three criteria for considering a vulnerability:

1.      Almost by definition, the KEV catalog is a list of the most risky vulnerabilities

2.      Cisco’s own list of high-risk vulnerabilities is important, since presumably Cisco knows best which risks to its products are high ones; but,

3.      I especially like the third category: "vulnerabilities whose status has been requested using CVR". This gives Cisco’s customers a voice in deciding which vulnerabilities to consider for remediation. Moreover, notice the announcement doesn’t say that customers will only be able to access VEX information for products they own (in fact, it doesn’t sound like Cisco would even have any way to implement that policy, since they’re reporting VEX information by vulnerability, not by product). And because a large percentage of all organizations in the world are customers of at least one Cisco product or service, in theory they will all be able both to make suggestions for vulnerabilities to address and to see all VEX notifications (although I admit I’m guessing this is the case).

So I think this announcement might set a pattern for a lot of other organizations; at least I hope it does.

But there’s a downside to this announcement as well. It can be summarized in three questions:

1. Omar Santos is chairman of the CSAF committee. Pete Allor of Red Hat is one of the most active participants on that committee. Oracle has also started putting out CSAF VEX documents, and they also have a long history with CSAF and its predecessor format CVRF. That’s great.

However, CSAF is an extremely complex format, as a quick skim through the 100-page-plus[i] draft specification will reveal. Even more importantly, figuring out how to fill out the mandatory “product tree” and “branches” fields requires close study of about 15 pages of very dense text in the CSAF spec, and even then there are so many options available that a supplier without long CVRF/CSAF experience is unlikely to devote the time required to master them. After all, nobody has a gun to their head, requiring them to produce VEX documents at all, let alone CSAF VEXes. Why go through the aggravation?

Aha, but you don’t need to know all of CSAF to create a VEX in that format. What about the VEX profile? That limits the fields you need to put in the document (which is good), but it omits mention of the need for the “baseline” elements required in every CSAF document, including product tree and branches.

2. Then, what about a tool to create a VEX document in CSAF? There are no VEX-specific tools; in fact, I know of no tools that create any type of CSAF document – meaning they ask the supplier questions about what they want to include (products, vulnerabilities, etc.) and then put them into a CSAF document, requiring no direct CSAF knowledge on the VEX author’s side. The only CSAF tool I know of is Secvisogram. This is a good editor, but it requires knowledge of the CSAF format. Again, it will be quite a hard row for an organization to come up to speed on creating CSAF documents of any type, unless they have somebody that can devote a substantial block of their time to learning the format and creating documents with it – using Secvisogram, faute de mieux.

3. But let’s say a supplier has created a CSAF VEX document. What are their customers going to do with it? Are there low-cost, commercially supported tools available that will ingest SBOMs and CSAF VEX documents, then provide the user with a continually updated list of exploitable component vulnerabilities in a product/version they use, which has a recent SBOM? No, there aren’t. How about open source tools that do that? Again, no.

So, how about any tools that utilize CSAF documents for security analysis purposes? Here is the official list of CSAF tools. Do you see anything that meets that criterion?[ii] I don’t.

In fact, I’ve asked several large suppliers that produce CSAF documents (whether VEX or just vulnerability advisories – which is the main use case for CSAF, of course) what their users do with them. They honestly don’t know – except that some customers may have developed their own tooling for that purpose.

So, here are the two big problems with CSAF VEX (these each have several parts, and a small number of those parts also apply to the CycloneDX VEX format. However, I know it is many times easier both to create and to utilize CDX VEX documents than CSAF VEX documents):

1.      Any software or device supplier that wants to learn how to create VEX documents in CSAF needs to be prepared to spend a lot of time doing so. I doubt many suppliers will be interested in doing that.

2.      Those suppliers that distribute VEX documents would be better off developing a PDF format and sending PDF VEXes out by email (or just putting them on a portal).  I know one very large device maker that is doing that with SBOMs: They produce both JSON-based and PDF-based SPDX SBOMs and make them both available on their portal. They report much better uptake for the PDF files.

So – as often happens in this blog – we have run up against one of the “showstopper” problems for SBOMs and VEX: the lack of tooling, especially the lack of low-cost, commercially supported tools that are essential for widespread use of both SBOMs and VEX documents by organizations whose primary business isn’t software development (i.e., at least 98% of organizations, public and private, worldwide).

Can this problem be solved? It can be partially mitigated, but substantial mitigation (I won’t even use the word “full”) also requires addressing the other showstopper problems, especially the naming problem. One partial mitigation would be a “playbook” for CSAF VEX, like the “playbooks” that were developed during the NTIA SBOM initiative (here are the two most important of these: for producers and for consumers).

I know that a draft playbook for CycloneDX VEX has already been developed. However, I don’t know of any current attempt to develop a playbook for CSAF VEX. One is sorely needed.

I told you this post would be depressing, but you didn’t believe me!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I’m guessing at the page count, since the spec doesn't have page numbers. 

[ii] I believe there are a few vulnerability scanning tools that can ingest CSAF files and use the information in them to remove non-exploitable vulnerabilities from scanning results. This is certainly a useful result, but it has nothing to do with SBOMs or product components.

Thursday, September 7, 2023

Redefining VEX Part I

This is the first in a two-part series about VEX. Currently, VEX is going nowhere fast. I now know of just one organization – Cisco – that has even announced they’re producing VEXes for their products, although the only examples they linked to in their announcement were the same CSAF examples that appeared in CISA’s VEX Use Cases document, published in April 2022 (which also had CycloneDX VEX examples).

Another organization that was producing documents that they called “VEX” – Red Hat – has changed their mind about whether they are really VEXes, and has renamed them back to what they were originally called: security advisories. They explained this to the SBOM Forum in a presentation more than a month ago, but they also stated they will be putting out a new type of document soon that they are calling a VEX (and since the only published VEX definition – near the beginning of the same Use Cases document – essentially just says a VEX makes statements about vulnerabilities, it doesn’t make sense now to even debate what is or isn’t a VEX. If the author calls it a VEX, it’s a VEX. End of story).

I want to point out that I don’t blame Red Hat for any of this, since they have always been a leader in producing machine readable vulnerability notifications (you’ll notice that the notifications on the page I referenced above go back to 2001), as well as doing lots of work of benefit to the software vulnerability management community. For instance, they sit on the boards of both the OASIS CSAF group and CVE.org.

Moreover, they are one of a handful of Root CVE Numbering Authorities (CNA) and are available to help open source software projects report their own vulnerabilities, as well as proactively reporting vulnerabilities in their own products. Pete Allor, Director of Product Security for Red Hat (and an active member of the SBOM Forum), told me once that Red Hat almost considers this their duty, because of their unique relationship with the open source community.

But other than Red Hat and Cisco, I know of no organization that has made any public announcement about producing VEX documents, and I have seen no other VEX documents from any source, except for a few demonstration documents.[i]

One of the biggest problems with VEX is that proving that a vulnerability has “not affected” status (which is, of course, the main reason that VEX was developed) often requires proving a negative. For example, if the supplier believes there is no “code path” that would allow an attacker to reach the vulnerable code (in order to exploit it), they would literally need to be able to show that none of the perhaps thousands of paths an attacker might utilize to reach that code will allow them to reach it, in order to “prove” the vulnerability isn’t exploitable in the product.

However, the five “status justifications” offer a way out of this problem. Whenever the status of a vulnerability in a VEX is “not affected”, a status justification needs to accompany it. Note that these are called “justifications” in CycloneDX VEX, and there are nine of them, vs. five in CSAF. The CDX justifications are a superset of the CSAF status justifications and were developed almost two years before the CSAF ones.

While three of the status justifications suffer from the need to prove a negative (they are “Vulnerable_code_cannot_be_controlled_by_adversary”, “Vulnerable_code_not_in_execute_path”, and “Inline_mitigations_already_exist”), two of them do not. They are “Component_not_present” (meaning that the vulnerable component that appeared in the SBOM isn’t actually in the product) and “Vulnerable_code_not_present” (meaning that, even though the component is present in the product, in this case the vulnerable code has been removed from the component).

Neither of the latter two status justifications requires proving a negative. If someone is skeptical about either justification, it won’t be hard for the supplier to prove it is valid. If a supplier asserts that either one of these is the reason why they used “not affected” status, they aren’t likely to be questioned.

Therefore, a way to “tighten up” the use of “not affected” in VEX would be to require that it only be used in a case where the reason for the “not affected” designation is either “Component_not_present” or “Vulnerable_code_not_present”. In all other cases, every vulnerability should be given the status of “affected”.

Of course, there may be some suppliers who aren’t bothered by the fact that three of the status justifications require proving a negative – plus some of their customers may be quite happy to accept all five status justifications as valid.

In this case, there wouldn’t need to be any change from what is now assumed in VEX: that all five of the status justifications are valid. In fact, when there is tooling available to utilize VEX information, there could be a simple software option: Those software users (e.g. hospitals, electric utilities, and military contractors) that believe they require a higher assurance level can choose the first option described, in which there are only two valid status justifications. Those users that are comfortable with the “looser” standard can accept all five status justifications, which will probably be the default option when and if VEX “consumption” tooling appears.

How about the nine “justifications” in CycloneDX? They are:

  • code_not_present = the code has been removed or tree-shaked.
  • code_not_reachable = the vulnerable code is not invoked at runtime.
  • requires_configuration = exploitability requires a configurable option to be set/unset.
  • requires_dependency = exploitability requires a dependency that is not present.
  • requires_environment = exploitability requires a certain environment which is not present.
  • protected_by_compiler = exploitability requires a compiler flag to be set/unset.
  • protected_at_runtime = exploits are prevented at runtime.
  • protected_at_perimeter = attacks are blocked at physical, logical, or network perimeter.
  • protected_by_mitigating_control = preventative measures have been implemented that reduce the likelihood and/or impact of the vulnerability.

The CDX equivalent of “component not present” is “requires dependency”. The CDX equivalent of “vulnerable code not present” is “code not present”. High-assurance users might want to require that only these two justifications be allowed to accompany a “not affected” status, while the other seven justifications would require “affected” status.

How should these two options be positioned – i.e., the high assurance and “normal assurance” VEX options? When and if end user VEX “consumption” tooling appears, it should offer both options, although in general it would be better to make the “normal” configuration (i.e., all five status justifications are acceptable for “not affected” status) the default, with the “high assurance” option (i.e., only the “component not present” and “vulnerable code not present” justifications are acceptable for “not affected” status) available for organizations that believe they need that.
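Here is a small Python sketch of how such a consumption-tool option might work. The statement fields and function names are illustrative, not taken from either spec’s schema; the point is simply that the high-assurance choice lives entirely on the consumer side:

```python
# Sketch of the consumer-side option described above: in "high assurance"
# mode, a "not_affected" status is only honored when its justification is
# one of the two that don't require proving a negative; otherwise the tool
# treats the vulnerability as "affected". Field names are illustrative.
HIGH_ASSURANCE_JUSTIFICATIONS = {
    "component_not_present",
    "vulnerable_code_not_present",
}

def effective_status(statement: dict, high_assurance: bool = False) -> str:
    status = statement["status"]
    if status != "not_affected" or not high_assurance:
        # "Normal" mode (the likely default): trust the supplier's status.
        return status
    if statement.get("justification") in HIGH_ASSURANCE_JUSTIFICATIONS:
        return "not_affected"
    # Justification requires proving a negative -- downgrade to "affected".
    return "affected"

stmt = {"status": "not_affected", "justification": "inline_mitigations_already_exist"}
print(effective_status(stmt))                       # prints "not_affected"
print(effective_status(stmt, high_assurance=True))  # prints "affected"
```

The supplier’s document is identical in both cases; only the consuming tool’s configuration changes how the statement is interpreted.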

Perhaps the best aspect of this tooling flexibility is that it places no burden on the supplier. As long as they provide one of the standard five CSAF status justifications (or one of the nine CDX justifications) whenever the status of a vulnerability is listed as “not affected”, the end user will make the decision, through their tooling choice, of how it will be interpreted.

I feel that uncertainty regarding questions like the above is inhibiting production of VEX documents by suppliers; what I’m suggesting here may somewhat alleviate that uncertainty, although it is hardly the only problem with VEX. In the exciting Part II of this series, I’ll tell you about another important development affecting VEX, which may also help to make VEX more palatable to suppliers.


[i] You may have heard of OpenVEX, which is sometimes pointed to as a third VEX format. I think it’s a good format, but it’s for a different use case than VEX was originally developed for: clarifying for software users which of the component vulnerabilities they learn about through an SBOM should not be a cause for concern. 

OpenVEX addresses the use case of software scanners that throw off false positive vulnerability findings; it allows a supplier to list – in a machine-readable format – typical false positive results for their products. A scanner vendor can consume these documents regularly, and thus not show those false positives in their scan results. That’s a great use case and I imagine OpenVEX does it well, but it has nothing to do with SBOMs. 

OpenVEX can be used for the “original” VEX purpose as well, but it’s limited for that because it doesn’t treat version numbers independently of the product names. Instead, it creates synthetic “products” that combine the product name and the version number. This makes it impossible to create version ranges, which are essential for the original VEX use case (I explained that limitation in this post). But since, as I just said, there’s no specific definition of what a VEX is, I’m fine if OpenVEX keeps using the name for their product.

 

Saturday, September 2, 2023

A power grid attack has caused the first "homicide by power outage"

An article in today’s Washington Post led off with this sentence: “An elderly North Carolina woman’s death when her oxygen machine failed during a power outage has been ruled a homicide by the state medical examiner and blamed on what authorities said was intentional gunfire that hit power substations in her area.”

The “intentional gunfire” in question occurred during shooting attacks on two power distribution substations in Moore County, NC on December 3, 2022. The attacks left over 40,000 people without power for multiple days (which was itself a problem, since there’s supposed to be so much redundancy built into the power grid that any outage is remediated within hours, not days. At least, that’s the idea…). I wrote about these attacks in this post on Dec. 5 and this one on Dec. 7 (a date better remembered for the far more serious attack on Pearl Harbor). The attacks were imitated soon afterwards in Oregon and Washington state, although no outages were caused there.

In the second post, I included this sentence, “(The outage) certainly caused damage, and people were injured in car crashes…. We may hear later about people on oxygen at home, etc. that were victims as well.” I don’t claim prescience, but that seems to be exactly what happened in this case. An elderly woman’s oxygen compression machine, which helped her breathe at night, didn’t work because of the outage; she died as a result. I agree that’s murder, given that the outage was human-caused.

Of course, nobody has been charged with murder, because the police have so far not found whoever shot up the substations – and they may never find them. But at least this ruling serves as a warning to anyone who thinks that causing a power outage is a good way to make a political point; that was the reigning theory in December for the attacks in all three states.

These attacks were coordinated. People get drunk and shoot up substations all the time, but they don’t make a point of shooting up multiple substations at once – and they don’t usually know exactly where to shoot in order to cause an outage. These people seemed to know that.

I think that this type of attack – a physical attack on distribution substations – is more likely to cause an outage than any cyberattack, or even than a physical attack on generation or transmission facilities (which are much better protected now, thanks to the Metcalf attack and the subsequent development and implementation of NERC CIP-014). Of course, an attack on distribution would almost by definition never cause a cascading outage like the August 2003 Northeast blackout (which wasn’t caused by a deliberate attack. Why bother to attack, if carelessness and lax procedures can produce the same effect?).

In any case, I’m glad the medical examiner ruled the woman’s death a homicide. That alone may prevent some future attacks.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, August 31, 2023

Dale Peterson’s interview with Steve Springett

Recently, Dale Peterson interviewed Steve Springett, leader of the OWASP Dependency-Track and CycloneDX projects, for his podcast. I have watched a number of podcasts with Steve, and I never come away without a lot of notes; for this podcast, I took six pages’ worth. This was in part because Dale asked some really excellent questions, which showed he had done a lot of research beforehand. Between the two of them, it was quite a show.

I recommend you listen to the whole podcast, but I’d like to address three topics that came up – among many.

Contractual challenges to sharing SBOMs

Steve and Dale both agreed early in the podcast that, while SBOMs are being heavily used by software developers to learn about and manage vulnerabilities in products they’re developing, they’re hardly being distributed to or used by end user organizations at all (i.e., organizations whose primary business isn’t developing software, which is of course well over 99% of the organizations on the planet).

One of the reasons Steve pointed to for this problem (and it is a problem!) was contractual. He didn’t elaborate on that, but I can guess that, since there are no standards or official guidelines for producing or distributing SBOMs, any organization that tries to negotiate with suppliers on the terms under which they’ll provide their SBOMs won’t find this to be easy at all.

However, I’ve already discovered the solution to this problem. It’s a lot like the solution the doctor proposed to his patient who complained that if he performed an unusual arm motion, it hurt. The doctor’s solution? “Don’t do that.”

And that’s my solution to the problem of contracts for SBOMs being hard to negotiate: Don’t bother with them. Given the complete lack of standards or official guidelines for producing or distributing SBOMs, it’s simply way too early to even consider using contract language to “force” the supplier to give you exactly the type of SBOM you want, in exactly the way you want it.

However, let’s say the supplier gives you the perfect SBOM that you want. What are you going to do with it on your own? There are currently no low-cost, commercially supported tools that ingest SBOMs (and VEX documents, although they’re hardly being distributed at all today) and output a list of component risks for the user, including a list of exploitable component vulnerabilities. Yet, this is the use case that most end users have in mind for SBOMs.

This is why I recently suggested that we forget about using contract language for SBOMs for the time being (except for perhaps an NDA) and simply focus on mutual learning: the suppliers will produce the SBOMs they think their users want and their users will tell them whether and how they’re able to use them. This will be effectively a large-scale proof of concept and, as with any proof of concept exercise, there will need to be mechanisms for gathering and aggregating the lessons learned.

I know of only one active SBOM proof of concept that is exchanging SBOMs today – the healthcare PoC being sponsored by Health-ISAC. Moreover, that PoC has just around ten participants (medical device makers and large hospital organizations). Wouldn’t it be great if we could get a lot more suppliers and users, in many industries, exchanging SBOMs and learning from them – and then sharing what they’ve learned from the experience? We can do that if we forget about contract language for now, and simply focus on what we can learn.

The only way SBOMs are going to provide value to end users in the near term

After the above discussion, Dale pointed to the one solution that can lead to SBOMs (or at least the information to be derived from them) being widely used by non-developers in the next few years. The solution won’t be to have low-cost, commercially supported tools that every organization can use. I used to think that those tools would someday magically appear, but they haven’t appeared yet and I know of none that are even on the horizon.

More importantly, there are serious issues like the “naming problem” (which Dale brought up later in the podcast) and how to get useful VEX information out (I plan a post on this problem next week), which are currently standing in the way of easy-to-use consumer tools. These problems won’t ever be completely solved, but they can be addressed to a degree that makes usable consumer tools possible. We’re simply not there yet, and won’t be for years.

However, Dale pointed out that he sees a more realistic path to SBOMs being usable by consumers in the near term: third party service providers. I agree with him 100% on this. The fact is that the tools required to analyze SBOMs and VEX information, and then report component risks to end users, are available now. However, almost none of them are low cost, commercially supported, or both – and that’s what will be needed for true consumer tools. Moreover, an end user today would need to string together several open source tools (and address data formatting issues, etc.) to get the required functionality. Few businesses or government agencies have the technical chops and available time to do this.

However, third party service providers are a different story. They know that whatever tooling they put in place will allow them to provide this service to many end user organizations. The economic picture will be very different for them.

Exactly who will these service providers be? I honestly don’t know now, but I do know that, given the high and growing interest in learning about software component vulnerabilities, they will appear. The important technical problems have all been solved in principle. Even the naming problem has been “solved” by hundreds or even thousands of software suppliers and their consultants, since there are lots of ad hoc workarounds available, including AI/ML routines, fuzzy logic, etc. Organizations that don’t develop software won’t want to invest in developing these routines for their own use, but again, the economic picture changes drastically for service providers, who can amortize the cost of doing so over hundreds or thousands of customers and products.

There’s one important aspect of these service providers that didn’t come up in the podcast, but is one that I’ve written about once or twice: It’s clear to me (at least) that end users shouldn’t be responsible for paying the service providers for their analysis. After all, the supplier is the one that chose the component and included it in their product. If they chose a component that poses big risks for end users, why should the end users have to find this out for themselves? More importantly, if there are 10,000 customers of a product, why should they each have to pay a service provider to tell them that CVE-2023-12345 is exploitable in the product they use, and they should immediately contact the supplier to find out when they’ll patch it?

Instead, why doesn’t the supplier pay the service provider to do this and distribute the results to their users? I remember when it first became clear in the 1990s that software vulnerabilities weren’t rare events (as had been the general opinion), but could be counted on to appear constantly in just about any product. At first, software suppliers tried to charge their users for the privilege of receiving patches for the vulnerabilities in their product – often through saying that only users who pay for maintenance would receive patches.

Of course, this idea quickly faded, and now I don’t know a single supplier that doesn’t develop and distribute security patches to their customers for free. I expect the same thing to happen with SBOM analysis. While I don’t think most suppliers will want to make the investment in providing the service I described above for their customers, I think they will gladly pay a third party to provide that service on their behalf. Especially when all their competitors start doing it and they realize they won’t be in business much longer unless they do as well. Just like what happened with security patches.

Side-channel attacks

Toward the end of the podcast, Dale brought up the topic of VEX. He had heard Steve say on a different podcast that “VEX is a missed opportunity because it doesn’t represent risk.” He asked what Steve meant by that.

Steve’s first response was fairly complicated (and led to a more complicated discussion than I want to address at the moment), but his second response was very straightforward: VEX considers a single vulnerability in isolation and provides a binary answer to the question of whether that vulnerability is exploitable in a particular product (and not just a particular product, but a particular version of that product).

However, Steve pointed out that hackers often don’t just try to exploit one vulnerability and then give up if they don’t succeed in compromising the product using that vulnerability. Instead, they often try to exploit a different vulnerability. Then, if they’re successful with that attack, they “pivot” to exploiting the vulnerability they had originally aimed for. This is known as a “side channel” or “chained” attack. When you consider these attacks, then VEX’s answer to whether a vulnerability is exploitable in the product or not becomes more complicated – since you need to consider which side channel attacks might be used to exploit that vulnerability, not just direct attacks.

This was a question that was discussed in the VEX working group when it was under NTIA (I don’t think we have discussed it much under CISA). Our feeling at the time was that, if we started to consider side channel attacks, then essentially every vulnerability would be exploitable. Why even have a VEX in that case? Instead of spending their time putting out VEX information, a supplier would just need to admit they need to patch every component vulnerability, even though they know that over 90% of them probably aren’t exploitable.

Moreover, end users would have to accept that, even though on average they currently apply only about 15% of the patches they receive for products installed on their networks, they would now face a huge influx of additional patches to add to their already voluminous patching queue. Since we (the VEX working group) didn’t want to give those answers to either suppliers or end users, we decided to proceed on the assumption that only directly exploitable vulnerabilities need to be designated as exploitable in VEX.

However, enough people (besides Steve) have raised this issue that I now think we need to admit that there are more sophisticated hackers out there who know how to use side channel attacks. So, for some users (especially those in high assurance use cases, like military contractors, hospitals, and OT in general), it does make sense to patch any component vulnerability that is known to be present in the product, and for users to apply those patches.

Fortunately, there are some cases in which even a side channel attack won’t be able to exploit a particular vulnerability in a product, meaning that VEX can provide an acceptable solution, even for high assurance users. I’ll discuss that soon, hopefully next week.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Thursday, August 24, 2023

“British cuisine”, “Device security”, and other oxymorons

 

In the past couple of months, it seems events have been conspiring to tell me two things:

1. Security in almost all types of intelligent devices is in terrible shape and borders on being nonexistent. However,

2. Even if you want to take security matters for a device into your own hands (e.g., do your own vulnerability management), you’re going to have a hard time doing that.

The first event was when I wrote this post, which reported that a person involved in product security for a major medical device maker told me he had never reported a vulnerability for one of their devices, and in fact wouldn’t know how to report one. Since most CVE reports (which form the basis for all listings in the NVD and the great majority of those in other vulnerability databases) are created by the developer of the affected product (whether a software product or an intelligent device), this probably means that none of that company’s products lists any vulnerabilities in the NVD.

In fact, since a CPE name is only created when a vulnerability is reported for a product (more correctly, for a particular version of a product), this probably means nothing will be found when someone searches for that product in the NVD. This isn’t good, but it’s made worse by the fact that the message that displays when that happens is “There are 0 matching records.” This is the same message that displays when there is a CPE name for the product, but no vulnerabilities have been reported against it. In other words, a user who searches on a product and receives that message should either be elated that it’s a “vulnerability-free” product or worried that it might be loaded with vulnerabilities that the supplier simply doesn’t report. How do you tell which conclusion is correct? Perhaps a coin flip?

My friend defended the fact that his company doesn’t report product vulnerabilities by saying they would only report a vulnerability if they hadn’t patched it. However, this gets the situation completely backwards: A responsible supplier (either of software or devices) usually only reports vulnerabilities they have patched, since to report an unpatched vulnerability might invite hackers to try to exploit that vulnerability whenever they encounter that product. But the responsible supplier also patches vulnerabilities in their products as quickly as possible.

The above scenario – in which a customer is told about a vulnerability in a product they use and therefore applies the patch that has been developed to mitigate it (usually the patch is linked in the same report that announces the vulnerability) – falls down in the case of intelligent devices, since the customer almost never has the option of applying a patch for a particular vulnerability. Instead, the customer has to wait for the next time the supplier updates all the software in the device (or at least all of it that’s changed since the last device update).

I used to think that it wasn’t such a terrible thing that device users would have to wait for the next device update to learn that a vulnerability had been patched. I thought this, because I assumed that device manufacturers update the software in their device about every month. If they did this, it might even justify not revealing the vulnerability until after it’s patched on the next update, since waiting a couple of weeks for a patch to be applied is probably close to normal in the software world.

However, when I asked my friend how often his company updates the software in their devices (which are medical devices, remember. They aren’t baby monitors or recipe servers), he said they do it annually. In other words, let’s say a device manufacturer discovers a serious vulnerability in one of their devices the day after they released their annual update. Assuming they’re on an annual update schedule, this means the vulnerability won’t be patched in the field for 364 days.

Will the manufacturer let their customers know about this serious vulnerability and the fact that it won’t be fixed in the device they use for another 364 days – so they can at least apply some alternative mitigation to the device, even perhaps pulling it from their network altogether? If they’re like my friend’s company, the answer seems to be no. My friend said they won’t issue a notification for the vulnerability until it’s patched in the field. Where does that leave their customers if they’re attacked using that vulnerability during those 364 days? How about SOL?

So, the manufacturer of a device needs to report vulnerabilities for the third-party software and firmware products included in their device, as well as for the code in the device that they wrote themselves (the first-party software). Moreover, they need to report all these vulnerabilities against a CPE that identifies the device itself, not against the CPEs of the myriad software or firmware products contained in the device.

The reason for doing this is obvious (to everyone except the device manufacturers, it seems): Just like for a “standalone” software product, the customer of an intelligent device needs to be able to learn about all the vulnerabilities in their device in a single search. Even if they have a current SBOM for the device (a dubious assumption today, to be sure), they shouldn’t have to invest all the time and effort required to look up each of those products in a vulnerability database (assuming they even have a searchable identifier, which means a purl or CPE). The supplier should be tracking all vulnerabilities for third- and first-party software and firmware in their device, and posting them all to the single CPE for the version of the device where those vulnerabilities are found.

What about the fact that a lot of the vulnerabilities that a manufacturer learns about (in a vulnerability database like the NVD) for the individual software and firmware products installed in their device won’t in fact be exploitable in the device itself – so they don’t actually pose a risk to their customers? Should the supplier still report every vulnerability regardless of its exploitability status, and then issue VEX information to let their customer know which of those vulnerabilities aren’t exploitable?

Were there any low-cost, easy-to-use, commercially supported tools available for software users that would use VEX information to narrow down a list of vulnerabilities in a particular product/version to just the exploitable ones, I would say by all means, that’s the way to do it. However, given that this is far from being the case, I’m willing to concede that the manufacturer can apply the exploitability information on their own and just report the small percentage of component vulnerabilities that are exploitable in their device, not the huge percentage (usually around twenty times the percentage of exploitable vulnerabilities) that aren’t. That will at least make the job much more manageable.

However, another thing that device suppliers need to do is provide much more regular updates to their devices. Quarterly updates would be a big improvement over annual ones, so maybe that’s what they should aim for today. But the manufacturers also need to let their customers know about important new vulnerabilities in their devices and not wait until the next device update to do this. That way, if a customer wants to implement an alternative mitigation during the time remaining before the next update, they’ll be able to do this (the manufacturer should suggest alternative mitigations in their vulnerability report).

At the end of his recent presentation to the SBOM Forum, Tom Pace of NetRise admitted that, given the large number of vulnerabilities likely to be found in most intelligent devices, it is unrealistic to expect a manufacturer to quickly patch them all; manufacturers will need to figure out a way to triage the vulnerabilities (perhaps using EPSS scores or presence on the CISA KEV list) and patch the most important ones first. (As he pointed out to our group last year, he examined one device – which had no vulnerabilities reported for it in the NVD – that very likely had at least 40,000 unpatched vulnerabilities.)

But he did point out that some device manufacturers have an alternative strategy for reducing the number of vulnerabilities they’ve found in their devices, without having to patch them at all. He showed us a message his company had received when they tried to follow up on a previous notification to a manufacturer about a serious vulnerability they’d identified in one of their devices. It read, “Your message to security@(manufacturername).com has been blocked.”

Yep, that’ll do the job, too. And it’s a lot cheaper than patching. 😊

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.