Regular readers of this blog (both of you!) know that for more than a year I have been concerned about the “showstopper” problems that are inhibiting wide (or even narrow) distribution and utilization of SBOMs by organizations whose primary function is not software development (you know, banks, restaurant chains, baseball teams, police departments, architecture firms…the rest of us).
I say “non-developers” because some developers are already using SBOMs heavily to manage risks in the products they’re developing. However, what is
strange is that I honestly can’t identify a single developer that is regularly
providing SBOMs to their customers (since an SBOM can no longer be trusted once
the software has been upgraded or even patched, the developer needs to provide
a new SBOM with at least every major and minor update of their product) –
although I’ll admit I have no visibility into what military and intelligence
agency suppliers are doing.
What’s holding SBOMs back? One obstacle is the naming
problem. An informal group I lead, the SBOM Forum – which is, I’m happy to
say, now the OWASP SBOM Forum[i]! – developed a paper
almost exactly a year ago that describes how to significantly mitigate this problem
in the NVD. The NIST team that runs the NVD has told us they would like to
implement what we describe in the paper, but they obviously have a lot on their
plate nowadays, such as wondering whether they’ll have a paycheck next week
(for the second time this year, I might add).
Thus, while we will certainly
engage with the NVD whenever they’re ready and able to do that, the OWASP SBOM
Forum isn’t losing sight of the real goal: The US and the rest of the world need
a truly global
vulnerability database, one that is maintained and funded internationally.
We’re now talking with people at
ENISA (the EU cyber agency) about their potentially using our recommendations
for NIST as the basic design for the vulnerability database they’re developing
from scratch – and which they hope to have up and running in two years (this is
eminently doable, in my opinion). The project is already funded (although probably
not on the scale required for a global database) by Article 12 of the EU NIS 2
legislation, which came into effect last year.
Thus, we’re making progress on the
naming problem. But, until this week, we hadn’t even started to address the second
serious showstopper problem for SBOMs: the fact that, three years after the VEX
idea was first discussed in the NTIA SBOM initiative, there is still no clarity
on what VEX is and how tools for VEX production and consumption should work. Therefore,
VEX is going nowhere at the moment.
However, there is clarity that
SBOMs will never go very far until VEX is made to work. This is because
suppliers have seen the statistics stating that over 90% of component
vulnerabilities in a software product aren’t exploitable in the product itself.
The suppliers are worried that, the day after they release their first SBOM,
their help desk will be overwhelmed with calls and emails from angry customers
demanding to know when they will patch CVE-2023-00666, which has a CVSS score
of 10.0 and will probably get them fired if they don’t get it fixed by…how
about 2PM this afternoon?
Of course, the help desk people
will patiently explain to caller after caller that they don’t need to worry
about this vulnerability, because the library module that contains it was never used in their product. But maybe, after
they reach the 70th such caller in one morning, they will begin to
lose their patience…and by the end of the day they’ll submit their resignations.
Yet, I don’t know of a single
supplier that has started to produce VEXes on anything but an experimental
basis. Cisco recently announced
they will start producing VEXes for customers (not for public viewing), but
didn’t say how often they will be updated.
While it’s nice to have one-off
experiments, VEXes will be needed even more frequently than SBOMs, since new
vulnerabilities appear all the time and customers need to know about them in as
close to real time as possible. This is why I’ve proposed a VEX server and API, which would eliminate, or at least greatly reduce, the need for VEX documents and take a huge burden off suppliers. It’s not at all
unreasonable to expect VEXes to be updated or issued multiple times a day.
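To make “as close to real time as possible” concrete, here is a purely hypothetical sketch, in Python, of what querying such a VEX server might look like. To be clear: no such server or API exists today, and the host name, endpoint, query parameters and response fields below are all invented for illustration.

import json
import urllib.parse
import urllib.request

# Hypothetical only: there is no agreed-on VEX server or API yet. The host,
# path, query parameters and response fields are invented for illustration.
product = "pkg:generic/example-corp/example-product@4.2"
cve = "CVE-2023-00666"

url = ("https://vex.example-supplier.com/api/v1/status?"
       + urllib.parse.urlencode({"product": product, "cve": cve}))

with urllib.request.urlopen(url) as resp:
    answer = json.load(resp)

# An imagined response would carry the same information a VEX document does:
# {"cve": "CVE-2023-00666", "status": "not_affected",
#  "justification": "vulnerable_code_not_in_execute_path"}
print(answer["status"], answer.get("justification"))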
But even among the very small
number of experimental VEX documents that have been produced and published,
there is no consistency regarding what is covered in a VEX, or even the format
for it. This isn’t surprising, given that there are two “platforms” on which VEX can be produced now, CycloneDX (CDX) and CSAF, and neither of them specifies a format for VEX. This has led
to wildly differing documents, all bearing the name “VEX”. Given that there is
no consistency in the VEX documents being produced, is it any surprise that there
are literally no tools available to ingest and utilize VEXes? True, there are plenty
of tools that can read the JSON of a VEX and make it look a little prettier for
the reader, but why should a supplier produce a machine-readable VEX document
if it’s just going to be read by humans? They could save a lot of time (and
some money) by putting what they have in a PDF and emailing that to their
customers.
Last week, the OWASP SBOM Forum
decided it was time for us to lead development of VEX “playbooks” – one for
CycloneDX VEX and the other for CSAF VEX (the two platforms are so different
that there could never be a single VEX playbook that will encompass both
platforms). Each platform will have a Producer’s Playbook and a Consumer’s Playbook.
These playbooks will be intended mainly to describe to toolmakers how VEX documents should be produced and utilized.
They will be constructed with enough rigor and detail that there should be no question
about what needs to be in the VEX and how it should be presented. Since we will
be creating the producer and consumer playbooks at the same time, we should avoid the common pitfall of letting the production and consumption tool specs fall out of sync.
The two platforms will require very
different levels of effort. CycloneDX VEX is much easier to understand than CSAF
VEX, since it is built on the same platform that CDX SBOMs utilize (as well as CDX
HBOMs, SaaSBOMs, MLBOMs, OBOMs, etc. See the CDX site for the full list of
document types that are built on the same framework). There are already plenty
of tools that can both create and read CDX documents; the problem is those
tools need to be constrained so they only produce and consume VEX documents,
rather than making the supplier figure out for itself how to create a VEX in
CDX format.[ii]
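To make that concrete, here is a minimal sketch of what a CycloneDX VEX might look like, based on the “vulnerabilities” object introduced in CycloneDX 1.4. The CVE (borrowed from the joke above), the component reference and the detail text are invented for illustration; pinning down exactly which fields are required, and how they must be filled in, is precisely what the playbook will do.

import json

# Minimal sketch of a CycloneDX VEX, using the CycloneDX 1.4+ "vulnerabilities"
# schema. The CVE, the component reference and the detail text are invented.
cdx_vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "vulnerabilities": [
        {
            "id": "CVE-2023-00666",
            "source": {"name": "NVD"},
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "The vulnerable module is never loaded by this product.",
            },
            # "affects" points at the product or component in the
            # corresponding SBOM, typically by bom-ref or purl.
            "affects": [{"ref": "pkg:generic/example-corp/example-product@4.2"}],
        }
    ],
}

print(json.dumps(cdx_vex, indent=2))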
However, understanding CSAF is
much harder, since the spec
is about 80-100 pages long and published in small type. If you look at the VEX profile
in CSAF, it seems simple, since it just lists a small number of fields.
But it leaves out two mandatory fields (required in all CSAF documents) that are by far the hardest to understand in CSAF: “product tree” and “branches”.
The description of these goes on for about 10-15 pages in the spec, and even then,
it’s not likely that organizations with no experience in CSAF will be able to understand
them quickly.
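For readers who haven’t waded into the spec, here is a rough sketch, reflecting my own (limited) reading of CSAF 2.0, of how the “product tree” and “branches” structures nest and how a vulnerability entry refers back to them. The vendor, product, IDs and CVE are invented, and the mandatory “document” metadata section is omitted to keep the sketch short; treat this as illustration, not as a playbook recommendation.

import json

# Rough sketch of CSAF 2.0 product_tree/branches nesting, per my reading of
# the spec. Names, product IDs and the CVE are invented; the required
# "document" metadata section is omitted for brevity.
csaf_vex_fragment = {
    "product_tree": {
        "branches": [
            {
                "category": "vendor",
                "name": "Example Corp",
                "branches": [
                    {
                        "category": "product_name",
                        "name": "Example Product",
                        "branches": [
                            {
                                "category": "product_version",
                                "name": "4.2",
                                # product_id is how the rest of the document
                                # refers to this particular version.
                                "product": {
                                    "product_id": "CSAFPID-0001",
                                    "name": "Example Product 4.2",
                                },
                            }
                        ],
                    }
                ],
            }
        ]
    },
    "vulnerabilities": [
        {
            "cve": "CVE-2023-00666",
            "product_status": {"known_not_affected": ["CSAFPID-0001"]},
            "flags": [
                {
                    "label": "vulnerable_code_not_in_execute_path",
                    "product_ids": ["CSAFPID-0001"],
                }
            ],
        }
    ],
}

print(json.dumps(csaf_vex_fragment, indent=2))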
Fortunately, the group working on
the playbooks will include staff members with long CSAF experience from Red
Hat, Oracle and Schneider Electric, as well as perhaps staff members from
Microsoft, Google and Cisco. Our challenge won’t be to understand CSAF, but to
determine the optimal configuration of the “product tree” and “branches” fields
(these seem to be related to other fields as well, although my understanding of
CSAF is very limited). We may decide that we need to constrain those fields
very tightly, perhaps by constraining each VEX document to addressing only one product (although it will have to address multiple versions of that product; being able to represent and interpret version ranges automatically would be
very desirable). Again, our goal is to have a VEX spec that won’t need much if
any interpretation on the consumption side.
Why am I so worried about
constraining the spec? It’s simple math: a consumption tool will need to be
able to interpret every combination of options that might be thrown at it – and
the number of combinations is the factorial of the number of independent options.
If there are three options, there are 3 × 2 × 1 = 6 possible combinations. If there are five options, there are 5 × 4 × 3 × 2 × 1 = 120 combinations.
How about if there are 10 options?
There it gets trickier: there are about 3.6 million possible combinations. And how about 20 options (which may well be the case if you consider the entire CSAF 2 spec)? Not a big deal…then there are only about 2.4 quintillion possible combinations! My
guess is most developers aren’t going to feel like devoting the next couple of millennia
to writing a CSAF consumption tool unless the options are severely constrained.
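If you want to check that arithmetic (taking the factorial model of option combinations at face value), a few lines of Python will do it:

import math

# Combinations under the post's assumption that the count grows as the
# factorial of the number of independent options.
for n in (3, 5, 10, 20):
    print(f"{n} options -> {math.factorial(n):,} combinations")

# 3 options -> 6 combinations
# 5 options -> 120 combinations
# 10 options -> 3,628,800 combinations
# 20 options -> 2,432,902,008,176,640,000 combinations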
I’m telling you this because it will be a very interesting exercise as well as a very important one, since, as I’ve said many times, “No VEX, no SBOMs.” VEX has to be fixed if we are ever to have widespread distribution and use of SBOMs. Next week our group will organize
itself, and we’ll aim to hold our first meeting two weeks from now (I’ll send out a Doodle poll next week to find a suitable time). I think biweekly meetings
will be fine. In fact, the documents we create – starting with a document on
VEX use cases – will be available on Google Docs for comments and suggested
edits at any time. If you want to join us or at least be on the mailing list,
send me an email.
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.
[i] However, please give us another two weeks to get our pages set up on both the OWASP and GitHub sites.
[ii] Currently, the best way to determine how to create a VEX document on either platform
is to go through the 10 or so examples, for each platform, found in the VEX
Use Cases document from last year.