A couple of days ago, the panel
I’ll be speaking on as part of this year’s RSA Conference taped (if you’ll
excuse my use of such an antiquated verb. Should I say “videoed”?) our session,
for replay during our designated time on May 20. I want to note that, even
though we recorded about 35 minutes of discussion, there will still be plenty
of time for questions, through two means:
1. People listening to the first session will be able to submit questions via chat, and panel members will answer them the same way (since we won’t be speaking at that time). I believe there will be a little time right after the recorded presentation to read some of the Q’s and A’s.
2. As I mentioned in the post linked above, there will be an “Additional Audience Engagement” session immediately after our session (running another 40 minutes). I don’t know exactly what this entails, but it sounds like it might be live Q&A, with perhaps the questioners speaking and being seen (if desired) in real time. In any case, it should be an opportunity to go into a lot more detail on SBOMs and DBOM – and supply chain security in general – since the 35 minutes we recorded couldn’t do very much of that.
For the recording, all three
panelists prepared short discussions related to our particular topic (mine is
SBOMs, of course). I prepared five of these, but there was only time for four
of them. Since the fifth topic is quite important, I’d like to share now what I
prepared.
This topic is one I wrote
about in the fall: the potential problems raised by the fact that the
majority of vulnerabilities that might be found in any component of a piece of
software (that is, when the component is considered as a standalone product)
aren’t in fact exploitable when the component is incorporated into a particular
software package. Fortunately, the problem is being rapidly addressed by the Software
Transparency Initiative “multistakeholder process” run by the National Telecommunications and Information Administration (NTIA).
The post I wrote in the fall on
this topic was fairly long and somewhat meandering (what else is new, you ask?),
whereas this week I prepared a very concise (for me) and focused discussion. It
probably took longer to write than the post in the fall, but that just goes to
prove the validity of the old adage “I didn’t have time to write you a short
letter, so I wrote a long one.”
Here’s the question I was
answering:
It's agreed that the majority of
vulnerabilities in software components aren't in fact exploitable in the final
product. However, if end users hear about these vulnerabilities, they'll
inundate the supplier with support calls about them, even though there's
nothing for them to worry about. Suppliers are very concerned about this. How
can this be avoided?
This is a problem that needs to be
addressed for SBOMs to be widely used. But I’m also pleased to say that the
problem is well on its way to being solved through the NTIA effort.
Here’s the problem: One study in
2020 said the average software product contains 135 components. Let’s say that
each component has an X percent chance of developing a vulnerability during a
particular year, and the product itself has an X percent chance of doing so
(and by “the product itself”, I mean the code that was written by the company
whose name is on the software you buy, not by the developer of one of its
components).
Since the product contains 135
components, this means it’s 135 times more likely that a component will develop
a vulnerability than that the software itself will. In other words, without an SBOM for the product, you can only learn about the vulnerabilities in the supplier’s own code – just 1/136th of the total vulnerabilities that could be identified in the product and its components. You really don’t know very much about that product’s vulnerabilities at all.
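The arithmetic above can be sketched in a couple of lines of Python (the 135-component figure is the study’s average; the equal likelihood of a vulnerability in each component and in the supplier’s own code is the simplifying assumption made above):

```python
# Simplifying assumption from above: each of the 135 components, and the
# supplier's own code, is equally likely to develop a vulnerability in a year.
components = 135
total_sources = components + 1  # 135 components plus the supplier's own code

# Without an SBOM, a user only learns about vulnerabilities in the
# supplier's own code, which is a small fraction of the total.
visible_fraction = 1 / total_sources
print(f"Visible without an SBOM: 1/{total_sources} = {visible_fraction:.1%}")
```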
This sounds pretty scary, right?
It does, but the good news is that a large percentage (probably more than half)
of vulnerabilities that exist in a component as it stands alone aren’t actually
exploitable when that component is included in a particular product. In other
words, if a hacker tried to attack the product using one of these
non-exploitable vulnerabilities, they would never succeed.
Why does this happen? There are many possible reasons. A simple one to understand: suppose the component in question is a library with ten separate modules, and the developer of the product included just one of those ten modules, because that’s all they needed. However, the SBOM they put out lists the entire component as being included, since there’s no way to indicate that just one module is used.
This means that if the vulnerability is in one of the other nine modules, it
obviously won’t be exploitable in this product.
But this also means that a user of
the product might notice this component in the SBOM and look it up in the
National Vulnerability Database. They’ll find a CVE for the component and
they’ll immediately worry that this makes the product itself vulnerable to that
CVE. The user will call the supplier’s help line and demand to know when this
vulnerability will be fixed. At that point, they’ll be told that the
vulnerability is actually in a different module than the one implemented in the
product – so there’s nothing to worry about.
No harm, no foul? Not exactly. If
the supplier has hundreds of people calling in with the same question, they
have a problem. Multiply that by all the other component vulnerabilities which
are non-exploitable, and the fact that one developer can sell hundreds or
thousands of products, and the developer now has a massive problem. In fact,
the Director of Product Security for one major software developer, which sells
more than a thousand products, estimated in an NTIA meeting recently that
solving this problem would save them literally thousands of unnecessary support
calls every month.
This is one reason software
developers give for being reluctant to start distributing SBOMs. And they’re
absolutely right to be worried about this. But there’s another problem that
would show up on the user side. The user in our example might not call the
supplier, but they might simply start scanning for that CVE in the product
they’ve installed.
Of course, the scans will turn up
negative. In fact, the majority of scans this user conducts for component
vulnerabilities will turn up negative. At some point, this user will become
quite frustrated, and they’ll probably utter some choice words about what the
supplier can do with their SBOMs. At that point, they’ll vow never to look at
another SBOM again.
In other words, both software
suppliers and software users need to have this problem addressed, if they’re
going to distribute or use SBOMs successfully.
Fortunately, the solution is
conceptually quite simple: There needs to be a mechanism so that the supplier
can notify all of their users whenever a situation like this occurs: that is,
whenever a CVE is identified for a component of one of their products, yet that
vulnerability isn’t in fact exploitable in the product itself. This
notification needs to be distributed through the same channels as the SBOM, to
make sure it reaches everyone who needs it. This should solve – or at least
greatly mitigate – both the supplier and the user problems.
I’m pleased to say that one of the
working groups within the NTIA Software Transparency Initiative – which by the
way means the group of industry volunteers working on SBOMs under NTIA’s
auspices – is well on the way to solving this problem. They’re developing
another machine readable document in addition to the SBOM itself. This document
is now called a VEX, standing for “Vulnerability Exploitability eXchange”. Suppliers
will send the VEX to the same people and organizations that receive their
SBOMs. The VEX will have multiple uses, but perhaps the most important one will
be simply stating – in a machine-readable format – that even though the product
contains Component A and Component A is listed in the NVD as vulnerable to CVE
B, CVE B isn’t in fact exploitable in their product – usually because of how
the supplier has implemented Component A (there will be room for the supplier
to enter a textual explanation of why this is the case).
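As a rough illustration only – the actual VEX format and field names are still being worked out by the NTIA subgroup, so everything below is invented – such a machine-readable statement might carry information like this:

```python
# Hypothetical sketch of a VEX-style statement. The field names, product,
# and CVE identifier are all invented for illustration; they are not the
# actual format under development at NTIA.
vex_statement = {
    "product": "ExampleProduct 4.2",   # hypothetical product name
    "component": "Component A",        # component listed in the SBOM
    "cve": "CVE-2021-00000",           # hypothetical CVE identifier
    "status": "not_exploitable",
    "justification": "The vulnerable module of Component A is not "
                     "included in this product's build.",
}
print(vex_statement["status"])
```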
Of course, perhaps the most important
issue will be making sure that the VEX documents are properly incorporated into
the user’s vulnerability management process, so that they don’t either a) keep
calling the supplier about unexploitable vulnerabilities, or b) keep scanning
for unexploitable vulnerabilities and getting frustrated when nothing shows up.
This will require automation on
the user side. I anticipate that third-party services will spring up to address
this problem, as well as the general problem of making SBOMs easily usable for
vulnerability management (and other purposes). I also anticipate that suppliers
of software tools that will “ingest” SBOMs for their own purposes – such as
vulnerability management and configuration management tools – will make sure
their software can ingest VEX documents as well.
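Here is a minimal sketch of what that ingestion might look like on the user side, assuming the tool already has the list of CVEs reported for the product’s components and a set of VEX-style statements from the supplier; all names and data shapes here are hypothetical:

```python
# Hypothetical sketch: suppress component CVEs that a VEX-style document
# marks as not exploitable in this particular product. Data shapes invented.
def exploitable_cves(component_cves, vex_statements):
    """Return only the CVEs not covered by a 'not exploitable' statement."""
    suppressed = {
        s["cve"] for s in vex_statements if s["status"] == "not_exploitable"
    }
    return [cve for cve in component_cves if cve not in suppressed]

component_cves = ["CVE-2021-0001", "CVE-2021-0002", "CVE-2021-0003"]
vex_statements = [
    {"cve": "CVE-2021-0002", "status": "not_exploitable"},
]
# The user's tool now only flags the CVEs that actually matter, so nobody
# calls the help line or scans for the suppressed one.
print(exploitable_cves(component_cves, vex_statements))
```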
I can state with great confidence
that this problem will be “solved”, at least in principle, in 6-9 months. There
are no technical issues involved here, just procedural choices to be made. The VEX
subgroup working on this problem is considering both the format and the necessary
fields, as well as best practices for creating and distributing VEXes.
And – since the NTIA approach is
to “build the car while driving it” – their ideas will be tested and validated
(or not) by the ongoing Proofs of Concept for particular industries. The Healthcare
Proof of Concept – which has been testing SBOM production and use since 2018 –
will soon start testing production and use of VEXes as well. I also hope the Energy
Proof of Concept – due to start perhaps in May – will test VEX production
and use, although perhaps not from the outset.
Any opinions expressed in this
blog post are strictly mine and are not necessarily shared by any of the
clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would
love to hear from you. Please email me at tom@tomalrich.com.
This is a comment that Kevin Perry emailed me on 4/3. I agree with it. This is perhaps the most likely reason why a vulnerability in a component considered on its own isn't exploitable when that component is incorporated into a software product.
One of the reasons a vulnerable component is likely not a concern is that its entry point is not generally externally exploitable. To explain… Many software vulnerabilities require a specially crafted input sequence, either input data or calling parameters, to trigger the exploit. A well written program that calls a particular routine can be expected to invoke the component software properly and thus will not trigger the vulnerability. To be exploited, the vulnerable software component would have to somehow be externally reachable (such as raising a listening port on the network or requesting input of some sort) or the calling program would have to be manipulated in such a way as to improperly invoke the component.