When software bills of materials (SBOMs) first came into regular use for vulnerability management, organizations had to change the way they thought about software supply chain risks – if they were thinking about them at all (most probably weren’t). There’s no historical marker for the day this happened, but I’d nominate the day in 2012 when Steve Springett posted the first version of Dependency-Track. That tool was based on the concept of a BOM, before the term SBOM had even been coined.
The big change was that there was now a distinction between a vulnerability being present in a software product and its being exploitable in the product; previously, the two terms would have been treated as synonyms. However, once an organization could learn about vulnerabilities applicable to components in their software products (the first users of Dependency-Track were developers, and the great majority of users today probably still are), they discovered that many vulnerabilities that applied to a component considered as a standalone product were no longer applicable once the component had been installed in their product; in fact, this was true of the great majority of component vulnerabilities.
The word used to describe the difference between the two cases was exploitable. If a vulnerability is exploitable in a product, it is not only physically present in the product but can actually be attacked by a hacker who is up to no good. If a vulnerability is physically present in a component but can’t be successfully attacked in the product itself, it isn’t exploitable in the product.
Of course, the VEX format was developed for precisely this purpose: for a
supplier to notify their customers that, even though a component in one of
their products (which is listed in an SBOM) is noted as subject to a particular
vulnerability in the National Vulnerability Database (NVD), the product itself
isn’t subject to this vulnerability. Therefore, the customer shouldn’t waste time looking for the vulnerability – and BTW, they shouldn’t tie up the supplier’s help lines asking about it.
So, one of the supplier’s jobs is to continually search the NVD (and other vulnerability databases) for vulnerabilities in components they have installed in their products. They then need to determine whether each component vulnerability is exploitable in the product itself; that determination usually turns on how the component was incorporated into the product.
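To make that concrete, here’s a minimal Python sketch of what such a monitoring step might look like. The endpoint and the cpeName parameter are the real NVD CVE API 2.0; the component inventory is a made-up placeholder, and a production tool would also handle paging, rate limits and an API key.

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Hypothetical inventory: CPE names of components installed in one product.
COMPONENT_CPES = [
    "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*",
]

def cves_for_component(cpe_name: str) -> list[str]:
    """Return the CVE IDs the NVD currently associates with a component."""
    resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

for cpe in COMPONENT_CPES:
    for cve_id in cves_for_component(cpe):
        # Each hit means the vulnerability is *present* in the component;
        # whether it's exploitable in the full product is a separate question.
        print(cpe, cve_id)
```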
When the supplier discovers that
one of these vulnerabilities isn’t exploitable, they should issue a VEX
document stating that fact. The tooling on the user’s end (or at a third-party
service provider that performs this service for the user) should maintain a
list of exploitable vulnerabilities in each software product and version they utilize,
and remove from this list any non-exploitable vulnerabilities. This is
important, since probably over 90% of component vulnerabilities aren’t
exploitable in the full product. Having an up-to-date list of only the exploitable vulnerabilities will save staff a lot of time that would otherwise be wasted searching for the non-exploitable ones, or calling the supplier to ask when they’ll be fixed.
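Here’s a deliberately simplified Python sketch of that pruning step. The data structures and statuses are stand-ins of my own invention – real tooling would parse actual SBOM and VEX documents – but the logic is the same: anything a VEX marks as not affected drops off the list.

```python
# Illustrative data only: in practice these records would be parsed
# from SBOM and VEX documents, not hard-coded.
product_vulns = {
    ("acme-widget", "4.2"): {"CVE-2021-44228", "CVE-2022-22965", "CVE-2020-1234"},
}

# Each VEX statement: (product, version, CVE, supplier-asserted status).
vex_statements = [
    ("acme-widget", "4.2", "CVE-2022-22965", "not_affected"),
    ("acme-widget", "4.2", "CVE-2020-1234", "not_affected"),
]

def prune_non_exploitable(vulns, statements):
    """Remove vulnerabilities that a VEX statement marks as not affected."""
    for product, version, cve, status in statements:
        if status == "not_affected":
            vulns.get((product, version), set()).discard(cve)
    return vulns

print(prune_non_exploitable(product_vulns, vex_statements))
# -> {('acme-widget', '4.2'): {'CVE-2021-44228'}}
```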
However, I believe this
determination of exploitability must almost always be made by the supplier. The supplier, or someone who knows how the product was put together, is the only entity that can reliably state whether or not a vulnerability is exploitable in their
product. They wrote all the first-party code and installed all of the
components (which nowadays make up about 90% of the code in the average
software product). They can make judgments like, “There is no way that an
attacker could ever reach this vulnerability to exploit it.” I don’t believe
that, in most cases, any other entity can reliably make such a statement, even
if they can review the source code[i]. I can attest that the
CISA (formerly NTIA) VEX committee has always worked under the assumption that
VEX documents will almost always be issued by the supplier of the product.
Yet, this will clearly not always be the case. For example, open source projects are staffed by volunteers who see their job as writing code. Yes, they should be concerned about vulnerabilities that turn out to be exploitable in the product they’ve developed, but I think it’s too much to expect them to spend a lot of time figuring out which vulnerabilities are not exploitable. I’m not expecting many open source communities to put out VEX documents of their own accord.
What about commercial products? A
commercial supplier should in theory want to put out VEXes regularly, since a
user who learns from a VEX that a vulnerability isn’t exploitable in a product
they use won’t feel they need to call the supplier’s help desk with that
question. In fact, the development of the VEX format was sparked by two very
large suppliers who were concerned about this problem; one estimated they’d get
literally a couple thousand unnecessary calls every month if they just put out
SBOMs, without also putting out VEXes.
However, I’m also sure that a lot
of commercial suppliers won’t put out VEXes for their products, even if they do
put out SBOMs. Perhaps they won’t understand why VEX is important, or maybe
they simply won’t want to invest the time required to learn about the format. This
means there will definitely be a need for third parties to develop VEXes for
both commercial and open source products.
However, I also believe that the
VEXes produced by these third parties will be fundamentally different from
those produced by the product suppliers themselves. This is because the third
parties, no matter how expert, don’t understand exactly how a product (open
source or commercial) was developed.
Does this mean we’re all SOL when it comes to securing open source products, and commercial software products whose suppliers don’t feel like producing VEXes? No, because there are two kinds of exploitability. The first is the absolute kind (call it Type 1), which is what the supplier of the product can tell you about in a VEX. Only the supplier can make a categorical up-or-down statement about whether or not a vulnerability is exploitable in a product.
But it’s still possible to make
statements about exploitability that don’t have the categorical quality of the
supplier’s statement. A good example is this excellent blog post by Walter Haydock. It’s aimed mostly
at software developers rather than end users, so it might justifiably be
considered overkill for the latter, especially given the large numbers of
vulnerabilities they might identify when they start receiving SBOMs for most of
the software that they operate.
The post lists five recommendations for determining the exploitability of a vulnerability in a software product, including the new Exploit Prediction Scoring System (EPSS) scores, which are all about exploitability. These are all good recommendations, but it’s important to realize that we’re now talking about the second type of exploitability (call it Type 2). It doesn’t tell you whether a vulnerability can be exploited at all, but rather how likely it is to be exploited. While knowing the likelihood isn’t as good as a categorical black-or-white statement that a vulnerability isn’t exploitable, it’s certainly better than having no idea whether a vulnerability is exploitable or not.
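As an aside, EPSS scores are easy to retrieve programmatically. Here’s a minimal Python sketch against FIRST’s public EPSS API; the endpoint is real, while the CVE IDs are just examples.

```python
import requests

EPSS_API = "https://api.first.org/data/v1/epss"

def epss_scores(cve_ids: list[str]) -> dict[str, float]:
    """Fetch EPSS scores (estimated probability of exploitation
    activity in the next 30 days) for a list of CVE IDs."""
    resp = requests.get(EPSS_API, params={"cve": ",".join(cve_ids)}, timeout=30)
    resp.raise_for_status()
    # The API returns scores as strings, e.g. "0.97542".
    return {row["cve"]: float(row["epss"]) for row in resp.json()["data"]}

print(epss_scores(["CVE-2021-44228", "CVE-2022-22965"]))
```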
However, the difference between
the two types of exploitability can be seen best in what they can be used for.
Type 1 exploitability is used to determine whether or not the organization
should make any effort at all to find and mitigate a particular vulnerability.
If it’s exploitable and it’s a serious vulnerability, the organization should
probably move heaven and earth (well, earth, anyway) to get it patched,
including hounding the supplier day and night to develop a patch. And if a
vulnerability isn’t exploitable, the organization doesn’t need to do anything
at all.
But Type 2 exploitability is a question of prioritization of effort. If Vulnerability A is deemed more exploitable than Vulnerability B, then A should take precedence. As long as the organization goes after the vulnerabilities with the highest exploitability scores first, they don’t need to feel too bad if they don’t have time to mitigate the lower-scored ones.
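In code, that triage rule is nothing more than a sort. The scores below are made up for illustration; in practice they’d come from something like the EPSS helper sketched earlier.

```python
# Hypothetical EPSS scores for an organization's open vulnerabilities.
scores = {"CVE-2021-44228": 0.97, "CVE-2022-22965": 0.92, "CVE-2020-1234": 0.03}

# Work the queue from most to least likely to be exploited.
for cve, epss in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cve}: EPSS {epss:.2f}")
```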
Thus, both types of exploitability are important. It’s always better to know whether or not a vulnerability is exploitable in the Type 1 sense, but in cases (like open source software) where the best you’ll be able to get is an estimate of Type 2 exploitability, then by all means look at that. And if, for example, an open source product has a serious vulnerability that also appears to be highly exploitable, you should certainly prioritize mitigating it over mitigating a vulnerability that is Type 1 exploitable yet seems less serious (due to CVSS score, etc.).
One question you might have (I know I have it) is whether third parties that make statements about Type 2 exploitability should do so using VEX documents. In general, I don’t think they should, since the VEX format as of now is based entirely on Type 1 exploitability. The third party would have to assert that the vulnerability is exploitable or not; they wouldn’t be able to state a probability.
However, I’m also not against
having a third party assert that a vulnerability is exploitable or not in a
product, especially if the supplier hasn’t issued a VEX for this product at
all. But the third party should utilize the “authors” field (in the CycloneDX
VEX format) to identify themselves; they should also fill in the “email” field
under “authors”, so users can get in touch with them if they have questions.
Users will need to decide for themselves whether or not to believe the third
party’s statement that the vulnerability in question is either completely exploitable
or not exploitable at all in the product.
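For illustration, here’s roughly what such a third-party VEX could look like in CycloneDX, built as a Python dict and serialized to JSON. The field names follow the CycloneDX 1.4 schema as I understand it; the product, the analysis and the contact details are all made up.

```python
import json

vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "metadata": {
        # "authors" identifies who is making the assertions -- here a
        # fictitious third party, not the product's supplier.
        "authors": [
            {"name": "Example Analysis Co.", "email": "vex@example.com"}
        ]
    },
    "vulnerabilities": [
        {
            "id": "CVE-2021-44228",
            "source": {"name": "NVD",
                       "url": "https://nvd.nist.gov/vuln/detail/CVE-2021-44228"},
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "Illustrative only: the vulnerable class is never loaded."
            },
            # Reference to the affected product in the corresponding SBOM.
            "affects": [{"ref": "urn:cdx:example-product-4.2"}]
        }
    ],
}

print(json.dumps(vex, indent=2))
```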
P.S. Speaking of exploitability, I want to raise one important point: lots of vulnerabilities may not be exploitable in the Type 1 sense, yet could in fact be exploited if an attack started with another vulnerability and then moved on to the original one; this is known as a “chained vulnerabilities” attack. Does the fact that a seemingly non-exploitable vulnerability could be exploitable in a chained attack mean we should call it exploitable?
There’s no right or wrong answer
to that question, since it depends on how you define “exploitable”. However, I
do know that the VEX concept, as it was developed in the last two years, only
applies to vulnerabilities that are directly exploitable – i.e. not as part of
a chained attack. The thinking is that, if you allow for chained attacks, then
just about every vulnerability becomes exploitable in one way or another. It
would be hard to prescribe any one mitigation, given the huge number of ways a
chained attack could happen. And since another vulnerability is the source of the chained attack, that vulnerability should be mitigated on its own.
I’ll admit that some security professionals don’t agree with me on this. They think it’s better to treat any vulnerability as exploitable, even if it can only be exploited through a chained attack. Do you want to know how I feel about this? …I didn’t think so, but I’ll tell you anyway: when SBOMs are widely available and regularly updated, end users will learn about lots of exploitable vulnerabilities in software products they use every day; they probably wouldn’t have learned about all of these vulnerabilities otherwise. If they can handle that workload and want to learn about the exploitability of chained vulnerabilities as well, then they should do so.
Also, users with high-assurance use cases (e.g. the military) may feel it’s important to learn about
chained vulnerability attacks as well. Go for it! But let’s not require that
everybody else do the same.
Any opinions expressed in this
blog post are strictly mine and are not necessarily shared by any of the
clients of Tom Alrich LLC. If you would
like to comment on what you have read here, I would love to hear from you.
Please email me at tom@tomalrich.com.
[i] There
are some cases in which a third party could determine that a component
vulnerability isn’t exploitable in the product itself, such as when the
vulnerable module of a library has not been included with the product binaries.