Recently, Dale Peterson interviewed Steve Springett, leader of the OWASP Dependency Track and CycloneDX projects, for his podcast. I have watched a number of podcasts with Steve, and I never come away without a lot of notes; for this podcast, I took six pages’ worth. This was in part because Dale asked some really excellent questions, which showed he had done a lot of research beforehand. Between the two of them, it was quite a show.
I recommend you listen to the
whole podcast, but I’d like to address three topics that came up – among many.
Contractual challenges to sharing SBOMs
Steve and Dale both agreed early
in the podcast that, while SBOMs are being heavily used by software developers
to learn about and manage vulnerabilities in products they’re developing, they’re
hardly being distributed to or used by end user organizations at all (i.e.,
organizations whose primary business isn’t developing software, which is of
course well over 99% of the organizations on the planet).
One of the reasons Steve pointed
to for this problem (and it is a problem!) was contractual. He didn’t elaborate
on that, but I can guess that, since there are no standards or official
guidelines for producing or distributing SBOMs, any organization that tries to
negotiate with suppliers on the terms under which they’ll provide their SBOMs won’t
find this to be easy at all.
However, I’ve already discovered
the solution to this problem. It’s a lot like the solution the doctor proposed to the patient who complained that his arm hurt whenever he moved it a certain way. The doctor’s solution? “Don’t do that.”
And that’s my solution to the
problem of contracts for SBOMs being hard to negotiate: Don’t bother with them.
Given the complete lack of standards or official guidelines for producing or distributing
SBOMs, it’s simply way too early to even consider using contract language to “force”
the supplier to give you exactly the type of SBOM you want, in exactly the way
you want it.
But suppose the supplier gives you the perfect SBOM. What are you going to do with it on
your own? There are currently no low-cost, commercially supported tools that
ingest SBOMs (and VEX documents, although they’re hardly being distributed at
all today) and output a list of component risks for the user, including a list
of exploitable component vulnerabilities. Yet, this is the use case that most
end users have in mind for SBOMs.
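To make that use case concrete, here is a minimal sketch (in Python) of what such a tool would have to do: read the components out of an SBOM, look up known vulnerabilities for each component, and drop any vulnerability that a VEX document says is not exploitable. The field names loosely follow the CycloneDX JSON layout, and lookup_vulnerabilities is a hypothetical stand-in for whatever vulnerability data source a real tool would query; this is an illustration of the idea, not a working consumer tool.

```python
import json

def lookup_vulnerabilities(purl):
    """Hypothetical stand-in for a query against a vulnerability data
    source (NVD, OSV, a commercial feed, etc.). Returns a list of CVE IDs."""
    return []

def exploitable_component_vulns(sbom_path, vex_path):
    # Components listed in the SBOM (name, version, purl), loosely CycloneDX-style
    with open(sbom_path) as f:
        components = json.load(f).get("components", [])

    # VEX statements: CVE ID -> analysis state ("not_affected", "exploitable", ...)
    with open(vex_path) as f:
        vex_state = {v.get("id"): v.get("analysis", {}).get("state")
                     for v in json.load(f).get("vulnerabilities", [])}

    findings = []
    for comp in components:
        for cve in lookup_vulnerabilities(comp.get("purl")):
            # Report only vulnerabilities the supplier has not ruled out in the VEX
            if vex_state.get(cve) != "not_affected":
                findings.append((comp.get("name"), comp.get("version"), cve))
    return findings
```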
This is why I recently suggested
that we forget about using contract language for SBOMs for the time being (except
for perhaps an NDA) and simply focus on mutual learning: the suppliers will
produce the SBOMs they think their users want and their users will tell them whether
and how they’re able to use them. This will be effectively a large-scale proof
of concept and, as with any proof of concept exercise, there will need to be mechanisms
for gathering and aggregating the lessons learned.
I know of only one active SBOM proof
of concept that is exchanging SBOMs today – the healthcare PoC being sponsored
by the Healthcare ISAC. Moreover, that PoC has just around ten participants
(medical device makers and large hospital organizations). Wouldn’t it be great
if we could get a lot more suppliers and users, in many industries, exchanging
SBOMs and learning from them – and then sharing what they’ve learned from the experience?
We can do that if we forget about contract language for now, and simply focus
on what we can learn.
The only way SBOMs are going to provide value to end users in the near term
After the above discussion, Dale pointed to the one solution that can lead to SBOMs (or at least the information derived from them) being widely used by non-developers in the next few years. That solution won’t be low-cost, commercially supported tools that every organization can use. I used to think those tools would someday magically appear, but they haven’t appeared yet, and I know of none that are even on the horizon.
More importantly, there are serious issues like the “naming
problem” (which Dale brought up later in the podcast) and how to get useful
VEX information out (I plan a post on this problem next week), which are currently
standing in the way of easy-to-use consumer tools. These problems won’t ever be
completely solved, but they can be addressed to a degree that makes usable
consumer tools possible. We’re simply not there yet, and won’t be for years.
However, Dale pointed out that he sees a more realistic path to SBOMs being usable by consumers in the near term, and that’s third party service providers; I agree with him 100% on this. The fact is that the tools required to analyze SBOMs and VEX information, and then report component risks to end users, are available now. The trouble is that almost none of them are both low cost and commercially supported – and that’s what true consumer tools will require. Moreover, an end user today would need to string together several open source tools (and address data formatting issues, etc.) to get the required functionality. Few businesses or government agencies have the technical chops and available time to do this.
However, third party service providers are a different story.
They know that whatever tooling they put in place will allow them to provide
this service to many end user organizations. The economic picture will be very
different for them.
Exactly who will these service providers be? I honestly don’t
know now, but I do know that, given the high and growing interest in learning
about software component vulnerabilities, they will appear. The important technical
problems have all been solved in principle. Even the naming problem has been “solved”
by hundreds or even thousands of software suppliers and their consultants,
since there are lots of ad hoc workarounds available, including AI/ML
routines, fuzzy logic, etc. Few organizations outside the software industry will want to invest in developing these routines for their own use, but again, the economic picture changes drastically for service providers, who can amortize the cost of doing so over hundreds or thousands of customers and products.
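As one crude illustration of the kind of ad hoc workaround I mean, the sketch below uses the Python standard library’s difflib to fuzzy-match component names from an SBOM against the (often slightly different) names used in a vulnerability catalog. The catalog names are made up for the example, and real matchers are far more elaborate – but it shows both why the approach works some of the time and why the naming problem is still hard.

```python
from difflib import get_close_matches

# Names as they might appear in a vulnerability catalog (illustrative only)
CATALOG_NAMES = ["openssl", "apache-log4j", "spring-framework", "zlib"]

def match_component(sbom_name, cutoff=0.6):
    """Return the closest catalog name for an SBOM component name, if any."""
    matches = get_close_matches(sbom_name.lower(), CATALOG_NAMES, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(match_component("OpenSSL 1.1"))  # -> "openssl"
print(match_component("log4j-core"))   # -> None at this cutoff; naming is hard
```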
There’s one important aspect of these service providers that didn’t come up in the podcast, but that I’ve written about once or twice: it’s clear to me (at least) that end users shouldn’t be responsible for
paying the service providers for their analysis. After all, the supplier is the
one that chose the component and included it in their product. If they chose a
component that poses big risks for end users, why should the end users have to
find this out for themselves? More importantly, if there are 10,000 customers
of a product, why should they each have to pay a service provider to tell them
that CVE-2023-12345 is exploitable in the product they use, and that they should immediately contact the supplier to find out when it will be patched?
Instead, why doesn’t the supplier pay the service provider
to do this and distribute the results to their users? I remember when it first
became clear in the 1990s that software vulnerabilities weren’t rare events (as
had been the general opinion), but could be counted on to appear constantly in
just about any product. At first, software suppliers tried to charge their
users for the privilege of receiving patches for the vulnerabilities in their
product – often by stipulating that only users who paid for maintenance would receive patches.
Of course, this idea quickly faded, and now I don’t know a
single supplier that doesn’t develop and distribute security patches to their
customers for free. I expect the same thing to happen with SBOM analysis. While
I don’t think most suppliers will want to make the investment in providing the
service I described above for their customers, I think they will gladly pay a
third party to provide that service on their behalf – especially once all their competitors start doing it and they realize they won’t be in business much longer unless they do the same, just as happened with security patches.
Side-channel attacks
Toward the end of the podcast, Dale brought up the topic of
VEX. He had heard Steve say on a different podcast that “VEX is a missed
opportunity because it doesn’t represent risk.” He asked what Steve meant by
that.
Steve’s first response was fairly complicated (and led to a
more complicated discussion than I want to address at the moment), but his
second response was very straightforward: VEX considers a single vulnerability
in isolation and provides a binary answer to the question of whether that
vulnerability is exploitable in a particular product (and not just a particular
product, but a particular version of that product).
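To see what that binary answer looks like in practice, here is a single VEX statement reduced to its essentials, written as a Python dictionary that loosely follows the CycloneDX VEX layout (the product reference and the justification value are purely illustrative): one vulnerability, one product version, one yes-or-no exploitability verdict.

```python
# One VEX statement, boiled down: one CVE, one product version, one binary answer.
# Illustrative only; field names loosely modeled on the CycloneDX VEX structure.
vex_statement = {
    "id": "CVE-2023-12345",
    "affects": [{"ref": "pkg:generic/example-product@4.2.1"}],  # hypothetical product/version
    "analysis": {
        "state": "not_affected",               # or "exploitable"
        "justification": "code_not_reachable"  # supplier's reason the CVE can't be exploited
    },
}
```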
However, Steve pointed out that hackers often don’t just try
to exploit one vulnerability and then give up if they don’t succeed in compromising
the product using that vulnerability. Instead, they often try to exploit a
different vulnerability. Then, if they’re successful with that attack, they “pivot”
to exploiting the vulnerability they had originally aimed for. This is known as
a “side channel” or “chained” attack. When you consider these attacks, then VEX’s
answer to whether a vulnerability is exploitable in the product or not becomes
more complicated – since you need to consider which side channel attacks might
be used to exploit that vulnerability, not just direct attacks.
This was a question that was discussed in the VEX working
group when it was under NTIA (I don’t think we have discussed it much under
CISA). Our feeling at the time was that, if we started to consider side channel
attacks, then essentially every vulnerability would be exploitable. Why even have
a VEX in that case? Instead of spending their time putting out VEX information,
a supplier would just need to admit they need to patch every component
vulnerability, even though they know that over 90% of them probably aren’t exploitable.
Moreover, end users would have to accept that, even though on average they currently apply only about 15% of the patches they receive for products installed on their networks, they would now face a huge influx of additional patches to add to their already voluminous patching queues. Since we (the VEX working group) didn’t want to provide those
answers either to suppliers or end users, we decided to proceed on the assumption
that only directly exploitable vulnerabilities need to be designated as
exploitable in VEX.
However, enough people (besides Steve) have raised this
issue that I now think we need to admit that there are more sophisticated
hackers out there who know how to use side channel attacks. So, for some users (especially those in high assurance use cases, like military contractors, hospitals, and OT in general), it does make sense for the supplier to patch every component vulnerability that is known to be present in the product, and for users to apply those patches.
Fortunately, there are some cases in which even a side channel
attack won’t be able to exploit a particular vulnerability in a product,
meaning that VEX can provide an acceptable solution, even for high assurance
users. I’ll discuss that soon, hopefully next week.
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.