My most recent post quoted Tony Turner saying (in response to a previous post of mine) that SBOMs aren’t being used much by non-developers (that is, organizations whose primary business isn’t developing software), because non-developers are already overwhelmed with vulnerabilities that they’ve learned about without having SBOMs. If they can’t deal with those vulnerabilities, what good does it do them to learn about a potentially much larger set of vulnerabilities that are due to components of the products they use – which they’ll only learn about if they have SBOMs?
I concluded that post by writing:
What’s the solution to this problem? It’s certainly not a great situation if software users ignore an entire class of vulnerabilities – those that are due to third-party components included in software products they use – simply because they’re too busy handling another (perhaps smaller) class of vulnerabilities, namely those that are due to the “first-party” code written by the suppliers of the software.
The best situation would be if users could take a holistic view of all vulnerabilities, including both those due to first-party code (which they learn about through vulnerability notifications from the supplier or from looking up the product in the NVD) and those due to third-party components (which they learn about from receiving SBOMs and finding component vulnerabilities in the NVD). They would then allocate their limited time and resources to identifying the most important vulnerabilities of either type and mitigating those. They wouldn’t feel they have to ignore half of the vulnerabilities in their software because they don’t even have time to learn about them.
So, the question becomes how software users can prioritize their time addressing vulnerabilities so that they mitigate the maximum possible amount of software risk. The answer needs to take into account the fact that they have neither unlimited time nor unlimited funds available for that effort.
I know some people will answer this by saying, “That’s simple. The user should just find the CVSS score for each exploitable vulnerability and rank the vulnerabilities based on their scores. Then they should start by mitigating the vulnerabilities with the highest scores. When they have exhausted their vulnerability management budget for the year, they should stop and do something else.”
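The “rank by CVSS, spend until the budget runs out” strategy just described can be sketched in a few lines of Python. The CVE IDs, scores, and remediation costs below are invented purely for illustration:

```python
# A minimal sketch of "rank by CVSS score, mitigate until the budget is gone".
# All vulnerability data and cost figures here are hypothetical.

def prioritize_by_cvss(vulns, budget):
    """Mitigate vulnerabilities in descending CVSS order; stop when funds run out."""
    mitigated = []
    remaining = budget
    for v in sorted(vulns, key=lambda v: v["cvss"], reverse=True):
        if v["cost"] > remaining:
            break  # budget exhausted: stop and do something else
        mitigated.append(v["cve"])
        remaining -= v["cost"]
    return mitigated

vulns = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "cost": 40},
    {"cve": "CVE-2023-0002", "cvss": 7.5, "cost": 25},
    {"cve": "CVE-2023-0003", "cvss": 5.3, "cost": 10},
]
print(prioritize_by_cvss(vulns, budget=70))  # ['CVE-2023-0001', 'CVE-2023-0002']
```

Note that the sketch takes the CVSS score as the sole measure of risk, which is exactly the assumption the rest of this post calls into question.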
But I also know other people who will say that CVSS score is an almost meaningless number, so it should never be used to prioritize vulnerabilities to mitigate. If so, what’s the solution? Is it another score like EPSS? Is it no score at all, but a different way of ranking software vulnerability risk?
I honestly don’t know. I’d love to hear your ideas.
To summarize what I said, it’s a shame if an organization decides to shut out a potential source of vulnerability information (software bills of materials, or SBOMs) simply because they already know about too many vulnerabilities. This is something like an army deciding they don’t need to conduct intelligence activities anymore, since they already face too many threats for them to deal with easily.
What both the army and the organization need to do is learn about all the vulnerabilities and threats that they face, then figure out how to prioritize their response to them. In responding, they need to use their limited resources in a way that mitigates the maximum possible amount of risk. It’s likely that the majority of organizations, or at least the majority of organizations that try to prioritize their responses to vulnerabilities in a rational way, will base that response in whole or in part on CVSS score.
Partly in response to my request for ideas, Walter Haydock, who has written a lot about how to prioritize vulnerabilities for patching in his excellent blog, put up this post on LinkedIn. The post began with an assertion Walter makes frequently: “CVSS is dead”. I agree with him that CVSS isn’t the be-all and end-all vulnerability measure it was originally purported to be.
Why do I agree with him about CVSS? Let’s think about what we’re trying to measure here: it’s the degree of risk posed to an organization by a particular software vulnerability, usually designated with a CVE number. Risk is composed of likelihood and impact. CVSS is calculated using four “exploitability metrics” – Attack vector, Attack complexity, Privileges required, and User interaction required – along with three “impact metrics”: Confidentiality impact, Integrity impact and Availability impact.
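For reference, the CVSS v3.1 base score combines these seven metrics with fixed numeric weights published in the specification. Here is a simplified Python sketch covering only the Scope: Unchanged case (the weights, e.g. 0.85 for Attack Vector: Network, come from the v3.1 spec; the full standard also handles the Scope: Changed case differently):

```python
import math

def roundup(x):
    # Round up to one decimal place, per the CVSS v3.1 specification's
    # Roundup function (avoids floating-point surprises).
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def cvss_base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, Scope: Unchanged only (simplified sketch)."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)       # Impact Sub-Score
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N (0.85), AC:L (0.77), PR:N (0.85), UI:N (0.85), C/I/A:H (0.56 each)
# -- the classic "critical" vector.
print(cvss_base_score(0.85, 0.77, 0.85, 0.85, 0.56, 0.56, 0.56))  # 9.8
```

Note what the formula does not contain: anything about the asset the vulnerable software runs on. That omission is the heart of the argument that follows.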
However, all seven of these metrics, to varying degrees, will differ depending on the cyber asset that is attacked. For example, if the asset is an essential part of the energy management system (EMS) in the control center of an electric utility that runs the power grid for a major city, any of the three impacts of its being compromised will be huge. On the other hand, if the cyber asset is used to store recipes for the organization’s kitchen, the impact of its being compromised will be much less.
There are similar considerations for the exploitability metrics: the EMS asset will be protected with a formidable array of measures (many due to the NERC CIP requirements for High impact systems that run the power grid) that are unlikely to be in place for the recipe server. So, even though one vulnerability might in general be more exploitable than another and thus receive a higher exploitability metric, in most organizations the fact that assets of different importance will have different levels of protection means that those differences will probably swamp the general differences in exploitability that form the basis for the metrics.
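To make the point concrete: if each asset carries its own criticality and its own level of protection, the effective risk of a vulnerability is a property of the (vulnerability, asset) pair, not of the CVE alone. The sketch below is a hypothetical illustration – the asset names, criticality weights, and protection figures are all invented, and the risk formula is just one plausible way to combine them:

```python
# Hypothetical illustration: the same CVE poses very different risk on
# different assets. All names, weights, and figures here are invented.

assets = {
    "ems-server":    {"criticality": 1.0, "protection": 0.9},  # e.g. NERC CIP High impact
    "recipe-server": {"criticality": 0.1, "protection": 0.2},
}

# (CVE, CVSS base score, asset it is present on)
vulns = [
    ("CVE-2023-1111", 9.8, "ems-server"),
    ("CVE-2023-1111", 9.8, "recipe-server"),
    ("CVE-2023-2222", 6.5, "recipe-server"),
]

def asset_adjusted_risk(score, asset):
    # Impact scales with asset criticality; likelihood of successful
    # exploitation falls as the asset's protection improves.
    return score * asset["criticality"] * (1 - asset["protection"])

ranked = sorted(vulns,
                key=lambda v: asset_adjusted_risk(v[1], assets[v[2]]),
                reverse=True)
for cve, score, host in ranked:
    print(cve, host, round(asset_adjusted_risk(score, assets[host]), 2))
```

Even in this toy model, the ranking is driven by the asset context rather than by the raw CVSS score, which is the point Dale makes below.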
In other words, I agree that the CVSS score isn’t very helpful when it comes to prioritizing vulnerability remediation work in an individual organization; and since the EPSS score, which focuses solely on exploitability, is unlikely to be any more helpful (for reasons already provided above), this means I now doubt there is any single metric – or combination of metrics – that can provide a single organization with definitive guidance on which vulnerabilities to prioritize in their remediation efforts.
At this point, Dale Peterson, founder of the S4 conference and without doubt the dean of industrial control system security, jumped in to point out that vulnerability management efforts should be prioritized based on “Impact to the cyber asset...” That is, there’s no good way to prioritize vulnerability remediation efforts based on any consideration other than the impact on each cyber asset if the vulnerability were successfully exploited. I asked,
Dale, do you have an idea of how an organization could assess vulnerabilities across the whole organization (or maybe separately in IT and OT, since they have different purposes) and prioritize what they need to address on an organization-wide basis?...
It has to be a risk-based approach, but CVSS and EPSS obviously aren't adequate in themselves. Like a lot of people, I've always assumed this is a solvable problem. However, I'm no longer so sure of that. This would change how I've been thinking about software risk management.
I was asking Dale how a vulnerability management approach based on assets would work. He replied:
Yes for the ICS world. ICS-Patch decision tree approach, see https://dale-peterson.com/wp-content/uploads/2020/10/ICS-Patch-0_1.pdf
I highly recommend you read the post Dale linked. It provides a detailed methodology for deciding when a patch should be applied to a cyber asset in an OT environment. Dale also points out that his methodology draws inspiration from the Software Engineering Institute/CERT paper called “Prioritizing Vulnerability Response: A Stakeholder-Specific Vulnerability Categorization”, which deals with IT assets (he points out how differences between OT and IT account for the differences between the two methodologies).
Of course, neither Dale’s methodology for OT nor CERT’s methodology for IT is easy to implement; both cry out for automation in any but the simplest environments. But both can clearly be automated, since they’re based on definite, measurable criteria.
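As a rough illustration of why such decision trees automate well: every branch is a discrete, measurable input, so the whole tree collapses into a small lookup function that can run over an entire asset inventory. The decision points and outcomes below are a simplified, hypothetical adaptation in the spirit of SSVC-style trees – they are not Dale’s actual ICS-Patch tree or CERT’s tree:

```python
# A toy decision tree in the spirit of SSVC-style prioritization.
# The decision points and outcomes are simplified and hypothetical;
# they are NOT the actual ICS-Patch or CERT/SSVC trees.

def patch_decision(exploitation, exposure, impact):
    """Return an action for one (vulnerability, asset) pair.

    exploitation: "none" | "poc" | "active"   (observed exploitation status)
    exposure:     "closed" | "open"           (network reachability of the asset)
    impact:       "low" | "high"              (consequence if the asset is lost)
    """
    if exploitation == "active" and impact == "high":
        return "patch immediately"
    if exploitation == "active" or (exposure == "open" and impact == "high"):
        return "patch at next maintenance window"
    if exploitation == "poc" and exposure == "open":
        return "track closely"
    return "defer"

print(patch_decision("active", "open", "high"))  # patch immediately
print(patch_decision("none", "closed", "low"))   # defer
```

Because each input is a concrete fact about the vulnerability or the asset, the function can be evaluated automatically for every asset in inventory, which is exactly the kind of automation the paragraph above calls for.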
Of course, it would be nice if there were a single measure that would solve everybody’s vulnerability prioritization problems. It would also be nice if every child in the world had their own pony. However, neither is a likely outcome in the world we live in. It’s time to focus on a real-world solution to this problem.
Any opinions expressed in this
blog post are strictly mine and are not necessarily shared by any of the
clients of Tom Alrich LLC. If you would
like to comment on what you have read here, I would love to hear from you.
Please email me at tom@tomalrich.com.