My brother Bill Alrich sent me this
link this week, regarding a presentation at Black Hat last week. I’ll let you
read it, but I find it appalling. The gist of it is that the quality of security
patches is declining, so that more and more of them are bypassed. This means
the supplier has to re-do the patch (and of course, their users have to apply
the new patch), because the original patch didn’t really address the whole
problem it was meant to address.
Why is this happening? It seems
that suppliers, pressed for time and resources, are cutting corners: they don't spend
the time required to find the root cause of the bug and patch it at that level. So a
researcher (or an attacker, of course), who does have the resources to look for the
root cause, can easily go around the patch. Of course, when
this happens, the supplier probably ends up spending more time on the
vulnerability than if they'd done it right the first time, since they have to
create two patches, not just one. It's better to do a good job the first time…
But that's easier said than done;
I realize suppliers are under pressure from all sides. One thing that can help
is to consult with their customers and try to draw clear lines regarding
which vulnerabilities are worth patching and which aren't; then they shouldn't
be afraid to tell their customers (in a VEX, or just in an email), "This
vulnerability doesn't meet the severity threshold we agreed on, so we won't
patch it. Here are several steps you can take to mitigate the risk posed by
this vulnerability…"
Of course, the problem is
determining what that threshold should be. I'd like to say it should be a CVSS
score above, say, 4.0; however, there are a lot of problems with CVSS scores in
general, since they're based on both exploitability and impact – but impact
depends heavily on the software in question and how it's used. Is it used to
run the office's March Madness pool? That's probably a low-impact use. But how
about if the software runs a process that could kill people if it's
misused? That's a high-impact use. Yet the CVSS score would be the same in
both cases.
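To make this concrete, here is a minimal sketch of a patch-decision rule that weights a CVSS base score by deployment context. The 4.0 threshold comes from the discussion above; the context names and weight values are purely illustrative assumptions of mine, not values from CVSS or any standard.

```python
# Illustrative sketch: the same CVSS base score can warrant different patch
# decisions depending on how the software is used. Context weights are
# made-up assumptions for illustration only.

CONTEXT_WEIGHT = {
    "office_pool": 0.5,      # low-impact use, e.g. a March Madness pool
    "business_system": 1.0,  # typical enterprise use
    "safety_critical": 2.0,  # misuse could endanger people
}

def should_patch(cvss_base: float, context: str, threshold: float = 4.0) -> bool:
    """Return True if the context-adjusted score crosses the agreed threshold."""
    adjusted = cvss_base * CONTEXT_WEIGHT[context]
    return adjusted >= threshold

# The same CVSS 5.0 vulnerability yields opposite decisions:
print(should_patch(5.0, "office_pool"))      # 2.5 -> False
print(should_patch(5.0, "safety_critical"))  # 10.0 -> True
```

The point isn't these particular numbers; it's that a bare CVSS cutoff can't capture the impact side of the equation without some agreed-on context adjustment.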
Exploitability is a different
story. The more exploitable the vulnerability, the greater the risk it poses (since
its likelihood of exploitation increases, no matter how the product is used). Here, you
have to remember that there are two types of exploitability, as discussed in this
post. One is the exploitability that is described in a VEX document; it's an
absolute exploitability, based solely on the characteristics of the product
itself. Either the vulnerability can be exploited or it can't.
While some customers might object,
if the supplier issues a VEX stating that the status of the vulnerability – in one
or more versions of the product – is "not affected" in a CSAF VEX or "unexploitable"
in a CycloneDX VEX, IMO they're justified in saying they won't patch the
vulnerability. That said, the supplier might offer a "high assurance" patching option
for customers like military contractors or hospitals that want most
vulnerabilities patched, period.
The other type of exploitability
is what's found in the EPSS score, discussed
eloquently by Walter Haydock. EPSS looks at factors such as the availability of
exploit code and whether there have been exploit attempts. Of course, these have
nothing to do with the product itself; they're applicable to all products and
users. So a supplier might have a discussion with their users about a) whether the
EPSS score is good enough for the purpose, or whether the users should construct
their own score using some other combination of factors, and if so, b) what
would be an appropriate threshold level.
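A sketch of what such an agreement might look like in practice: EPSS produces a probability between 0 and 1 (an estimate of exploitation activity in the next 30 days), and a supplier and its customers could agree on a cutoff, or build their own blended score. The 0.1 threshold and the weights below are illustrative assumptions, not recommendations from EPSS or anyone else.

```python
# Illustrative sketch: comparing an EPSS probability against an agreed
# threshold, or building a hypothetical customer-defined score. All
# threshold and weight values here are made up for illustration.

def exceeds_epss_threshold(epss_score: float, threshold: float = 0.1) -> bool:
    """EPSS estimates the probability of exploitation activity in ~30 days."""
    return epss_score >= threshold

def custom_score(epss_score: float, exploit_code_public: bool,
                 exploit_attempts_seen: bool) -> float:
    """A hypothetical customer-built score blending EPSS with two of the
    kinds of factors mentioned above; weights are invented."""
    score = epss_score
    if exploit_code_public:
        score += 0.2
    if exploit_attempts_seen:
        score += 0.3
    return min(score, 1.0)

print(exceeds_epss_threshold(0.02))                            # False
print(exceeds_epss_threshold(custom_score(0.02, True, True)))  # 0.52 -> True
```

Either way, the key step is the conversation: agreeing in advance what number, computed how, triggers a patch.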
This is why it’s dangerous to
require software suppliers – especially in contract language – to patch all vulnerabilities.
Sure, they'll develop a patch, but if someone can figure out how to bypass
it in a week, was it really worth developing in the first place?
PS: Walter Haydock posted the following comment on this post in LinkedIn:
Thanks for the shout out. I would still say that exploitability can be described in a probabilistic fashion (e.g. a number 0-1) in every case.
You allude to it in your article, but certain organizations might want to be "extra sure" that a certain product is not_affected by a vulnerability. Unless they have zero legitimate reason for this ask, I think it's fair to consider that a vendor VEX statement shouldn't be taken as a completely binary value.
I replied:
Thanks, Walter. The VEX spec has no provision for anything but a binary designation. However, there's certainly no reason that anyone has to place absolute faith in the supplier's affected/not affected designation.
This is why both VEX specs include a set of machine-readable "justifications" for different exploitability use cases. The CDX VEX justifications are:
"code_not_present"
"code_not_reachable"
"requires_configuration"
"requires_dependency"
"requires_environment"
"protected_by_compiler"
"protected_at_runtime"
"protected_at_perimeter"
"protected_by_mitigating_control"
Let's say someone (like a certain large hospital organization I know of) doesn't trust the "code_not_reachable" justification. They could set their tool so that it would treat a "not_affected" status with that justification as the same as "affected".
Of course, if they don't trust the supplier as far as they could throw them, they might not believe anything they say. At that point, you have to ask, "Why the heck are you buying from them, anyway?" But it's still a binary designation, not a probability.
I was going to include this discussion in the post, but I got lazy and didn't. I should have known that you would call me out on this.
PPS: Dale Peterson put this comment on the post in LinkedIn:
I think it is worse than your headline (and many will only read the headline). It is often deception rather than a lack of skill.
Some vendors' patches change just enough so the PoC exploit doesn't work – not fixing the root cause. Back when we did vuln finding/exploit writing we saw this, and it was often trivial to change the exploit so it would work again.
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.