It’s not news that the biggest problem in software vulnerability management today isn’t “zero days” – that is, scary vulnerabilities for which there isn’t a patch yet. In fact, it’s just the opposite: It’s the fact that most medium to large organizations today are overwhelmed with unapplied patches. Moreover, these organizations have come to realize they will probably never get through their current patch backlog, given the rate at which new vulnerabilities are being identified and patches issued for them almost every day.
Thus, the big problem today is patch prioritization – that is, deciding which patches need to be applied ASAP, which ones should be applied but aren't super urgent, and which ones are not worth applying at all.
Prioritizing patches needs to be based on risk. In other words, patches that mitigate the most risk should be applied first, patches that mitigate substantial risk should be applied next, and patches that mitigate little risk can be ignored (or at least put aside, in case the day comes when all current patches have been applied and the patching team is begging for more work 😊).
Of course, security patches mitigate the risk from the vulnerabilities (usually CVEs) they fix. This means patches need to be prioritized for application based on how risky those CVEs are. Rather than try to determine for themselves how much risk each new CVE poses, most organizations rely on one or more published scores for the CVE.
Risk is a combination of likelihood and impact: respectively, the probability that an attacker will exploit the CVE to attack the organization, and the magnitude of the damage that could result if the attacker succeeds. By far the most widely followed measure of impact (and probably the best) is the CVSS Base Score.
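To make that combination concrete, here is a minimal sketch in Python of one common convention: multiplying a likelihood estimate (such as an EPSS probability, 0 to 1) by an impact estimate (such as a CVSS Base Score, 0 to 10). The function name and the multiplicative formula are my own illustration, not a standard prescribed by FIRST or CISA.

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Combine likelihood (0-1, e.g. an EPSS probability) and impact
    (0-10, e.g. a CVSS Base Score) into one risk value.

    Multiplication is just one simple convention; illustrative only."""
    if not (0.0 <= likelihood <= 1.0 and 0.0 <= impact <= 10.0):
        raise ValueError("likelihood must be 0-1, impact must be 0-10")
    return likelihood * impact

# A high-impact but rarely exploited CVE can pose less risk than a
# medium-impact CVE that is very likely to be exploited:
print(risk_score(0.02, 9.8))  # 0.196
print(risk_score(0.70, 6.5))  # 4.55
```

Note what the example shows: the "scarier" CVE (CVSS 9.8) ends up below the CVSS 6.5 one, because likelihood dominates when it differs by an order of magnitude.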
“Impact” is easy to understand, but what about “likelihood”? What does it mean to say that a vulnerability is “likely” to cause a negative impact on an organization? After all, the fact that a vulnerability is present in a software product I use doesn’t directly harm me. It’s only when an attacker “exploits” that vulnerability that I might suffer harm. This is why the likelihood of an organization being impacted by a software vulnerability is usually referred to as the “exploitability” of the vulnerability.
Today, there are two widely followed measures of the exploitability of a vulnerability. The first and most widely followed is CISA’s Known Exploited Vulnerabilities (KEV) catalog. It is a list of (currently) around 1,300 vulnerabilities[i] that are known to have been actively exploited “in the wild” (i.e., not as part of a controlled experiment).[ii] The fact that a vulnerability has been exploited in the past (even if it isn’t being exploited currently) is a good indication that it may continue to be exploited. In other words, the hackers already know how to find and exploit the vulnerability, so it’s likely they’ll keep doing so. After all, a lot of software users never apply patches; they’re just waiting for the hackers to take them to the cleaners.
KEV is a good measure of exploitability, but the fact that it only includes the roughly 1,300 vulnerabilities known to have been exploited, and says nothing about the approximately 290,000 CVEs that are not known to have been exploited, makes it not hugely useful for prioritizing patches. Of course, any patch that fixes a CVE on the KEV list should be applied as soon as possible. But since only a small fraction of the CVEs fixed by the patches in your backlog are on the KEV list, what can you learn about the exploitability of all the other CVEs?
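The KEV check itself is mechanical: CISA publishes the catalog as a machine-readable JSON feed on cisa.gov, so you can flag every backlog patch that fixes a listed CVE. Here is a minimal sketch of that filtering step; the patch names, the second CVE ID, and the tiny stand-in set (replacing the downloaded catalog) are all hypothetical.

```python
# Sketch: flag the backlog patches that fix a KEV-listed CVE.
# In practice you would download CISA's KEV catalog (published as a
# JSON feed on cisa.gov); this hand-made set stands in for it here.
kev_cves = {"CVE-2021-44228", "CVE-2023-23397"}  # tiny stand-in sample

# Hypothetical patch backlog: each patch lists the CVEs it fixes.
backlog = [
    {"patch": "log4j-2.17.1", "fixes": ["CVE-2021-44228"]},
    {"patch": "webapp-1.4.2", "fixes": ["CVE-2024-11111"]},  # made-up CVE ID
]

# Any patch fixing a KEV-listed CVE goes straight to the urgent pile.
urgent = [p for p in backlog if any(c in kev_cves for c in p["fixes"])]
print([p["patch"] for p in urgent])  # ['log4j-2.17.1']
```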
This is where the EPSS score comes in. EPSS stands for “Exploit Prediction Scoring System”. It was developed and is maintained by FIRST, the Forum of Incident Response and Security Teams (FIRST also maintains CVSS).
EPSS is quite different from KEV, and even from CVSS, in that its primary goal is not to describe the present but to predict the future. It provides a score between 0 and 1, which estimates the probability that a given CVE will be exploited in the wild during the next 30 days. The EPSS score of every CVE ever reported (about 280,000 as of March 2025) is updated daily.
EPSS is 100% data driven. It is created (and constantly updated) by a) gathering data on many different indicators of exploitation, b) including those variables in a mathematical model, and c) updating the model’s weights so it “predicts” recent experience as closely as possible. EPSS scores change daily, so it is important to check them regularly (the current scores can be retrieved automatically from the EPSS website at any time).
Because of this purely statistical approach, there is no causality in the model; it simply reflects correlations. Some of the variables that are tracked are:
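FIRST offers a public EPSS API (on api.first.org) for exactly this kind of automated retrieval. The sketch below doesn't make a live network call; instead it parses a hand-made payload matching the shape the API documents (a "data" array of per-CVE records, with scores delivered as strings), which is the part you'd need to handle either way. The second CVE ID and both score values are invented for illustration.

```python
import json

# Hand-made sample matching the documented shape of an EPSS API
# response (scores arrive as strings); not a live payload.
sample_response = json.dumps({
    "data": [
        {"cve": "CVE-2021-44228", "epss": "0.97000", "percentile": "0.99900"},
        {"cve": "CVE-2020-11111", "epss": "0.00120", "percentile": "0.25000"},
    ]
})

def epss_scores(payload: str) -> dict:
    """Map each CVE ID in an EPSS API payload to its score as a float."""
    return {row["cve"]: float(row["epss"])
            for row in json.loads(payload)["data"]}

scores = epss_scores(sample_response)
print(scores["CVE-2021-44228"])  # 0.97
```

Since the scores change daily, a real version of this would re-fetch and re-parse on a schedule rather than caching results long-term.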
1. Vendor of the affected product
2. Age of the vulnerability (i.e., days since the CVE was published)
3. Number of times the CVE has been mentioned on a list or website
4. Whether there is publicly available exploit code
The EPSS team regularly points out that the scores have no intrinsic meaning on their own; they only have relative meaning when compared with other scores. That is, if the EPSS score for a CVE is 0.2, the only firm lesson that can be drawn from that fact is that the CVE is more likely to be exploited than, for example, a CVE with a 0.1 score, and less likely to be exploited than a CVE with a 0.3 score.
Remember, we’re trying to prioritize patches for application by our organization. To do that, we first need to compare the degree of risk posed by the CVEs that are fixed by those patches. We need to determine a risk score for each CVE, but to do that, we need to score the likelihood that the vulnerability will be exploited in our environment, as well as the impact if it is successfully exploited. We have already decided to use the CVSS Base Score to measure impact, but now we’re trying to get a “likelihood score” – which is equivalent to an “exploitability score”.
We have two measures of the exploitability of a CVE: whether the CVE is present in the KEV catalog, and its current EPSS score. Which should we use? While KEV is currently the gold standard of exploitability measures, the fact that EPSS is such a different measure from KEV, and that it scores so many more vulnerabilities, means it is a good idea to use both measures.
How can you utilize both KEV presence and EPSS score in patch prioritization? Since the fact that a vulnerability is being actively exploited outweighs all other measures of risk, I recommend that you move any patch that fixes a CVE found in the current KEV catalog to the top of the prioritization list; that patch should be applied as soon as possible. After doing that, you can prioritize the remaining patches based on both their CVSS Base Score and their current EPSS score.
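Putting the two steps together, a prioritization pass might look like the sketch below: KEV-listed CVEs jump to the front unconditionally, and the rest are ordered by an EPSS-weighted CVSS score. The two-level sort key and the multiplicative weighting are one illustrative convention, not the recommendation of CISA or FIRST, and all the patch data is made up.

```python
def prioritize(patches, kev_cves):
    """Order patches: those fixing a KEV-listed CVE come first, then the
    rest by descending CVSS x EPSS (one illustrative convention).

    Each patch is a dict with keys 'name', 'cve', 'cvss', 'epss'."""
    def sort_key(p):
        in_kev = p["cve"] in kev_cves
        # Tuple sort: KEV membership first (0 sorts before 1),
        # then higher combined score first (hence the negation).
        return (0 if in_kev else 1, -(p["cvss"] * p["epss"]))
    return sorted(patches, key=sort_key)

# Hypothetical backlog (scores and most CVE IDs are invented):
patches = [
    {"name": "patch-a", "cve": "CVE-2020-11111", "cvss": 9.8, "epss": 0.01},
    {"name": "patch-b", "cve": "CVE-2021-44228", "cvss": 10.0, "epss": 0.97},
    {"name": "patch-c", "cve": "CVE-2020-22222", "cvss": 6.5, "epss": 0.80},
]
order = [p["name"] for p in prioritize(patches, {"CVE-2021-44228"})]
print(order)  # ['patch-b', 'patch-c', 'patch-a']
```

Notice that patch-a, despite its 9.8 CVSS score, lands last: with an EPSS score of 0.01, its combined score (0.098) is far below patch-c's (5.2). That is the whole point of bringing exploitability into the ranking.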
I have another point to make about exploitability, but since what I’ve already written might take some digesting, I’ll stop here. This was the Exploitability 101 course. Look for Exploitability 102, coming soon to a blog near you.
If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.
My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.
[i] The vulnerability management services provider VulnCheck maintains its own KEV catalog, which contains 2-3 times the number of vulnerabilities in CISA’s catalog (VulnCheck’s catalog includes all of the entries in CISA’s catalog).
[ii] Even though a vulnerability has been actively exploited, this doesn’t mean the attacker succeeded in causing some sort of harm to the organization (e.g., stealing some of its data). It just means the attacker was able to reach the vulnerable code and exercise it in some way.