The EU Cyber Resilience Act (CRA) was proposed about two years ago; since then, there has been a lot of talk about what is or isn’t in it. However, I’ve found that talk frustrating, because the initial Act didn’t have much in the way of details, so the question of what was “in” it was close to meaningless. I understand this is a feature of some EU legislation: an initial act just establishes the agreed-upon purpose to be achieved, and the details come in a later “Implementing Act”.
This wasn’t just frustrating for
me, but also for the several ENISA (the EU’s CISA) staff members who are part
of the OWASP SBOM Forum and discussed the CRA at a couple of our meetings last
fall. It seemed they were always sure the details were just a month away - but
they admitted they’d been saying that for two years. I was starting to think that
waiting for the details would be like living through the great play
“Waiting for Godot” (written by the Irishman Samuel Beckett, originally in
French) – a play that starts and ends with two men waiting for Godot in a
barren landscape. They’re assured every evening that Godot will be there
without fail the next day, but of course he never shows - and they even admit
he will never show. They continue to wait nevertheless.
Fortunately, in the case of the
CRA it seems Godot has finally shown up. After being told a couple of weeks ago
that the document with the details had finally been released (only to hear next
that nobody knew where it was), I was quite happy to hear from Jean-Baptiste
Maillet on LinkedIn (who was commenting on my most recent post)
that the full text of the CRA is now available here. Jan. 17: Christian Tracci pointed out to me on LinkedIn that this text has just been approved by the Council, but still needs to be approved by the European Parliament. However, it is unlikely to change much (if at all) before that happens.
The document is a brief 189 pages.
It looks like it has a lot of good things in it, which I will read later, but
what I was most interested in – and I imagine a lot of readers will be, too –
was Annex I on page 164, titled “Essential Cybersecurity
Requirements”. It’s only three pages (actually, even less than that),
but it is obviously very well thought out.
If you’re a fan of prescriptive
requirements, with specific “Thou shalt” and “Thou shalt not” statements, well…perhaps
you need to look somewhere other than the CRA for satisfaction. The requirements
in Annex I are objectives-based (which I consider to be the equivalent of
risk-based). That is, they require a product to achieve a particular objective,
but the means by which it is achieved are up to the manufacturer. Here are two
examples (each is preceded by “products with digital elements shall:”):
· “process only data, personal or other, that are adequate, relevant and limited to what is necessary in relation to the intended purpose of the product (‘minimisation of data’).”

· “provide the possibility for users to securely and easily remove on a permanent basis all data and settings and, where such data can be transferred to other products or systems, ensure this is done in a secure manner.”
The interesting point about the
requirements in this section is that they all seem to be focused entirely on
intelligent devices; none of them seem to be written for what I
call “user-managed” software: i.e., software that gets installed on generic
servers, rather than coming pre-installed in a device. Since I’m currently quite
focused on the shortcomings of vulnerability and patch management in intelligent
devices, here is some of what I found in Annex I on those topics (there’s
a lot more that could be said about it).
1. Part II of Annex I is titled “Vulnerability handling requirements”.
The first of those requirements is “identify and document vulnerabilities
and components contained in the product, including by drawing up a software
bill of materials in a commonly used and machine-readable format covering at
the very least the top-level dependencies of the product.”
Note that this is a requirement for the device manufacturer to “draw up” an SBOM for the device, but it doesn’t say
anything about distributing the SBOM to customers. However, in Annex II, page
169, we find “(As a minimum, the product with digital elements shall be
accompanied by)… If the manufacturer decides to make available the software
bill of materials to the user, information on where the software bill of
materials can be accessed.”
To summarize, the manufacturer is
required to have an SBOM for their device for their own use. Sharing it with
customers is optional.
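For readers who haven’t seen one, here is a minimal sketch (in Python, printing JSON) of what such an SBOM might look like in the CycloneDX format, one commonly used machine-readable format (SPDX is another). It covers only top-level dependencies, the minimum the requirement calls for; the device and component names are invented for illustration and are not from the CRA.

```python
import json

# A minimal CycloneDX-style SBOM for a hypothetical device.
# Only top-level dependencies are listed, the minimum the CRA
# requirement quoted above calls for. All names are invented.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "metadata": {
        "component": {
            "type": "firmware",
            "name": "example-router-firmware",  # invented device name
            "version": "4.2.0",
        }
    },
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "purl": "pkg:generic/openssl@3.0.13",
        },
        {
            "type": "library",
            "name": "busybox",
            "version": "1.36.1",
            "purl": "pkg:generic/busybox@1.36.1",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```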
Also note that the requirement is to document “vulnerabilities and components” contained in the product.
While it is certainly possible to document vulnerabilities in an SBOM, it is
usually much better to keep a separate inventory of the vulnerabilities found in
the product. The components in the product (that is, the software and firmware
products installed in a device) will not change between versions of the device
(a new version is usually an update to all the software installed in the device). But the
vulnerabilities and their exploitability status designations change all the
time; for example, the status of a particular CVE may change from “under investigation”
to “not affected”, meaning the supplier has determined that the vulnerability
is not exploitable in the product, even though it is present in one of the
components. This is why it’s usually better to list vulnerabilities in a
separate VEX document; the VEX may have to be refreshed every day.
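To make the separation concrete, here is a minimal sketch of a standalone VEX document in the CycloneDX VEX format (CSAF is the other common VEX format). The CVE number and the component reference are invented; the point is that a document like this can be reissued daily without touching the SBOM.

```python
import json

# A minimal CycloneDX-style VEX document, kept separate from the SBOM.
# It records the supplier's determination that a CVE present in one of
# the components is not exploitable in the product itself. The CVE
# number and component reference are invented for illustration.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "vulnerabilities": [
        {
            "id": "CVE-2024-12345",  # invented CVE number
            "source": {"name": "NVD", "url": "https://nvd.nist.gov"},
            "analysis": {
                # "in_triage" roughly corresponds to "under investigation";
                # here the supplier has finished its analysis.
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "The vulnerable OpenSSL function is never "
                          "called by this product.",
            },
            "affects": [
                # Reference to the component from the SBOM sketch above.
                {"ref": "pkg:generic/openssl@3.0.13"}
            ],
        }
    ],
}

print(json.dumps(vex, indent=2))
```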
2. The first part of the second requirement in Part II of Annex
I reads, “in relation to the risks posed to the products with digital
elements, address and remediate vulnerabilities without delay, including by
providing security updates”. Given that we’re talking about devices, there
are two significant problems with this requirement, although I’m sure they were
unintentional.
The first problem concerns how the manufacturer’s
“conformance” (the EU’s favorite word, it seems) with this requirement will be
assessed. If we were talking about a software product, the answer would be
easy: the assessor (auditor) would check the National Vulnerability Database
(NVD) for vulnerabilities that are applicable to the product and
determine whether they had been remediated. Ahh, would it were so simple.
However, it’s not, for these reasons:
a) The vulnerabilities found in the NVD (or in any other vulnerability database that is based on vulnerabilities identified with a CVE number) are almost all there because the developer of a software product reported them to CVE.org. A large percentage of software developers (especially the larger ones) do a good job of reporting vulnerabilities that way.
b) But as I’ve documented recently, and will in more detail in an upcoming post, intelligent device manufacturers are not doing a good job of reporting vulnerabilities in their devices to CVE.org. In fact, it seems a lot of major device manufacturers are not reporting any vulnerabilities in any of their devices (Cisco is one big exception to this statement).
c) When a device manufacturer doesn’t report vulnerabilities for their products, does this result in a black eye for them? Au contraire! If the manufacturer has never reported a vulnerability for their device, any attempt to look up the device in the NVD will return the message “There are 0 matching records”. This happens to be the same message the user would see if the product they’re looking for were free of vulnerabilities.
d) In other words, there is no advantage for a device maker in reporting vulnerabilities in their products, since someone searching for them in the NVD is likely to assume that the devices are vulnerability-free when they see the above message; moreover, if a user does find vulnerabilities reported for a device, that will often be held against the manufacturer when similar devices are compared for procurement. An NVD user won’t normally realize that the message really means no vulnerabilities have been reported for the device – perhaps because the manufacturer is not even bothering to look for them. (The sketch just after this list illustrates the ambiguity.)
e) Of course, a device might well be loaded with vulnerabilities, even if the manufacturer hasn’t reported any. For example, in this post I wrote about a device that probably has over 40,000 open vulnerabilities; yet the manufacturer has never reported a single vulnerability for that device, or for any of the 50 or so other devices it makes.
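To see why the “There are 0 matching records” message is so ambiguous, here is a sketch of the kind of lookup an assessor might script against the NVD’s CVE API 2.0. The product name is made up; the point is that a zero count cannot distinguish a vulnerability-free device from one whose manufacturer simply never reports.

```python
import json
import urllib.parse
import urllib.request

# Query the NVD CVE API 2.0 for records mentioning a product.
# "AcmeCam 3000" is a made-up device name used for illustration.
product = "AcmeCam 3000"
url = ("https://services.nvd.nist.gov/rest/json/cves/2.0?"
       + urllib.parse.urlencode({"keywordSearch": product}))

with urllib.request.urlopen(url) as response:
    data = json.load(response)

total = data["totalResults"]
if total == 0:
    # This is the ambiguous case described above: zero results may mean
    # the device has no vulnerabilities, or simply that the manufacturer
    # has never reported any.
    print(f"No CVE records found for {product!r} - but this does NOT "
          "prove the device is vulnerability-free.")
else:
    print(f"{total} CVE records found for {product!r}.")
```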
The second problem with this requirement
is the wording regarding patching vulnerabilities “without delay”. I wonder if these
words were based on any knowledge of how device manufacturers actually patch
vulnerabilities: except in real emergencies such as the log4shell
vulnerability, device manufacturers hold all patches until the next full device
update. For them, “without delay” means “at the next device update”.
I had known this for a while, but
since I assumed that device makers normally do a full update either once a
month or at worst once a quarter, I thought this was fine. My guess is the risk
of leaving most vulnerabilities (that are not being actively exploited)
unpatched for three months is fairly low.
However, last year I was surprised
– if that’s the right word – to learn from a friend who works in product
security for one of the largest medical device makers (MDMs) in the world that
they issue just one full device update a year. I asked if that meant
non-emergency patches might wait up to 364 days to be applied. He said yes –
and remember, this company makes medical devices that can keep people alive,
not smart refrigerators or baby monitors.
But the problem is worse than that,
because this medical device maker, and presumably most other MDMs (since the
company is an industry leader), never reports an unpatched vulnerability, even
to their own customers. For a software product, this policy makes sense, since after
learning of a serious vulnerability, most software suppliers will pull out all
the stops to issue a patch within a few days, or at most a week.
But if it might literally be hundreds
of days before a device manufacturer patches a vulnerability (not because
they couldn’t patch it sooner, but because doing so requires a lot of work and
expense on their part), the least they should do is discreetly let their
customers know about the vulnerability and suggest one or two other mitigations,
including perhaps removing the device from the network until it can be updated.
However, even this is conceding too
much. A device maker should patch any exploitable vulnerability within three
months, period – plus, they should always notify their customers of it within a
week, so they can at least take whatever mitigation measures they think are
necessary.
I hope to put up a post soon that will describe these problems in much more detail, and also propose what I
think are sensible policies to address them.
Any opinions expressed in this
blog post are strictly mine and are not necessarily shared by any of the
clients of Tom Alrich LLC. If you would like to comment on what you have
read here, I would love to hear from you. Please email me at tom@tomalrich.com.
I lead the OWASP SBOM Forum. If
you would like to join or contribute to our group, please go here, or email me with any questions.