Christian Vasquez of E&E News published two great articles today. One was a really powerful editorial included in the free weekly cybersecurity newsletter that he edits; you can read it here. And if you're not a subscriber to the newsletter, you should be. You can sign up here.

The other was an article published in Energywire; it's available for free here. It's titled "DOE grid security push a test for Biden". It makes the following points:
1. The Executive Order has now been pared down to focus just on electric infrastructure that serves military facilities in the US and territories.
2. DoE is struggling to put together a Notice of Proposed Rulemaking before Jan. 20, although the article notes that, if anything, the Biden administration will take a tougher approach than the Trump administration has.
The article then shifts gears and discusses general hacking threats to the grid (especially to "defense critical electric infrastructure", or DCEI) that don't come through the supply chain. I won't discuss that part of the article, just the part that focuses on the EO. Here is my humble (OK, perhaps not that humble) opinion on what the article says about the EO:
1. If the EO were enforced as written, it would be a disaster for the electric power industry. It would require a huge expenditure of funds, and since the utilities certainly wouldn't be able to fund what's required, DoE would have to do so. DoE would need a huge budget increase, although it might pay for the work by selling our nuclear weapons research facilities to the Russians or the Chinese (note: I'm not suggesting they do this!).
2. At least 90% of the money spent on complying with the EO would be wasted. Meanwhile, the real supply chain cybersecurity risks to the Bulk Electric System (or Bulk Power System, as the EO reads; the two are close to the same thing) would be barely touched by all of this expenditure.
3. If the EO were pared down to just the defense critical infrastructure but otherwise enforced as written, it would be a lesser disaster, but not by a lot. This is because most of the effort required to comply with the full order would still need to be expended to comply with the reduced scope.
4. However, I think there is a way the order could be re-interpreted so that it actually could mitigate a lot of real supply chain cybersecurity risk, rather than requiring huge amounts to be spent mitigating largely imaginary risks. The cost of this effort would still be too high for utilities to fund on their own, but if DoE were to do it, it would not require an astronomical increase in their budget.
I have written close to ten posts about the EO (the three most important being this, this, and this). But here are my main reasons for saying the EO is woefully misguided:
1. The EO states clearly that its goal is to protect against cybersecurity risks posed by foreign adversaries planting malware and backdoors in certain devices used on the BES. Very helpfully, it lists 25 or so types of devices to which the order applies. But there's one problem: you can only launch a cyberattack against a device that has a microprocessor (or some other logic engine like an FPGA). Of the 25 devices listed, guess how many have a microprocessor? As described in this post, Kevin Perry and I could only identify three of those devices that definitely have one, and two that might have one under certain circumstances. The other 20-odd devices could no more be subject to a cyberattack than the fluorescent lights in my kitchen; yet those devices are still subject to all the provisions of the EO.
2. Let's now focus on the devices that could be subject to a cyberattack. There are only two ways a backdoor could be planted in hardware: by compromising firmware or by altering the microcode of a microprocessor. Firmware is just software loaded onto chips; it can be updated by loading a new version onto the same chips (it's been a long time since a firmware upgrade meant pulling the existing chips out and replacing them with a new set, praying all the while that you don't bend a pin in the process). As described in this post, a supply chain attack on firmware would be much more difficult to pull off than an attack on software, and more importantly would be virtually impossible to distinguish from a garden-variety vulnerability – and firmware is loaded with those. The question becomes: with so many vulnerabilities in any given piece of firmware, what's the purpose of introducing a backdoor (which is just a deliberately planted vulnerability)? Why not just use one of the many vulnerabilities that are already in the firmware?
3. How about a supply chain attack on a microprocessor? That would be devastating and would be completely undetectable by any method that utilizes software (as discussed in the article by Finite State, linked in the post referenced in the previous paragraph). On the other hand, there is no record of such an attack ever being pulled off, although it's been rumored that the CIA has done it in certain network devices shipped overseas by a US company.
4. To summarize the above: the EO focuses entirely on hardware. Four-fifths of the device types it supposedly applies to couldn't be subject to a cyberattack at all, and the remaining five devices could virtually never be attacked except through software – which is never even mentioned in the EO.
5. Nevertheless, the EO is very concerned about supply chain cyberattacks planted in hardware that is designed, manufactured, assembled, etc. in hostile foreign nations, or that is subject to influence by those nations (especially through ownership). Yet, in this post, Kevin Perry and I concluded that we could identify no systems that control or even just monitor the grid (known as BES Cyber Systems) that would in any way meet these criteria, except for motherboards of generic Dell and HP servers used in Control Centers. And these could obviously never be the subject of a targeted attack, since the factory in China doesn't know whether the server will ultimately end up at an electric utility or a dry cleaner's.
6. However, there definitely is a concern with hardware components, which are mentioned in the EO. Most of these don't have any embedded logic and aren't subject to a cyberattack, but they could be subject to counterfeiting for various reasons (usually financial). It is fiendishly difficult to trace these components, let alone know who designed them or where they were manufactured. But this is a worthy avenue of inquiry to pursue; I know Fortress Information Security is doing work in that area now.
Clearly hardware is not a source of much supply chain cybersecurity risk, even though it is the entire subject of the EO. Software, however, is very much a source of supply chain cybersecurity risk. Many backdoors have been planted in software, and some large financial losses have been racked up because of them.

However, by far the biggest source of software supply chain cyber risk is simply vulnerabilities in the software that aren't properly addressed by the supplier. Software vulnerabilities are identified all the time. As long as they're quickly addressed by the supplier – usually through a patch – they aren't risks at all. A risk arises when a supplier doesn't patch when, or as quickly as, it should.
On the other hand, the whole idea of protecting against hostile nation-states falls down with software, since it's just about impossible to say what country a software product is developed in. Because a large portion of any software product nowadays (one recent estimate is at least 70%) consists of components, either open source or commercially developed, and because the actual developers of these components might be located anywhere in the world, the very term "country of origin" is close to meaningless when it comes to software.
So here is how I would re-interpret the EO (although "re-interpret" isn't quite the right word; the right way to say this would be to throw the EO as written in the trash can and rewrite it completely. But since people in the White House seem to have easily hurt feelings nowadays, it's important for DoE not to actually say that's what they're doing):
1. It should focus on software, not hardware. As I said above, I think determining provenance for hardware components is a worthwhile activity, but it's not a cybersecurity risk mitigation activity, so it wouldn't fit in my rewritten EO.
2. It should drop the idea of country of origin for software. Yes, there have definitely been software supply chain cyberattacks by one nation state against another (the most successful by far being this one by the US against the Soviet Union), but they aren't the real problem. There are two real problems. The first is lax security practices on the part of some software developers that allow vulnerabilities to appear in their software, whether intentionally or unintentionally planted (with the latter constituting the great majority).
3. The second is vulnerabilities that appear in software components (which, as I said above, constitute probably more than half of the code in any software product you buy). If suppliers routinely patched these just as they patch vulnerabilities in the code they wrote themselves, this wouldn't be a big problem. However, in many cases the supplier doesn't even know about vulnerabilities in components (and they almost never know about vulnerabilities in components of components), let alone patch them promptly. This problem can only be addressed if software customers know what components are in the software they operate, through receiving – you guessed it – software bills of materials.
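To illustrate why "components of components" matter here, below is a minimal sketch of walking an SBOM to list every component, nested ones included. The SBOM shown is a tiny, invented CycloneDX-style structure; real SBOMs are far larger, and the component names are purely for illustration.

```python
def list_components(sbom: dict) -> list:
    """Return 'name version' strings for every component in a
    CycloneDX-style SBOM dict, descending into nested sub-components."""
    found = []

    def walk(components):
        for comp in components:
            found.append(f"{comp.get('name', '?')} {comp.get('version', '?')}")
            # CycloneDX allows a component to declare its own components,
            # which is exactly the "components of components" case
            walk(comp.get("components", []))

    walk(sbom.get("components", []))
    return found

# Hypothetical minimal SBOM: a product shipping openssl, which itself
# bundles zlib (a component of a component)
sample = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "openssl", "version": "1.1.1g",
         "components": [{"name": "zlib", "version": "1.2.11"}]},
    ],
}
print(list_components(sample))  # ['openssl 1.1.1g', 'zlib 1.2.11']
```

The point of the sketch is that without the recursive walk, the zlib entry – the kind of buried dependency a supplier almost never tracks – would simply never surface.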
So I propose that DoE, in compliance with the "Executive Order as revised by Tom Alrich on December 3, 2020", launch a large project to:
1. Identify all major software products used to monitor or control the power grid in the US;
2. Develop software bills of materials for all of those products (or better yet, ask the suppliers to develop them, although DoE could still develop their own SBoMs as a check on what the suppliers send);
3. Perform the difficult task of name resolution (to CPE names) for those software components;
4. Using the NVD, identify open vulnerabilities (CVEs) that apply to each component;
5. For each final product that contains a particular component, and for each vulnerability listed for that component, coordinate with the supplier of the component – which in the majority of cases will be an open source community – to determine whether or not the vulnerability in the component is in fact exploitable in the final product (this is a huge issue with SBoMs, and is described in this post); and finally
6. Publish for the power industry a complete list of all exploitable component vulnerabilities in each of the software products used to operate or monitor the power grid.
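The mechanical core of steps 3 and 4 – given a resolved CPE name, ask the NVD which CVEs apply – can be sketched as below. This is an illustration only: the endpoint shown is the NVD's public JSON API, but the exact URL, parameters, and response shape should be verified against current NVD documentation, and a real project at this scale would also need API keys, paging, and rate limiting. The sample response is invented for the example.

```python
import urllib.parse

# NVD's public CVE API endpoint (verify version and parameters
# against current NVD documentation before relying on this)
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_query_url(cpe_name: str) -> str:
    """Build an NVD query URL for all CVEs matching one CPE name."""
    return f"{NVD_API}?cpeName={urllib.parse.quote(cpe_name, safe='')}"

def extract_cve_ids(response: dict) -> list:
    """Pull the CVE identifiers out of a parsed NVD JSON response."""
    return [v["cve"]["id"] for v in response.get("vulnerabilities", [])]

# Step 3's output feeds in here: one resolved CPE name per component
url = nvd_query_url("cpe:2.3:a:openssl:openssl:1.1.1g:*:*:*:*:*:*:*")

# Hypothetical, trimmed-down response body for illustration
sample_response = {
    "vulnerabilities": [
        {"cve": {"id": "CVE-2021-3711"}},
        {"cve": {"id": "CVE-2021-3712"}},
    ]
}
print(extract_cve_ids(sample_response))  # ['CVE-2021-3711', 'CVE-2021-3712']
```

Even this toy version shows why step 3 (name resolution) is the hard part: the NVD lookup itself is trivial, but it only works if every component has first been mapped to exactly the right CPE name.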
This will be a huge job, and it is clearly beyond the ability of any electric utility acting on its own. But it will mitigate what I believe are the most important supply chain cybersecurity risks to the power grid today. And the resources required will be a small fraction of what would be needed to comply with the Executive Order as it was published on May 1.
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.