From my point of view (i.e. the only completely unbiased point of view I know of, and I can judge that point, since I’m unbiased), the biggest question mark about the May 12 Executive Order is “What is critical software?” This is important, because the requirements for software suppliers in section (e) of Section 4 apply to critical software (although the EO is somewhat vague on whether only critical software is in scope).
The EO orders NIST to develop a
definition of that term (paragraph (g) of Section 4, pages 15-16), but then
very helpfully goes on to say, “...definition
shall reflect the level of privilege or access required to function,
integration and dependencies with other software, direct access to networking
and computing resources, performance of a function critical to trust, and
potential for harm if compromised.” And not only that, but the EO had already defined
critical software (p. 12, section (a)) as “software that performs functions critical to trust (such
as affording or requiring elevated system privileges or direct access to
networking and computing resources).”
Poor NIST. The White House is
saying “We give you complete freedom to define critical software, as long as
you use one of these two definitions.” The two definitions more or less say the
same thing: “Critical software is software whose exploitation by a bad guy could cause a lot of bad things to happen, due to the nature of the software, how it’s installed, and the privileged access it receives.” In other words, “We want
to prevent another SolarWinds from happening, so we’re going to regulate the
hell out of anything that looks or smells like SolarWinds.”
I can’t particularly blame the WH for taking that attitude: after all, it’s a military tradition to be ready to fight the last war, not the one you face now. But it did occur to me that this isn’t the way we usually think of critical assets (hardware and software), especially in the electric power industry. For example, I would think of all of the following as critical software, even though I doubt any of it runs at high privilege levels:
·        The software that runs the SWIFT system for international money transfers
·        The software that controls the operation of a factory
·        The software that runs a nuclear power plant
·        The Energy Management System (EMS) software that balances power supply and load (demand) with microsecond accuracy in a particular region like a major city, running in the control centers of electric utilities
·        The software that runs the NY subway system
In other words, I think the function
of a piece of software can make it just as critical as the privilege level it
runs at.
But there’s another thing that was
left out of the EO definition, which I pointed out in the post
I wrote the day after the EO came out: intelligent devices. These have become
more and more important in our lives and work, and they perform lots of
critical functions now. Surely some of these should be included as “critical
software”, but I couldn’t see how you could stretch the definition of “software”
to cover devices.
But in this, I overlooked the close-to-unlimited
ability of government to stretch the meanings of words through regulation! The
FDA performed a valuable service to me and the cybersecurity world by pointing
out, in their response to
NIST on the EO, that the Federal Food, Drug, and Cosmetic Act (FDCA) says that some software
is “software that meets the definition of device…”
In other words, devices are software because an Act of Congress made them so! Problem solved (although I must admit I don’t quite understand what this means. Essentially, the FDA is saying “Not all software is soft. Some software is hard and made out of metal, chips, wires, etc.” That seems to me like saying “Not all dogs bark and wag their tails. Some dogs mew and use the litter box.” Just as it would be easier to say “Some household pets are dogs and some are cats”, it would be easier to say “Some software runs on general-purpose devices like Intel-standard servers, and other software runs on dedicated, sealed devices like infusion pumps in hospitals”, rather than saying the devices themselves are software. But then, what do I know?).
In any case, the FDA goes on to
say that ‘software is “critical software” generally (i) where it meets the definition
of device and (ii) where the software is necessary for the safe and effective
use of a device.’ In other words, the FDA wants to discard the EO definition of
“critical software”, and just have the term apply to devices (of course, mainly
medical devices, since only those are defined as software in the FDCA) and the
software in those devices.
I don’t disagree with the FDA that
devices and the software they run should be called critical software, although
I would prefer expanding the term to “critical software and devices”. In that
term, I would include:
1.      Software running at elevated privilege levels, as in the EO’s definition
2.      Devices performing critical functions, and the software included in them. This covers more than just medical devices. For example, the electronic relays found in just about every electric substation worldwide are crucial to the safe and reliable operation of the power grid.
3.      Software that performs critical functions using general-purpose hardware, such as the five items I mentioned above.
See? Now everybody can be happy:
the White House, the FDA, me. What more could you ask for?
Any opinions expressed in this
blog post are strictly mine and are not necessarily shared by any of the
clients of Tom Alrich LLC. Nor
are they shared by the National Telecommunications and Information Administration’s
Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you
have read here, I would love to hear from you. Please email me at tom@tomalrich.com.