The time isn’t far off when critical infrastructure (CI) industries, including the electric power industry, will face overwhelming pressure to start using AI to make operational decisions, just as AI is probably already being used to make decisions on the IT side of the house. Even the North American Electric Reliability Corporation (NERC), which drafts and audits compliance with the NERC CIP (Critical Infrastructure Protection) standards, acknowledged that fact in a very well-written document they released last fall.
However, while it’s certain there will be lots of pressure for this in all CI industries, it’s also certain it won’t happen without some sort of regulations in place: either mandatory, as in the case of NERC CIP, or voluntary, as is likely in CI industries that lack mandatory cyber regulations, such as manufacturing. My guess is that those industries will develop their own regulations through industry bodies like the ISACs, since the manufacturers themselves are probably as afraid of the harm that aberrant LLMs could cause as everyone else is.
I used to think that AI security regulations for CI would need to be very much in the weeds, with restrictions on how the models can be trained and the like. However, I now realize that trying to do that would be a fool’s errand, since in fact only four rules are needed:
1. An AI model can never be allowed to make an operational decision on its own. It can only advise a human, not make the decision for them.

2. The human can’t face a time limit, such that if they don’t decide within X minutes, the model decides for them.

3. If the human doesn’t make the decision at all, the model can’t raise any objections. We don’t need humans succumbing to “peer pressure” from LLMs!

4. The human can’t be constrained by policies to accept the model’s recommendation. The decision must be theirs alone, including the decision not to do anything for the time being.
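To make the four rules concrete, here is a minimal sketch of what an “advisory-only” wrapper around a model might look like. All of the names here (AdvisoryOnlyAI, Recommendation, and so on) are hypothetical illustrations, not any real product or API; the point is simply that each rule becomes a structural property of the code rather than a training-time restriction.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Recommendation:
    """Advice from the model. It is data only; nothing here can act on equipment."""
    action: str
    rationale: str


class AdvisoryOnlyAI:
    """Hypothetical wrapper enforcing the four rules in code."""

    def recommend(self, situation: str) -> Recommendation:
        # Rule 1: the model's output is advice only. This class has no
        # access to actuators, so it cannot make an operational decision.
        return Recommendation(
            action=f"suggested response to: {situation}",
            rationale="model reasoning would go here (illustrative)",
        )

    def on_human_timeout(self) -> None:
        # Rule 2: there is deliberately no fallback path. If the human
        # takes an hour (or a week), the model never decides by default.
        raise NotImplementedError("No fallback: the model never decides for the human")

    def on_human_inaction(self) -> None:
        # Rule 3: if the human ignores the recommendation, the model
        # stays silent. No objections, no nagging.
        pass


def human_decides(rec: Recommendation, accept: bool) -> Optional[str]:
    # Rule 4: the human may accept, reject, or simply do nothing.
    # No policy in this code path forces acceptance of the recommendation.
    return rec.action if accept else None
```

The design choice worth noting is that the rules are enforced by what the code *cannot* do (no actuator access, no timeout handler) rather than by trusting the model to behave.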
Of course, you might be wondering about time-critical decisions, like the millisecond-level “decisions” sometimes required in power substations. Those need to be made as they are today: by devices like electronic relays or programmable logic controllers that operate the old-fashioned way: deterministically.
Perhaps one day AI will be so reliable that it can be trusted to make even those decisions on its own. But that day is probably far in the future and may never come at all. Once AI can be as intelligent as the nematode worm Caenorhabditis elegans, a living model organism whose genome contains counterparts of a majority of human genes, I might be persuaded to change my mind.
Any opinions expressed in this
blog post are strictly mine and are not necessarily shared by any of the
clients of Tom Alrich LLC. If you would like to comment on what you have
read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also email me if you would like to participate in the
OWASP SBOM Forum or donate to it (through a directed donation to OWASP, a
501(c)(3) nonprofit organization).
My book "Introduction to SBOM and VEX"
is available in paperback
and Kindle versions! For background on the book and the link to order it,
see this post.