I was recently given the opportunity to review a redacted copy of the TSA pipeline cybersecurity directive issued July 19. I was eager to do this, in order to answer two questions: A) Is this a step forward from existing OT cyber regulations – especially NERC CIP? If so, it could provide a guide to how CIP can be remade to become much more effective and efficient than it currently is. Or B) is it a step backwards? If so, it will at least furnish a good object lesson in how not to regulate OT cybersecurity.
My answer…drum roll, please…is that
the TSA directive is a big step backwards. As such, it will provide us with
some great lessons on how not to regulate OT, and especially how not to revamp
NERC CIP. I’ve listed below what I think are some of the big failings of the
directive, but I’ve restated them as lessons learned for any agency or
organization that might someday find itself writing or rewriting OT
cybersecurity standards.
1. Involve the industry in writing/rewriting the standards.[i] A Washington Post article
quoted an industry executive as saying the pipeline industry was only given
three days to respond to a draft of the standards. Of course, giving them only
three days was the equivalent of saying, “Here you go, you bastards. This is all
you deserve, and this is all you’ll get!” I’m sure that, if the industry had had more time to respond, they would have made points like the ones I’m making below, and maybe the TSA directive would have turned out to be more effective than it will almost certainly prove to be.
2. Don’t make the
standards secret. What purpose does that serve? And more importantly, how could
the standards ever be implemented unless the whole organization knows about
them, and each person understands what role they can play in making sure the
implementation is successful? As it is, I believe that whoever drafted the
directive (Darth Vader?) somehow got the idea that cybersecurity for a large
organization can be achieved by having a tiny group implement a relatively small
number of very technical – and untried at scale – measures, while keeping the rest
of the organization completely in the dark about them. Here’s the real story: This approach could never work.
Period. Unless you involve everybody in the organization to some degree, you’ll
end up spending a lot of time and money implementing a bunch of controls that
will be obsoleted within months. The only lasting change will come from the
organization itself. No organizational change, no improvement.
3. Don’t impose requirements that are literally impossible to comply with. The directive contains requirements built around words like “ensure”, “prevent”, “prohibit”, “eliminate”, “block”, etc.
As if there were any way that an organization could promise that they
will ensure, prevent, prohibit, eliminate, or block anything in the constantly-changing
world of cyber attacks. Requirements like these will probably end up with zero
compliance as written. However, this result may be hidden because “compliance”
isn’t defined at all! Which leads me to the next point.
4. Define how compliance with the requirements will be assessed. A few of the requirements state how they’ll be assessed: through self-assessment within a certain
time period. But there’s no discussion about auditing, and it seems that it
will be entirely up to the pipeline operators to report on their state of
compliance with each of the requirements, as well as to determine how often they will report, since I see little or nothing in the directive that requires any ongoing reporting (although it will certainly be required at some point). There is a requirement for an annual third-party
evaluation of the “Owner/Operator’s” OT “design and architecture”. As far as I
can see, those are the only occasions when someone other than the pipeline
operator will look at what they’ve put in place. But there’s no indication that
these evaluations will in any way constitute an “audit”. I’m not a big fan of
prescriptive audits, but I think there does need to be something more than
simply a review of documentation.
5. Don’t take new ideas that haven’t been discussed a lot in public forums and turn them into
requirements. The directive includes very specific requirements based on
technologies about which I’ve seen very little (if any) public discussion: “passive
DNS”, “known malicious IP addresses” (is it really likely a bad guy would keep using the same IP address? The first sketch after this list shows why that’s a shaky assumption), “SOAR” (I know generally what this means, but there must be some specific meaning in order for it to be the basis for a prescriptive requirement), and others.
6. Finally, make sure you’re addressing the really important risks. For example, the widespread outages
caused by the Colonial Pipeline ransomware attack (the inspiration for this
directive, in case you hadn’t guessed that) were due to the fact that Colonial
felt compelled to shut down their OT network, even though the ransomware had
never spread to it. Obviously, there was some link between the two networks
that necessitated the OT shutdown. What was that link? The directive goes
through all of the usual suspects and tries to address every one of them. There
are requirements for physical and logical segmentation, a DMZ, isolating “unrelated
sub-processes” (a wonderful undefined term), monitoring and filtering traffic between
different trust levels, implementing zones and conduits, prohibiting OT
protocols from traversing the IT network (the second sketch after this list gives a rough idea of what checking that last one could look like) – just about every remedy you might find in an ICS security textbook. Yep, they addressed everything except for what
caused Colonial’s OT network to be shut down. That one they missed,
although you can read about it here.
Oh well, that’s the only one they missed. Not so bad, no?
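To make the point about “known malicious IP addresses” concrete (point 5 above), here’s a minimal sketch of what such a control boils down to. Everything here is hypothetical - the blocklist, the connection records, the addresses - and nothing comes from the directive itself:

```python
# Hypothetical sketch of a "known malicious IP addresses" control.
# The blocklist and connection records are made-up illustrations
# (RFC 5737 documentation addresses); nothing here is from the directive.

# A threat-intel feed ultimately boils down to a static set of indicators.
BLOCKLIST = {"203.0.113.7", "198.51.100.42"}

# Each record: (timestamp, source IP, destination IP, destination port).
CONNECTIONS = [
    ("2021-07-20T10:00:01", "10.1.5.20", "203.0.113.7", 443),
    ("2021-07-20T10:00:05", "10.1.5.21", "192.0.2.99", 443),
]

def flag_connections(connections, blocklist):
    """Return the records whose destination IP is on the blocklist."""
    return [c for c in connections if c[2] in blocklist]

for record in flag_connections(CONNECTIONS, BLOCKLIST):
    print("ALERT:", record)
```

The weakness is built in: the check only matches addresses that somebody has already seen and reported, and an attacker who moves to a fresh IP address - which is cheap and routine - sails right past it.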
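And here’s a rough sketch of what checking the “no OT protocols traversing the IT network” requirement (point 6) could look like, assuming you already have flow records tagged by network zone. The zone labels, record format, and sample data are my assumptions, not anything the directive specifies; the port numbers are the common defaults for those protocols:

```python
# Hypothetical sketch of auditing for OT protocols seen on the IT network.
# Zone labels, flow-record format, and sample data are assumptions for
# illustration; the ports are common defaults for each protocol.

OT_PROTOCOL_PORTS = {
    502: "Modbus/TCP",
    20000: "DNP3",
    44818: "EtherNet/IP",
}

# Each flow record: source zone, destination zone, and destination port,
# as you might get from firewall logs or a network monitoring sensor.
FLOWS = [
    {"src_zone": "IT", "dst_zone": "OT", "dst_port": 502},
    {"src_zone": "OT", "dst_zone": "OT", "dst_port": 20000},
    {"src_zone": "IT", "dst_zone": "IT", "dst_port": 44818},
]

def audit_flows(flows):
    """Flag any flow that carries an OT protocol into or across the IT zone."""
    findings = []
    for flow in flows:
        protocol = OT_PROTOCOL_PORTS.get(flow["dst_port"])
        if protocol and "IT" in (flow["src_zone"], flow["dst_zone"]):
            findings.append(f"{protocol} between {flow['src_zone']} and "
                            f"{flow['dst_zone']} (port {flow['dst_port']})")
    return findings

for finding in audit_flows(FLOWS):
    print("FINDING:", finding)
```

Easy enough to write - but notice what a check like this can’t see: the dependency between the two networks that actually forced Colonial to shut down its OT side. A list of textbook controls audits what’s on the list, nothing more.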
I don’t think I’ll surprise you by
saying I don’t think very much of the TSA directive for the pipeline industry.
What could they have done instead? Here’s a radical idea: Why
couldn’t they have ordered pipeline companies to implement a cybersecurity risk
management program based on the NIST Cybersecurity Framework? Admittedly, this
wouldn’t have been seen as an innovation, and it wouldn’t have resulted in a
Full Employment Act for cybersecurity consultants, who can now hold themselves out as experts familiar with the arcane niches of the TSA directive.
But it would have been a better
way to secure the pipeline industry, on both the IT and OT sides. And that ain’t
hay.
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.
[i] Note that the TSA document is a one-time directive, not a set of ongoing standards. DHS will institute a formal rulemaking process at some point to draft the ongoing standards. When that happens, I hope they’ll take a look at these lessons learned.
I agree the government overclassifies. I think it’s partly because it’s easier (classification is hard - just ask anyone in cybersecurity who has tried to determine what a company’s truly critical information is). But another contributing factor is that the cost of overclassification isn’t taken into account. The points Tom raised mean the risk and cost are higher because of this overclassification.