On July 4, 2016, I wrote
about a meeting I had just attended: the second meeting of the NERC “Project
2016-02 Modifications to CIP Standards”
drafting team. Coincidentally, this team finally finished its work earlier this year, when the revised CIP standards to accommodate virtualization were approved by the NERC ballot body and the NERC Board of Trustees, and submitted to FERC for approval.
This meeting occurred soon after an important turning point
in the CIP standards:
1. The CIP versions 5 and 6 standards had come into effect on July 1, 2016 – i.e., three days before I wrote the post.
2. The “CIP Mods” team (as the new team came to be known) was created to address issues that had come to light as NERC entities were preparing for implementation of CIP v5 and v6.
In my post, I noted an issue that I had started writing
about at the beginning of 2016:
…since the beginning of this year I have put out a few posts
- such as this
one and this
one - that take a different tack. They don’t criticize the v5 wording,
but do criticize the whole idea of what I call “prescriptive standards” – which
is what all the CIP versions have been. It may be clear from these two posts
that I think the prescriptive approach doesn’t work for cyber security
standards, even though it might be fine for the other NERC standards. And since
the requirements of CIP v5 and v6 are prescriptive ones (with two exceptions,
which I’ll discuss below), it seemed certain that v7 will be prescriptive as
well.
I cited two other posts in the above paragraph. Near the end
of the first
of those two posts (from February 2016), I asked, “Is CIP v5/v6/v7
Sustainable?” I pointed to the new items that were on this new drafting team’s
plate (or might be soon). I wondered whether they would result in even more prescriptive
requirements (as if the CIP community didn’t have enough of those already). I
added:
I have asked a small number of compliance people for NERC
entities what percentage of every dollar they spent on CIP compliance actually
went to cyber security and what percentage went to the paperwork required to
prove compliance. Of course, there is no objective way to measure this, but the
estimates I have received range from 35 percent to 70 percent. Maybe the
average is 50 percent, but let’s assume my small sample is biased downward
and the average is closer to 70 percent. Even this figure means that 30% of
every dollar spent on CIP is only spent to prove compliance, not to improve
cyber security. To phrase it differently, if we could figure out a way to get
the percentage of spending that goes to security up to 90%, we would in
principle have up to a 20% increase in security (budgets) without having to
spend a single additional dollar.
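To make that arithmetic concrete, here is a minimal Python sketch of the math. The dollar amount and the percentage splits are purely illustrative assumptions, not figures from any actual utility’s budget:

# Illustrative only: how a hypothetical CIP budget splits between real
# security work and the paperwork needed to prove compliance.
total_budget = 1_000_000           # assumed annual CIP spend, in dollars
security_share_today = 0.70        # assume 70% goes to actual security work
security_share_target = 0.90       # target share if paperwork were reduced

security_today = total_budget * security_share_today     # $700,000
security_target = total_budget * security_share_target   # $900,000
freed_up = security_target - security_today              # $200,000

# The same total budget now buys 20 percentage points more security
# spending, without a single additional dollar.
print(f"Security spending rises from ${security_today:,.0f} to "
      f"${security_target:,.0f}: ${freed_up:,.0f} freed up for security.")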
I also quoted a respected utility cyber security manager who
asked, when told that virtualization was next up in the new CIP standards
lineup, “Why can’t we just be trusted to do the right thing, rather than have
to spend a huge amount of time documenting our compliance for virtualized
systems?”
That’s really the issue, isn’t it? Why should NERC entities have
to spend so much time and money documenting compliance with prescriptive CIP requirements,
when most other mandatory cybersecurity compliance regimes (PCI, HIPAA, etc.) have
managed to develop requirements that let the entity focus on achieving an
objective, not on taking a set of prescriptive steps to prove their compliance
in every instance?
Of course, the answer to that question is that NERC entities
should not have to spend so much time and money documenting compliance with prescriptive
CIP requirements. How can the requirements be revised (or, more importantly, written differently in the first place) to eliminate this problem?
I didn’t answer that question in the February 2016 post, but
I did in the July
4 post that referenced it. The second half of that post concentrated on the
most important item on the drafting team’s agenda for that meeting: fixing the definition
of “low impact external routable connectivity” (which was at the time affectionately
known as LERC) found in the original version of CIP version 6[i].
LERC (a term that was invented for use with CIP v6) turned
out to be a mistake and a big source of confusion when NERC entities prepared
to comply with v6. Briefly, LERC was intended to be the low impact version of External Routable Connectivity (ERC), a term invented with CIP v5 that applies only in high and medium impact environments.
However, it turned out that the requirement where LERC
applied (CIP-003-6
R2, Attachment 1 Section 3) implicitly allowed LERC to be “broken” by a number
of factors; this led to long arguments between NERC entities and their auditors
about whether LERC was completely broken by a particular measure like a data
diode, or whether it was only partially broken by a measure like a firewall. In the first case, the entity would have been found compliant with the requirement; in the second case, it would not have been.
It was quite interesting to watch the SDT, during the 2 1/2-day
meeting, evolve from trying to make this a workable prescriptive requirement to
realizing that couldn’t happen, since doing so would make both the requirement
and the definition a hopeless mess. At that point, it seems “another observer” (although
I can’t remember who that was) suggested that the SDT should “rewrite the
requirement so that it simply stated that an entity with LERC needed to take
steps to mitigate the risk posed by LERC and discuss different options for
doing this in the Guidelines and Technical Basis section.”
In other words, the other observer realized it was a huge waste of time to torture the requirement until it yielded a fixed set of measures that the entity could take to “break” LERC (and thus comply with the prescriptive basis of the requirement). Instead, the person suggested it was better simply to require an entity with LERC to mitigate the risk it posed, and to discuss different options for doing so in the Guidelines and Technical Basis section.[ii]
I found this to be quite interesting, because the other
observer admitted it would never be possible to treat this as a purely prescriptive
requirement, for which the Responsible Entity would be given an up-or-down
judgment of compliance. Instead, the entity needed to decide what was the best
way for them to mitigate the risk in question; it was up to the auditor to
decide whether the measures the entity took sufficiently mitigated the risk.
Folks, this was heady stuff! Here was a revised NERC CIP requirement
for which the goal of compliance was simply to mitigate the risk addressed by the
requirement, without trying to a) prescribe the steps required to mitigate that risk or b) measure the total risk that was mitigated by the steps the entity took (which would have been a fool’s errand in any case).
In other words, this may have been the first NERC CIP
requirement (and perhaps the first NERC requirement, period) whose objective is
clearly risk mitigation[iii]
and nothing more. (There were already at least two requirements in CIP version 5 that were in fact risk-based: CIP-007 R3 anti-malware and CIP-011 R1 information protection. However, neither of them explicitly mentions risk or risk mitigation.)
Since 2016, other risk-based (which NERC calls “objectives-based”) requirements, and even whole standards (CIP-012, CIP-013 and CIP-014), have been developed and put into effect. Moreover, I am reasonably sure that after prescriptive requirements like CIP-007 R2 and CIP-010 R3 were developed as part of CIP version 5 (in 2011 or 2012, probably), not a single new prescriptive CIP requirement has been developed. In fact, I’m certain that no new prescriptive CIP requirements ever will be developed; there is clearly no more appetite for them in the NERC community.
This leaves open the big questions: how are risk-based CIP requirements being audited today, and how should they be audited? These questions are especially important because any new or revised CIP requirements that will apply to assets in the cloud will have to be risk-based. Will developing procedures for auditing risk-based CIP requirements require changes to the NERC Rules of Procedure? Given that I haven’t found anyone who can tell me exactly what is required to change the RoP, that might not be a lot of fun…
Any opinions expressed in this
blog post are strictly mine and are not necessarily shared by any of the
clients of Tom Alrich LLC. If you would like to comment on what you have
read here, I would love to hear from you. Please email me at tom@tomalrich.com.
My book "Introduction to
SBOM and VEX" is available! For context, see this post.
[i]
That definition was “Direct user-initiated interactive access or a direct
device-to-device connection to a low impact BES Cyber System(s) from a Cyber
Asset outside the asset containing those low impact BES Cyber System(s) via a
bi-directional routable protocol connection.”
[ii]
The “Guidelines and Technical Basis” section of CIP-003 has been renamed “Supplemental
Materials” and is now found at the end of the CIP-003-8 standard (the current
version), starting on page 33. It provides diagrams of ten possible methods the entity might use to mitigate the risk posed by LERC, without saying which of them is the “best” or the “most compliant”.
[iii]
CIP-003-8 R2 Attachment 1 Section 3 doesn’t explicitly mention risk. However, it
requires the entity to “Permit only necessary inbound and outbound electronic
access as determined by the Responsible Entity.” Since the entity is allowed to
determine for itself what constitutes “necessary” access and since any judgment
about whether something is necessary must always consider risk, this makes it a
risk-based requirement in my book.