Wednesday, August 16, 2017

The best press article on NERC CIP ever written

After yesterday's post, someone reminded me about a press article that appeared in Energy and Environment News in January 2016, which I had quoted in full in this post. It is very relevant to what I was writing about yesterday, and goes beyond that.

It is without a doubt the best news article I've ever seen on NERC CIP. Almost every other article I've read makes me cringe, although I'll admit CIP is a very difficult subject to get your arms around. You really need to be willing to sit down and devote a lot of time to learning about CIP and its whole context. Pete Behr is one of the few reporters who have done that; in fact, I'd say he's the only one.

Tuesday, August 15, 2017

The Horror! The Horror!


There has been a lot of talk in NERC circles lately about guidance for the CIP standards. This is largely driven by NERC’s recent efforts to “clarify” the status of the many types of guidance they have put out about CIP v5 and v6, and now CIP-013. I would like to give you my own account of those guidance efforts.

In the long-ago days of CIP v3, concern began to grow about inconsistency between auditors and between regions in the interpretation of the CIP requirements. This led to NERC’s creating two series of documents: Compliance Application Notices (CANs) and Compliance Application Reports (CARs).  NERC thought – along with most of the industry, to be sure – that simply having NERC state its opinion on certain controversial topics would lead the regional auditors to put aside their differences and all start singing from the same page in the hymnal.

Unfortunately, things didn’t work out too well for the CANs and CARs. They were attacked roundly from many sides, and most importantly the auditors saw no reason to feel bound by what these documents said. After all, where was the basis for them in the NERC Rules of Procedure? The answer is “Nowhere”. This led to most, although not all, of the CANs and CARs being withdrawn (a few of the less controversial ones remain technically in force).

This was considered to be a good learning experience for NERC. People said, “Well, at least NERC will make sure that the next CIP version (which was expected to be numbered v4 at the time) doesn’t have these ambiguities, so there will be no need for these extraordinary measures in the future.” However, I would say that many people in the NERC community today would gladly exchange the huge level of uncertainty in CIP v5 and v6 for the much more modest level of uncertainty in CIP v3 (and coming soon to a NERC Regional Entity near you: CIP-013!). Yes, those were the days…

When FERC issued their Notice of Proposed Rulemaking (NOPR) in April 2013, which said they intended to approve CIP v5 (and would send CIP v4, which had already been approved and was scheduled for implementation in April 2014, to sleep with the fishes), I decided to write a series of posts on v5.

What I found was disturbing. I started out with CIP-002, since that is the first standard. I tried to figure out exactly what CIP-002 R1 (with Attachment 1) required the entity to do, and I came to a dead end: the logic broke down so completely that there was no way to go forward without taking a big leap of faith. I went on to write probably 100-150 more posts over the next 2-3 years, cataloging a wide range of problems with CIP v5, especially having to do with CIP-002 and its associated definitions.

At this point I started wondering how these problems could be fixed. My first hope was for FERC – when they actually approved v5 – to simultaneously order NERC to fix the problems, or at least the really fundamental ones in CIP-002-5.1 R1 (since that is the foundation of the rest of the current CIP standards).

However, when FERC approved CIP v5 in Order 791 in November 2013, they broke my heart by not telling NERC to address any of these problems. And the next month, at a NERC CIPC meeting in Atlanta, I asked a highly-placed NERC staff member whether NERC would of its own accord include this problem in the Standards Authorization Request (SAR) that would guide the drafting team for CIP v6[i]. His answer was as concise as possible: “No”.

This was a very disappointing answer, since I believed it meant there was now no way to truly fix the problems with CIP v5. I believed this (and still do!) because the NERC Rules of Procedure allow no other mechanism to address problems with a standard than to write a SAR and convene a Standards Drafting Team to revise the standards. Yes, this is a very time consuming process – especially given the magnitude of the problems in CIP v5 – but it is the only way to fix problems, rather than simply attempt to paper them over.

However, life goes on. The fact that there were a lot of problems with CIP v5 didn’t mean that NERC entities didn’t have to comply on April 1, 2016 (the original compliance date) – they still had to do that. My attention then turned to the next question: What would NERC do to at least mitigate these interpretation problems? I first asked this question in this post, and you could say that each of the next 100 posts asked the same question.

I won’t reiterate for you all the many twists and turns of NERC’s admittedly well-intended efforts to provide guidance on complying with CIP v5. At first the Guidelines and Technical Basis were going to do the trick, then the RSAWs, then the CIP v5 Implementation Study, then the FAQs, then the Lessons Learned, and finally the Memoranda (I’m probably missing three or four things in this list and I know they overlapped, so the order isn’t at all hard and fast).

Each of these different efforts was touted by NERC at one point as being the final answer to the ambiguities of CIP v5, yet each of them was ultimately abandoned. What finally brought this process to an end was the Memoranda, which caused huge contention and were withdrawn in spectacular fashion at a meeting on July 1, 2015.

At that point, NERC seemed to me to have raised the white flag and admitted that there was no definitive way – other than by writing a SAR and convening a new SDT – to address problems with standards; they said they would do exactly this (and that team is still working today). They also seemed to be pointing toward a more ecumenical guidance process where other groups could also provide guidance and NERC would publish those documents that it believed had merit. And here’s the kicker: It seemed they were finally admitting that all credible guidance, from whatever source, should be given consideration by both entities and auditors.

But there was another implication to what NERC said: that in the case of ambiguity, it is ultimately up to the entity to decide what the CIP v5 requirements and definitions mean. Because if a) the standards are ambiguous (which NERC admitted) and b) NERC can’t provide definitive guidance (by which I mean guidance that the auditors are bound to follow in their audits), then there really is no 100% right or wrong way to comply with a CIP requirement.

And here’s where “Roll your own” comes in. In September 2014, I wrote the first in what turned out to be a series of posts on how NERC entities were dealing with ambiguity in CIP v5. That post described how one entity had decided they couldn’t wait for NERC to come out with definitive guidance on v5 – specifically, on what “programmable” means in the Cyber Asset definition – and had simply developed their own guidance. Just as importantly, they had documented what they had done. The person I talked with argued that, if an auditor three years from now disagrees with the definition they came up with, they will simply show him or her the documentation of how they arrived at this definition, including the fact that they reviewed all available guidance before doing this.

This was a turning point for me, because in the almost three years since I wrote that post it has become completely clear to me, as well as to almost all of the rest of the NERC community (including entities and auditors), that this is the only way to comply with CIP v5 and v6: you simply have to get out your plywood and nails and patch over whatever logical chasms you come across, so that you can cross them and get on with compliance. But the key is documenting what you did; I hope you all did that (at least if you have High or Medium impact assets), but even if you didn’t, it’s not too late to do so.

Since July 2015, NERC has more or less adhered to what they said that month. They have convened an SDT to address at least some of the problems with CIP v5[ii], and they have moved to a guidance framework that allows a number of organizations to develop guidance and have it “approved” by NERC. However, there is one way in which NERC seems to be relapsing into its old mindset: It once again seems to believe that it can develop guidance (or approve particular guidance developed by others) that is better than anybody else’s guidance, and therefore will be given some sort of “priority” by the auditors when they audit. I believe the current idea is that “implementation guidance” written by the SDT that developed a standard should and will be given extra attention, both by entities and auditors.

But don’t believe it. Let me repeat, in case you weren’t paying attention earlier:

  1. No CIP guidance of any kind, whether written by a NERC SDT, the NERC Board of Trustees, Thomas Jefferson, Baha'u'llah, Saint Paul, or the Dalai Lama, has any greater validity than any other guidance. In particular, the auditors aren’t bound to follow any particular guidance.
  2. However, you should consider all available guidance as you do the only thing you can do when faced with an ambiguous requirement or missing definition: decide for yourself the best approach, and document how you came to that conclusion (for an alternative and more far-reaching approach than “Roll your own”, see this 2014 post about an article by Lew Folkerth of RF).

Of course, now we have CIP-013 coming up, and that presents a whole different set of guidance issues…

The Horror!


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] The v6 SAR only included the four things FERC had mandated in Order 791. None of them were fixes to the numerous wording problems I and others had found in v5 thus far.

[ii] Although, as I will say in an upcoming post, I don’t believe that SDT will ever address everything that is on its plate. And I also don’t think, absent new FERC orders, there will be any further changes or updates in NERC CIP – unless the standards are completely rewritten from scratch.

Wednesday, August 9, 2017

A Good Idea


My most recent post lamented that every NERC entity subject to CIP-013 (i.e. those with High or Medium impact assets) has to identify, every 15 months, new supply chain security threats and mitigation measures, and incorporate the relevant ones into its supply chain cyber security risk management plan, per CIP-013 R3. I pointed out (very astutely, I might add) that it didn’t make sense for each entity to have to review the same information and draw the same conclusions. Why couldn’t there just be one body that did this for all NERC entities (although the entities would be free to add to the list of new threats and mitigations provided by this body)?

My answer to that question asserted that there is no provision in the NERC Compliance Monitoring and Enforcement Program (CMEP) for this; therefore, NERC entities are doomed to comply with R3 completely on their own. I then went on to point out that this is a general problem for CIP: there is simply no way to incorporate new threats into CIP and require entities to comply with them, other than writing a Standards Authorization Request (SAR), convening a Standards Drafting Team, going through 3 or 4 NERC ballots, submitting the new or revised standard to FERC, waiting for them to approve it, etc. At the most optimistic, that’s a 3-year process, but I will soon write a post that asserts that the window has closed for any future modifications to CIP, except modifications ordered by FERC (there are various reasons for this, but it is primarily because the industry has been exhausted by all the interpretation issues with CIP v5 – which will never be resolved – and isn’t exactly looking forward to having a bunch of new ambiguous standards dumped on their table).

However, an auditor emailed me to say that he thought there were at least 4 existing organizations that could fulfill this role. When I replied that the problem wasn’t that no organization could do this but that there was no provision in CMEP allowing them to do so, he pointed out to me that “The Standard guidance suggests that entities need to review actionable information to identify needed changes to their plans.  No one said where that actionable information has to come from.”

And this makes sense to me. For the purposes of CIP-013 R3 compliance, I believe it would be fine if some third-party organization, such as one of the trade associations (though not limited to them), committed to doing this for all NERC entities. That is, they would continually look for new supply chain security threats and mitigation measures and publish these for the whole NERC community (and if an organization just wanted to do this for its members, then hopefully other organizations would do the same for theirs). Any takers for this? I won’t name names, but I can think of at least a couple of organizations that would be ideal for this.

However, my larger point in the post was that a procedure like this is really needed for all cyber security threats to BES Cyber Systems, not just supply chain threats in CIP-013. So it would be nice if there were a body that would regularly (or even continually) review all new cyber threats as well as all new mitigations to cyber threats, identify those that are relevant to the electric power industry, and publish these for the industry (perhaps on a need-to-know basis). If CIP were rewritten along the lines of the six principles in the post I referenced above, then NERC entities (probably above a certain size threshold) would have to get an assessment based on the threats on the current list, then mitigate those threats.

But that can’t happen now, given the current CIP standards and CMEP. When a new threat arises now, like phishing or ransomware, the only “legal” way to address it, according to the NERC Rules of Procedure, is to go through the SAR process I described above. Obviously, in the case of phishing and ransomware (both of which have been threats for years), nobody has even suggested a SAR, and as I already said, I don’t think there will be any more changes to CIP that aren’t FERC-ordered. So phishing and ransomware will never be addressed within the current CIP framework (despite the fact that the Ukraine attacks all started through phishing). This also applies to cloud threats, and to many more current and future ones. I believe that none of these will ever be addressed, given the current CIP-002 through -011 wording, which has no provision for addressing new threats, as CIP-013 R3 does.

But in my ideal world, which I described in this post (the last few paragraphs, although to understand them well you need to read the whole thing), CIP would be totally rewritten and CMEP would be revised[i], allowing threats and mitigations to be continually updated without having to revise the standard itself. If you look at number 3 of the six principles I list in that post, you’ll see it calls for the entity to continually update its list of threats; all the threats on the list have to be addressed in some way, although on a risk-adjusted basis (so threats that pose less risk would require less mitigation work and might in some circumstances be completely ignored if they really don’t apply to the particular entity). This list could – and should – be maintained by a central industry body, although the entity would be free to add other threats to the list if they felt this was warranted.

So this is where I see an industry body – which could well be one of the trade associations, etc. – being able to finally solve the problem caused by the fact that CIP doesn’t have a good way to respond to new threats (other than CIP-013, of course). But this can’t happen until CIP is completely rewritten and CMEP is revised. As I said in the last post, I don’t think NERC is likely to do either of these any time soon, which is why I’m pessimistic that FERC and NERC will be allowed to keep responsibility for cyber regulation of the power grid much longer. But I’ve been wrong before!


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] Although maybe there would need to be two NERC CMEPs. The existing one would be for the 693 standards, and the new one would be for CIP. If you look at the list of six principles for the “new CIP” listed in the post just referenced and you try to figure out how the new CIP standard would be audited, I think you’ll agree that it would be almost impossible to do that with the existing CMEP. In fact, I think auditing CIP-013 and CIP-012 will be very challenging with the existing CMEP, since these are much more like what I would like to see in the new CIP. But even the existing CIP-002 through CIP-011 don’t fit in very well with CMEP, which is one of the reasons there are so many continuing problems.

Thursday, August 3, 2017

CIP-013 R3


I intend to start writing a lot about CIP-013 in the coming few months. There are two reasons for this:

First, the standard has recently made great strides toward coming into effect. With the third and final ballot passing, the chances are just about 100% that the NERC Board of Trustees will approve CIP-013 (and CIP-005-6 and CIP-010-3) this month. This will then go to FERC. A day ago, I would have pointed out sarcastically that FERC doesn’t have a quorum and may not for a while, so the standard might well be returned to NERC with the notice “Nobody at this address”.

However, just a couple hours ago the US Senate confirmed two new Commissioners, meaning that soon FERC will have a quorum. Given the huge backlog that FERC has (including CIP-003-7, which includes the revised “LERC” requirement), I’m betting it may take up to a year for them to approve CIP-013. And there’s still a small chance they will simply remand it to NERC (or even kill Order 829 altogether). But my guess is they will approve it.

Second, CIP-013 is a very interesting standard. Complying with it is completely different from complying with any of the previous CIP standards; in fact, it’s completely different from any previous NERC standard (although CIP-014 comes closest to it). The good news is that it is (almost) entirely an objectives-based standard, which is what I (and many others) have been saying for some time is how all of the CIP standards should be written (there are some objectives-based requirements in the existing CIP standards, like CIP-007 R3 and CIP-010 R4. But none of the standards themselves are objectives-based. This fact is actually quite significant, as I’ll explain in later posts). This means that compliance with the standard is much more like what you would do if your organization mandated that you address cyber risks in your supply chain and gave you a healthy pot of resources to do it with: you would rank the threats by the degree of risk they pose, and allocate your funds for remediation on that basis.
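
To make that last idea concrete, here is a minimal sketch in Python of the kind of informal risk ranking an entity might do: score each supply chain threat by estimated likelihood and impact, sort by the resulting score, and spend a fixed mitigation budget from the top down. The threat names, scores, and costs are entirely hypothetical, and CIP-013 doesn't prescribe any particular scoring method, so treat this as an illustration of the risk-based mindset rather than a compliance tool.

    # Hypothetical illustration only: rank supply chain threats by a simple
    # likelihood x impact score and allocate a fixed mitigation budget from
    # the highest-risk threat down.
    threats = [
        # (name, likelihood 1-5, impact 1-5, estimated mitigation cost)
        ("Compromised vendor patch", 3, 5, 40000),
        ("Counterfeit hardware component", 2, 4, 25000),
        ("Vendor remote access abuse", 4, 4, 30000),
        ("Weak vendor personnel vetting", 2, 2, 10000),
    ]

    budget = 75000
    ranked = sorted(threats, key=lambda t: t[1] * t[2], reverse=True)

    for name, likelihood, impact, cost in ranked:
        score = likelihood * impact
        if cost <= budget:
            budget -= cost
            print(f"Mitigate '{name}' (risk {score}); remaining budget {budget}")
        else:
            print(f"Defer '{name}' (risk {score}) and document the residual risk")

A real plan would of course use whatever scoring and documentation approach the entity decides on and can defend to an auditor; the point is only that the money follows the risk.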

That was the good news. The bad news is that auditing CIP-013 is also very different, and the auditors are still going to have to follow all the provisions in NERC’s Compliance Monitoring and Enforcement Program (CMEP) and Rules of Procedure; these two documents are completely focused on prescriptive, non-risk-based standards.[i] So it’s very much an open question whether the auditors will be able to completely change their style of auditing while still paying obeisance to those two documents.

In any case, I will be writing a lot about CIP-013, and here’s the first in this series. What I want to discuss now is – as those of you who are super-alert may have noticed from the title – CIP-013 Requirement 3. If all you do is read the requirement as it stands in the final draft of CIP-013, you might be excused for thinking it’s fairly innocuous. It reads “Each Responsible Entity shall review and obtain CIP Senior Manager or delegate approval of its supply chain cyber security risk management plan(s) specified in Requirement R1 at least once every 15 calendar months.”

You might think, “Hey, this one’s a piece of cake. All I have to do is stand outside some suit’s door for five minutes once every 15 months to get them to sign a piece of paper. I wish everything else in CIP were this easy.” But you’re wrong about that. This requirement reveals only the tip of the iceberg. To understand why, I’ll quote the first draft of CIP-013 (you remember that? The one that got about nine percent approval?), where this requirement appeared as R2, not R3:

“R2. Each Responsible Entity shall review and update, as necessary, its supply chain cyber security risk management plan(s) specified in Requirement R1 at least once every 15 calendar months, which shall include:
2.1. Evaluation of revisions, if any, to address applicable new supply chain security risks and mitigation measures; and
2.2. Obtaining CIP Senior Manager or delegate approval.”

As you can see, the only real difference between the two drafts is part 2.1 in the first draft. Let’s unpack what it says:

  1. There may be new supply chain security risks and mitigation measures that have appeared since your plan was last approved.
  2. You need to consider these – and my guess is auditors will want some sort of assurance that you at least considered all new risks and mitigation measures, in some way or another. They won’t let you get away with saying you considered just every other threat, or just all those threats that begin with letters A through G. There isn’t really a good way to limit what you have to consider.
  3. If you find any new threats or mitigation measures (and guess what? In the world of cyber security, there are new threats all the time. Fortunately, there are new mitigation measures all the time as well), you would be well advised to modify your plan to address them.
  4. And since you’re not treating all the vendors and systems the same (remember, this is a risk-based standard), you really need to look, at least in principle, at how these new threats and mitigation measures apply to each BCS you purchase, and to each piece of software that goes on a BCS, as well as to each vendor you purchase these from (see the sketch after this list for one way this might be tracked).
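
To picture what tracking this might look like, here is a minimal sketch in Python of a review record that maps each newly identified threat or mitigation measure to the vendors or system types it was evaluated against, along with the disposition the entity decided on. The vendor names, items, and field names are all hypothetical; nothing in the requirement or the Implementation Guidance prescribes this structure. The point is simply that each new item gets documented as having been considered.

    from datetime import date

    # Hypothetical record of one 15-month plan review: each new threat or
    # mitigation measure found since the last review is logged with the
    # vendors/system types it was evaluated against and the outcome.
    review = {
        "review_date": date(2020, 3, 1),
        "items": [
            {
                "item": "Vendor build-server compromise reported in trade press",
                "type": "threat",
                "evaluated_against": ["EMS vendor", "Protective relay vendor"],
                "disposition": "Added a software integrity verification step to the plan",
            },
            {
                "item": "New vendor-operated patch-signing service",
                "type": "mitigation",
                "evaluated_against": ["EMS vendor"],
                "disposition": "No change; existing hash verification already covers this",
            },
        ],
    }

    # Simple completeness check: every logged item has a documented disposition.
    assert all(entry["disposition"] for entry in review["items"])
    for entry in review["items"]:
        print(f"{entry['type']}: {entry['item']} -> {entry['disposition']}")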

Does this strike any of you as a lot of work? Some people who voted no on the first ballot commented that this is really an open-ended commitment. How many news articles, blog posts, vendor notices, threat intel feeds, Tweets, etc. do you need to peruse in order to be able to say that you have at least considered all threats and mitigation measures?

You may ask me, “Why are you bringing this up? After all, the first draft of CIP-013 was voted down; what finally passed (the third draft) says nothing about addressing new risks or mitigation measures.” And I will agree with you that this directive is no longer in CIP-013 R3. However, it’s not really gone. To see why, you need look no further than the Implementation Guidance written by the SDT (and here I’m quoting from the revised version that was recently circulated by the SDT – page 9, in the discussion of R3):

“A team of subject matter experts from across the organization representing appropriate business operations, security architecture, information communications and technology, supply chain, compliance, legal, etc. reviews the supply chain cyber security risk management plan at least once every 15 calendar months to reassess for any changes needed. Sources of information for changes include, but are not limited to:
- Requirements or guidelines from regulatory agencies
- Industry best practices and guidance that improve supply chain cyber security risk management controls (e.g. NERC, DOE, DHS, ICS-CERT, Canadian Cyber Incident Response Center (CCIRC), and NIST).
- Mitigating controls to address new and emerging supply chain-related cyber security concerns and vulnerabilities
- Internal organizational continuous improvement feedback regarding identified deficiencies, opportunities for improvement, and lessons learned.”

Doesn’t this look a lot like the SDT is saying you still need to do what was in Section 2.1 of R3 in the first draft? I’ll answer that question for you (since I didn’t hear anybody say anything): Yes, it does. You might then ask, “Well, how can they remove part of a requirement from one draft to the next, but still say in their guidance that you need to effectively comply with the first draft?” I really don’t know what to say to that, except “They did it.”

And I wouldn’t suggest that you tell your auditor that you're ignoring what is in the Implementation Guidance, since in this case it goes beyond what the requirement says. Yes, I know that no NERC guidance is supposed to go beyond what the strict wording of the requirement says – that’s perfectly true. But it also may be perfectly true that your auditor’s baby is ugly, and I don’t recommend you tell him that either.

To be honest (every now and then I try to be honest. It’s a good exercise), I think this requirement is a step toward addressing one of the biggest problems with the current CIP standards: the only way they can be made to address new threats is by someone writing a SAR (which NERC must approve), a drafting team being constituted, a new standard or requirement being drafted, several NERC ballots, NERC BoT approval, and finally FERC approval. This process always takes years, and I’ll be writing in a new post shortly that I’m now convinced there will never be any new additions to the CIP standards unless FERC explicitly orders them. In other words, the threats currently not addressed by CIP, including phishing, ransomware, cloud-based threats and more, will never be addressed unless FERC issues a new order (and given the political makeup of the new FERC, it’s not likely these Commissioners will be eagerly looking for new ways to extend any regulations).

So the fact that the SDT decided (with prodding from FERC) to provide some mechanism for addressing new threats to supply chain security on an ongoing basis is a good thing. What’s not a good thing is this: Why should each NERC entity have to go through the same set of blog posts, news articles, etc. to find out if there are any new supply chain security threats or mitigation measures? Why not have an industry body which does that regularly, and provides the information to all NERC entities?[ii]

The answer is this: There is no mechanism in the NERC Rules of Procedure or CMEP to have such a body. Think about it: This body would essentially be writing new CIP requirements, since entities would have to address these new threats in some way. The current NERC operating environment allows no way to officially incorporate new threats into CIP, within any period much less than 4-5 years.[iii]

And this leads me to my final question: What would need to change in order for CIP to be able to quickly address new threats? Obviously, the existing standards would have to be changed, but more importantly the whole NERC standards environment would need to be changed as well. That is, there would need to be a new CMEP (and perhaps a new RoP), or at least a new CMEP that only applies to cybersecurity standards. That way, there could be a body that continually examines new threats and adds important ones to a list of threats that must be addressed by NERC entities subject to CIP.

This would require NERC to make a huge change. Could they do this? Of course they could. Are they likely to do this? I’d say probably not. So does this mean things will stay as they are forever? Yes it does, until either a) NERC decides to change or b) NERC (and probably FERC) no longer have responsibility for the cyber security of the electric power industry. Currently, I’d say these are about equally probable. What isn’t probable at all is that this situation – the fact that the CIP standards have no effective way to be updated to incorporate new threats – will be allowed to stay in place forever. I believe Congress will ultimately step in if NERC doesn’t do anything about this.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] And before you point out to me that the Reliability Assurance Initiative – now called Risk-Based CMEP – takes risk into account, I want to point out that the risk that RAI deals with is compliance risk – i.e. the risk that your organization won’t comply with a particular requirement. A risk-based standard is one in which the entity is allowed to align their controls with the risk posed by a particular threat – e.g. the risk that Vendor A’s patches will introduce malware into your system, vs the risk that Vendor B’s patches will do so. The requirements in CIP-013 allow that, whereas most of the other CIP requirements don’t.

[ii] Of course, the E-ISAC provides information on new technical threats, but there are a lot more threats that aren’t technical ones, plus new mitigation measures just aren’t in their bailiwick.

[iii] I know there are NERC Alerts, which try to do at least something when a new threat appears – as in the case of the Ukraine attacks. But these mandate nothing more than a report on what the entity is doing about the new threat. While I’m sure most entities will take action on an alert, the alert doesn’t require them to do anything, as a new CIP requirement would.

Thursday, July 27, 2017

Sean McBride Writes a Great White Paper


I have written at least a couple times previously about things Sean McBride has written, including in this post from 2014 (which by the way I just read and found quite interesting, and every bit as relevant now as back then). He has always been one of the leading experts on ICS security and especially ICS vulnerabilities. He founded the firm Critical Intelligence, which is now part of FireEye (although the name is no longer used).

He recently published a white paper that provides a great overview of what he calls “cyber-attacks”, which he defines as attacks that cause physical damage and that originate from “state-nexus” perpetrators. He shows that these have been increasing in frequency and severity, and he argues that this trend will only continue. I recommend you read it!



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Monday, July 24, 2017

GridSecCon 2017


It may seem a little early to toot the horn for NERC’s GridSecCon, since it won’t occur until October 17-20 (this year it will be in St. Paul, MN, a really beautiful city). However, I did notice that the early bird sign-up special of $300 will expire on Friday, July 28.

I have attended four GridSecCons, and after the first one I vowed I wouldn’t miss another. Each year it only seems to get better. So I already had my ticket when I was notified last Friday that I was selected to participate in a panel (on Thursday afternoon) on Supply Chain Security (Sharon Chand of Deloitte will also be speaking, in a panel on Insider Threat on Wednesday morning).

To be succinct, I have said for several years that GridSecCon is the single most important annual gathering of people involved in cyber security for the electric power industry. The presentations are always excellent, but what is most important are the networking opportunities. It sometimes seems as if everybody I would ever want to meet in the industry is at this conference, even though I know that’s not true. But I always meet a lot of interesting people, even ones I had never known about until we met!

If you haven’t yet been to GridSecCon, I highly recommend you try it this year.



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Thursday, July 20, 2017

The Department of Cybersecurity?


Sometimes I read an article online or in the physical paper (yes, I still get those!) that makes me smack my head and say “Of course! This makes perfect sense!” And I like to feel that I would have sooner or later come to the same conclusion – it’s just that the author has already done that for me.

Such was the case when, in the July 12 Wall Street Journal, I read an Op-Ed piece called “America Isn’t Ready for a ‘Cyber 9/11’”. The title was a little misleading, and I almost didn’t read it; I figured it was yet another alarming piece about how the whole grid was going to collapse from a massive cyber attack either tomorrow or the day after tomorrow at the very latest.

However, it wasn’t about that at all. Rather, it was making the point, which I’ve made a couple of times, that the WannaCry and NotPetya attacks (which the article calls Petya – I’ll excuse this minor error) were quite different from most previous attacks in that they weren’t aimed at particular sectors or companies, or even countries (although NotPetya was clearly aimed at Ukraine at first, it didn’t confine itself there for very long).

And the authors draw a perfectly reasonable conclusion from this: If such broad attacks are going to be the norm from now on, perhaps the current state of extremely fragmented cyber security oversight isn’t the best way to protect the country going forward (they state there are at least 11 Federal agencies with jurisdiction over some aspect of cyber, but they also seem to have left out two more that you may be familiar with: NERC and FERC. Raise your hand if you’ve ever heard of them).

And it doesn’t take long for the authors to come to a conclusion that I would have considered ridiculous a few months ago, but which now seems to me to make much more sense: There should be a Federal Department of Cybersecurity. When a big, broad attack happens, why have a bunch of different agencies that need to figure out for themselves what it’s about and how to deal with it? Why not have one agency responsible for figuring out what’s going on and how to respond, which then has different divisions responsible for different sectors, whose job would be to carry out that response in their sector (this part about the divisions is my extrapolation from their article, which doesn’t itself discuss how the Department would be structured)?

But let’s not stop there. This same Department of Cybersecurity should also be responsible for developing and enforcing cyber security standards - in some cases mandatory, in others voluntary. Again, the sector-level divisions would be responsible for interpreting, evangelizing and enforcing those standards in the different sectors.

Of course, electric power would be one of those divisions. However, I think it would be part of a “super division” of what might be called “process infrastructure” (which is a terrible name, but all I can think of at this hour). This would include all of the critical infrastructure sectors that are responsible for maintaining a particular process. Of course, the process for the power sector is the Bulk Electric System. For oil refineries, it’s the production of petroleum byproducts. For natural gas pipelines, it’s the interstate distribution of natural gas. A couple other sectors in this group would be chemical plants and petroleum pipelines.

Within this “super division”, there would be a core group of ICS security experts that would address threats to ICS assets, and formulate both responses to attacks and standards for maintaining cyber security (again, mandatory or not, perhaps depending on the criticality of the sector. Unfortunately – you guessed it – the electric sector is about as critical as it gets). Then there would be groups of “implementers” and assessors (which, truth be told, would also be auditors) who would carry out the responses and interpret/enforce the standards in the particular sectors.

And what standards would they enforce? This may surprise you greatly, but I don’t actually advocate that the NERC CIP standards be generalized and made applicable to all critical infrastructure sectors. What do I propose instead? A very different set of standards (or really, just one standard with one requirement), based on a very different compliance regime from the NERC one (again, this probably surprises you greatly). I have currently identified six principles that would form the basis for that regime, which I listed (without any embellishment) in this post.

You’ll notice these principles are general, applicable to any process industry. When I wrote the above post, I was thinking that each of those industries (gas pipelines, electric power, etc) would implement the same principles, but tailored to their industry (i.e. their particular infrastructure, like the BES when you’re talking about electric power). I now realize that it makes a lot more sense to have a single group of ICS experts that handles the functions that are common to all the process industries, with sector specialists who apply them to the different sectors; i.e. the structure I described above.

Will this happen? I’m not positive there will be a Department of Cybersecurity, although I think that would be the best solution. An intermediate solution would be to combine regulation of the process infrastructure sectors into this “super division”, and have either the Department of Energy or the Department of Homeland Security “house” it. This wouldn’t provide the synergies with industries like banking and healthcare that a Dept. of Cyber would provide, but it might be a lot easier to implement and would at least unify the process infrastructure industries.

One thing I am fairly sure of: In three years, neither NERC nor FERC will be responsible for regulation of the cybersecurity of the power grid. This is not because they’ve made mistakes as the regulators (although they have – again, I realize it will surprise you greatly to hear me say this), but because the logic of combining the sectors in this way is too compelling. At that time, you’ll look back at your happy days of laboring in the salt mines of NERC CIP and wonder what you were thinking.



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Monday, July 17, 2017

Device Drivers, Part II


In my last post, I discovered – to my surprise, I’ll admit – that auditors are expecting NERC entities to patch device drivers to comply with CIP-007-6 R2, despite the fact that this won’t always be easy. However, there’s a related question: are they also expecting entities to include device drivers in the configuration baselines they develop for CIP-010-2 R1 compliance?

The answer to that question is yes. And here I can quote Lew Folkerth, an ex-auditor who now leads CIP outreach at RF (and a major contributor to what must have been at least 20-25 of my posts, although it seems more like 500). He says:

“Device drivers that are not included in the operating system are software that should be identified in the baselines and patched accordingly. The entities I’m familiar with handle these device drivers as part of normal patch management. You identify any drivers in use that are not part of the OS as separate software in the baseline. You identify a patch source for the driver, usually the card or device manufacturer. Then you monitor that patch source on a monthly basis.

“Some vendors will track patch sources for device drivers as part of their automated service. I’m sure pricing will vary.”

Note that Lew limits this to drivers that aren’t included in an OS. He says this in part because the Guidelines and Technical Basis[i] for CIP-010-2 R1 includes the sentence “The SDT does not intend for notepad, calculator, DLL, device drivers, or other applications included in an operating system package as commercially available or open-source application software to be included.”[ii]

But here’s another question: Does this “exemption” for drivers that are part of the OS (and also open-source application software) apply to CIP-007 R2 as well? Lew’s answer – as well as that of an experienced auditor from another region – is a definite yes. So I may have been wrong in assuming that having to cover device drivers under R2 would present a huge burden to NERC entities. It seems that what they need to track are primarily hardware device drivers not included with an OS – and this should be a much more manageable quantity.[iii]
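
For what it's worth, here is a minimal sketch in Python of how a separately installed driver might appear as its own item in a CIP-010 baseline, with a patch source recorded so that it gets monitored on the normal monthly cycle Lew describes. The device names, categories, and field names are my own inventions, not anything from the standard or from any particular tool.

    # Hypothetical baseline for one Cyber Asset: the NIC driver is not part of
    # the OS package, so it is listed as separate software with its own patch
    # source, per the approach described above.
    baseline = [
        {"item": "Windows Server 2016", "category": "operating_system",
         "patch_source": "Microsoft"},
        {"item": "Acme X550 NIC driver 3.2.1", "category": "driver_non_os",
         "patch_source": "Server OEM support site"},
        {"item": "HMI application 9.4", "category": "application",
         "patch_source": "HMI vendor"},
    ]

    # The separately baselined driver shows up in the list of patch sources
    # that have to be checked on the regular patch-management cycle.
    sources_to_monitor = sorted({entry["patch_source"] for entry in baseline})
    print(sources_to_monitor)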

Postscript
The quote below is from an email that one of the auditors mentioned in my previous post sent me while I was writing the post. It expands on his thinking regarding device drivers in CIP-007 R2:

“CIP-007-6 / R2 requires a patch management process for tracking, evaluating, and installing cyber security patches for applicable Cyber Assets.  Nowhere in that Requirement statement is there an exclusion for device drivers.  Device drivers are software and they are just as vulnerable and just as exploitable as any other installed software.

“Device drivers come in two flavors.  Many, but not all, are included with the operating system (e.g., Microsoft Windows 7).  If you are tracking patches for the operating system, then you will also encounter patches for the drivers included in the operating system.

“Some drivers, especially network interface drivers, are separately installed.  Just as they need to be separately baselined because they are not included with the Operating System, they need to be tracked, evaluated, and patched like any other software.  Typically, the correct patch source for a driver patch is not the manufacturer of the peripheral device; rather it is the OEM manufacturer of the Cyber Asset the peripheral is installed in.  A very good example of device drivers you do not want to pick up from Microsoft are drivers for your display subsystem and your networking subsystem.  You want to get drivers that are certified by HP, Dell, etc., for the Cyber Asset you acquired from the hardware vendor.

“If a Registered Entity chooses to not install security patches because “it is too hard”, “it takes too many resources”, “don’t think it is necessary”, etc., and they do not mitigate the vulnerability that is not being patched, then they will be awarded a Potential Non-Compliance and may receive “Intentional Non-Compliance” as an aggravating factor when determining the sanction.

“I would remind (your readers) that 80% or more of all successful system compromises result from failure to patch, failure to use effective anti-malware processes, and failure to limit system access to only that needed (both access in general and access permissions for those users so authorized).  If entities are not patching their drivers, then not only are they not compliant, they are not secure.”

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] NERC now may – or again, they may not – be saying that the Guidelines and Technical Basis included with the CIP v5 and v6 standards has a lesser status than Implementation Guidance, like the document developed by the CIP-013 SDT. I believe this ranking of the levels of guidance documentation is meant to be analogous to Dante’s Nine Circles of Hell. I will explore this interesting issue in an upcoming post. Don’t miss it!

[ii] I want to thank a NERC compliance person at a large NERC entity for pointing this sentence out to me.

[iii] On the other hand, I’m not going back one inch from what I said in the last two paragraphs of the previous post: having to comply with mandatory prescriptive requirements like CIP-007 R2 and CIP-010 R1, which can require huge amounts of work and money, inevitably forces NERC entities to spend less on security practices that aren’t part of CIP, such as defenses against phishing and ransomware. No entity that I know of has a blank check to spend every penny of its revenue on CIP compliance. Choices have to be made, and prescriptive standards force the entity to over-allocate resources to what’s required by CIP and under-allocate to anything that isn’t required, no matter how important it might be. Security suffers under this arrangement, compared with one (which I’m gradually proposing) in which what is mandatory is that the entity spend enough money and effort to address the most important security threats it faces.

Sunday, July 16, 2017

Are you Patching Device Drivers?


I’ve known for a while that in theory CIP-007 R2, the patch management requirement, applies to device drivers, and that could potentially pose problems. However, I hadn’t thought about this very much until last week, when a CIP compliance person at a large NERC entity inquired whether I knew if other entities were including device drivers in their CIP-007 R2 programs. In other words, do other NERC entities do the following tasks for device drivers: get an inventory every month of all drivers installed on any device within the ESP; every 35 days, contact the vendor of each driver to determine whether there is a new security patch available; if there is a patch available, download it and determine whether it is applicable; if it is applicable, within another 35 days either install the patch or develop a mitigation plan for the vulnerability(ies) addressed by the patch; and implement that mitigation plan?
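
To lay out that cadence, here is a minimal sketch in Python of the two 35-day clocks just described: checking the patch source at least every 35 days, and then, once a patch has been determined applicable, installing it or completing a mitigation plan within another 35 days. The dates are made up, and this is only my illustration of the timeline, not anything endorsed by NERC or the regions.

    from datetime import date, timedelta

    # Hypothetical dates illustrating the cadence described above.
    last_source_check = date(2017, 6, 1)
    next_check_due = last_source_check + timedelta(days=35)

    determined_applicable = date(2017, 7, 3)   # the new driver patch was found applicable
    act_by = determined_applicable + timedelta(days=35)

    print(f"Next patch-source check due by: {next_check_due}")
    print(f"Install the patch, or complete a mitigation plan, by: {act_by}")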

The entity pointed out that it is very difficult to comply with this requirement for device drivers because they are often made by small, obscure companies, and the system vendors don’t always automatically provide information on all of the drivers included with their systems. Moreover, driver security patches aren’t released as often as patches for other software, meaning it’s certain that in many months there will be no new patch available.

I honestly thought the answer to this question was a no-brainer. I had never heard a single entity complaining about this issue previously (although I’ve heard a lot of complaints about CIP-007 R2 in general). I automatically assumed this was a “Don’t ask, don’t tell” issue, in which there is tacit agreement among NERC entities and the auditors that patching device drivers won’t be discussed in audits.

It turns out I was wrong! I reached out to auditors in two regions, and they both said similar things: 1) patching device drivers is required by CIP-007 R2; 2) it’s also required by good security practices; and 3) any entity that isn’t doing it would be well advised to get cracking on device drivers now, or big penalties loom in the future.

Of course, the fact that the auditors said 1) and 2) doesn’t surprise me; what else could they say? However, I was surprised at 3), since I’m sure there are other entities (including large ones) that aren’t patching device drivers now. But if auditors from two regions – both of whom I have great respect for – say they won’t take any excuses for not patching device drivers, then that constitutes solid evidence that NERC entities shouldn’t test them. QED.

However, I think this illustrates a much larger point. First, let’s assume that the entity that contacted me about this was right, and having to go through the CIP-007 R2 process every month for every device driver installed in the ESP is a) very burdensome and b) not necessary, since drivers aren’t patched very often. Wouldn’t it be nice if a NERC entity could balance the need to patch device drivers against all of the other things it needs to do on a regular basis to maintain good cyber security, and say, “Well, given that new device driver patches aren’t likely to come out most months, we’ll put them on a schedule of just checking for availability every six months. This will allow us time to address some other very urgent cyber issues, such as the need to develop and implement a strategy for preventing ransomware infections”?

Of course, we know what the answer from any NERC auditor will be to this suggestion: “The requirement is the requirement. If you don’t follow the requirement, you will be fined.” Is this because all NERC auditors are big meanies? No, it isn’t. It’s because they have to follow the NERC practices outlined in the Rules of Procedure and especially the Compliance Monitoring and Enforcement Program (CMEP). And those practices say that if an entity makes a decision not to comply with part of a requirement, they’ll get the book thrown at them.

What this means is that, because the NERC CIP standards are often prescriptive and are always enforced in a prescriptive fashion (since that’s what CMEP is based on), NERC entities, in deciding how to spend their allocated cyber security dollars, will spend money first on complying fully with the CIP requirements. This will happen no matter how expensive that is, even though – in some cases – entities will end up spending much more money on tasks like patching device drivers (because CIP requires it) than on items like preparing for ransomware attacks (because CIP doesn’t), even if they believe their risk from ransomware is much greater than their risk from an attack on a device driver.[i]

What can be done to change this situation? At one point, I thought just rewriting all of the NERC requirements as non-prescriptive ones would do the trick. But then I realized that this wasn’t enough – the whole NERC compliance regime (especially CMEP) would have to be altered, at least for the CIP standards. But that assumes that NERC is an old dog that can easily learn new tricks, and it can easily make the transition to having one division (the one that audits the Operations and Planning standards) that deals solely with prescriptive standards, and another division (that audits the CIP standards) that takes a very broad, holistic view of the entity’s cyber practices in total, and decides on a risk-informed basis whether the entity is doing a good job of allocating its limited cyber funds to adequately protect the BES. Is this a good assumption?

And even more importantly, maybe the real problem is the fact that cyber regulations currently differ widely from one critical infrastructure sector to another and are enforced by a different body for each sector. Attacks like WannaCry and NotPetya haven’t focused on just one sector – they’ve been wonderfully ecumenical and attacked every sector where there were vulnerabilities. Maybe there should be a single agency regulating cyber security for all critical infrastructure? In fact, there was an Op-Ed piece in the Wall Street Journal last week[ii] that called for a US Department of Cybersecurity. While I would have ridiculed the idea just a few months ago, WannaCry and NotPetya have made me think it is worth considering.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] For a lengthy discussion of this idea, see this post.

[ii] It was called “America Isn’t Ready for a ‘Cyber 9/11’” and appeared on July 12. Since the WSJ’s online service is behind a paywall, I can’t include a link to the article, but you may be able to find it the old-fashioned way – go to your local library! If they still have those anymore, that is.

Sunday, July 9, 2017

Expanding on my Last Post


Two days ago I published a post pointing out what I believe is a significant unintended contradiction between CIP-013, the new supply chain security management standard, and CIP-010 R1.6, one of three new requirement parts that are part of the CIP-013 “package”, which are being balloted with that standard. Moreover, I believe this contradiction could, if not corrected, lead to NERC entities having to expend significant amounts of time and money complying with R1.6 in a way that was never intended by the SDT.

One of the great advantages of writing a blog on NERC CIP, but not actually being part of NERC or one of the regulated entities, is that I get to complain about various problems I find without having to propose any solution. In the last post, this is exactly what I did. However, I did think afterwards about how this problem might be fixed and – as always seems to happen with any question about NERC CIP that I think about for more than a few minutes – I realized there is a Bigger Story behind this problem. And, since I’m a Bigger Story kind of guy, I’ve been pursuing that. Here is what I’ve found so far:

The contradiction at the root of this problem wasn’t in the first draft of CIP-013 (which you can retrieve from the SDT’s web page). In that draft, all new requirements were included in CIP-013 itself, not in other CIP standards. What is now CIP-010 R1.6 was CIP-013 R3 in the first draft, and it read:

“Each Responsible Entity shall implement one or more documented process(es) for verifying the integrity and authenticity of the following software and firmware before being placed in operation on high and medium impact BES Cyber Systems: [Violation Risk Factor: Medium] [Time Horizon: Operations Planning] 3.1. Operating System(s); 3.2. Firmware; 3.3. Commercially available or open-source application software; and 3.4. Patches, updates, and upgrades to 3.1 through 3.3.”
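
As an aside, neither this first-draft R3 nor CIP-010-3 R1.6 prescribes how integrity and authenticity are to be verified. One common approach is to compare a cryptographic hash of the downloaded file against a value the vendor publishes through a separate channel, and to check a digital signature where one is available. Here is a minimal sketch of the hash comparison in Python; the file name and expected hash are placeholders, and this is just an illustration of the concept, not a prescribed method.

    import hashlib

    def sha256_of(path):
        """Compute the SHA-256 digest of a downloaded file."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder values: the expected hash would come from the vendor through
    # a channel separate from the download itself (support portal, signed
    # advisory, etc.).
    expected = "replace-with-vendor-published-sha256"
    actual = sha256_of("ems_patch_2017_07.bin")

    if actual != expected:
        raise SystemExit("Integrity check failed: do not install this patch")
    print("Hash matches the vendor-published value; proceed to the signature check")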

The question I asked about CIP-010-3 R1.6 in the previous post was what it applied to, and the answer was “Every piece of software or patch installed on every Medium or High impact BCS”; there is no provision in CIP-010-3 R1.6 (or in any of the other requirements or requirement parts in CIP-002 through CIP-011) for ranking systems and/or vendors by risk and only performing the required actions for the riskiest of them. Yet, as I discussed in the previous post, for the second official draft of CIP-013 R1 and R2 the Implementation Guidance makes clear that the entity is expected (although not required, of course) in its supply chain cyber security risk management plan to prioritize vendors and/or systems by risk, and then determine appropriate controls based on the risk posed by each vendor or system type.

However, when you ask this same question about CIP-013 R3 from the first draft (just quoted), the answer is very different. Since the “documented processes” referred to in R3 are part of the plan developed in CIP-013 R1, and since that plan is risk-based, this means (in my opinion, of course) that R3 is a risk-based requirement as well, and R3 doesn’t necessarily require the entity to verify integrity and authenticity of every piece of software and patch installed on any High or Medium impact BCS (as CIP-010 R1.6 does). R3 only requires this be done for the more risky vendors and/or systems.

The upshot of all this is that the contradiction I identified in the previous post is a direct (and I’m sure unintentional) result of the fact that CIP-013 R3 in the first draft was moved to CIP-010 R1.6 in the second draft. And now that I think of it, this result was just about inevitable. CIP-010 R1 is a prescriptive requirement (along with CIP-007 R2, it is one of my two poster children for prescriptive requirements); any requirement part added to a prescriptive requirement will itself have to be prescriptive. It will simply have to apply to every piece of software or patch installed on every Medium or High impact BCS, period.

But what if CIP-013 R3 from the first draft had been moved under a non-prescriptive requirement such as CIP-007 R3 (anti-malware)? I actually don’t think it would have made a difference. There is simply no provision in CIP-002 through CIP-011 for the entity to be able to consider risk in how it complies with any of the requirements; they all apply to everything that is listed in the Applicable Systems column, no exceptions. The fact that CIP-013 R3 was moved to one of the other CIP standards means that it could no longer be tied to the entity’s supply chain risk management plan, and thus lost the benefit of that plan’s risk-based approach.

So what are the lessons of this? There are two that I can think of:

  1. Be careful what you wish for. A lot of entities commented on the first draft that they would much prefer that CIP-013 R3 and R4 (the requirement for controls on remote access by vendors) be moved into the other CIP standards, rather than continue to be part of CIP-013. I supported that move, since it seemed to me that CIP-013 itself would be much more coherent if these two requirements were relocated. However, it now seems there were severe unintended consequences of this move.
  2. Any attempt to make the CIP standards entirely non-prescriptive and risk-based (as I would like to see happen) will very likely run up against the fact that the whole NERC standards environment – especially CMEP, which governs how all NERC standards are audited and enforced – has a very difficult time accommodating anything but purely prescriptive requirements[i]. In fact, I would say that the current NERC standards environment will no more accommodate true risk-based requirements than a women’s restroom will accommodate men. I have raised this issue before, and I’m sure I’ll raise it again in the future: in order to really fix the problems with CIP, we will need not only new standards but a completely different auditing process (which requires a new CMEP and perhaps a new Rules of Procedure).

In fact, as I will explain in an upcoming post, I’m now wondering if NERC can ever be flexible enough to make the required changes. Their actions regarding CIP in the near future will probably be key to whether NERC will still retain authority for cyber regulation of the power grid, say 2-3 years from now. I think the time for NERC to make changes is quickly running out.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] You may want to point out that the problem I described in the last post isn’t caused by the “NERC standards environment” but by the particular standard (CIP-010) into which the requirement in question (R3 in the first draft of CIP-013) was inserted. After all, you might point out, if R3 had remained part of CIP-013 this problem wouldn’t have happened. This is a good point, but I also know that at least some people in the NERC ERO Enterprise are quite unhappy with the whole standard, so I wouldn’t say CIP-013 itself is settled yet, even though it is likely to be approved by the NERC ballot body and Board of Trustees.

Friday, July 7, 2017

A Contradiction in CIP-013


I have recently been talking with NERC entities about how they will actually implement CIP-013, the supply chain cyber security risk management standard. This standard is close to final approval by NERC, and it’s almost certain that it – along with the three related new requirement parts added to existing standards: CIP-005 R2.4 and R2.5 and CIP-010 R1.6 – will be delivered to FERC for approval in September. Then there will be a wait for FERC to approve the standard (I’m guessing around 9 months), followed by an 18-month implementation period.

This means that implementation of CIP-013 is still around 3 years away. This might seem like a lot of time, but I’ve come to realize that for most NERC entities – especially the very large ones – coming into compliance with CIP-013 will require a huge effort. This is because it will bring in departments that have never had to deal with CIP – or indeed any other NERC standard – before, especially Supply Chain and Legal. So a number of larger NERC entities are already starting to think about what they will have to do to comply, especially the organizational changes that are needed.

I was preparing for a CIP-013 workshop for one of these large NERC entities recently when I came across a significant contradiction – specifically, a contradiction between how an entity needs to comply with CIP-013-1 R1.2.5 and how it needs to comply with CIP-010-3 R1.6. Depending on how this contradiction is resolved (and whether it’s resolved at all, which isn’t certain), NERC entities might be forced to spend large amounts of time and money every year on compliance with CIP-010 R1.6.

Let’s start with CIP-013 R1. I won’t reprint the entire requirement but only R1.2.5 itself, along with R1 and R1.2, since they “govern” it[i]:

  • R1: “Each Responsible Entity shall develop one or more documented supply chain cyber security risk management plan(s) for high and medium impact BES Cyber Systems. The plan(s) shall include:”
  • R1.2: “One or more process(es) used in procuring BES Cyber Systems that address the following, as applicable:”
  • R1.2.5: “Verification of software integrity and authenticity of all software and patches provided by the vendor for use in the BES Cyber System…”

To summarize these three items:

  1. The entity needs to develop a supply chain cyber security risk management plan(s) for high and medium impact BCS.
  2. The plan needs to include a process(es) to procure BCS that addresses…
  3. Verification of integrity and authenticity of all software and patches provided by the vendor for use in the BES Cyber System.

What I’m interested in here is what this applies to. Specifically, is the plan developed in R1 required to apply to every piece of software on every BES Cyber System, or can the plan be risk-based, meaning that the entity might be able to make exceptions for some systems or software packages deemed to be lower risk – or even for certain vendors for which the risk of loss of integrity or authenticity in software and patches is deemed low?

If you look at the strict wording of R1 itself, you will find nothing that clearly answers this question. But if you look at the CIP-013-1 Implementation Guidance[ii] (and even at FERC Order 829, which ordered NERC to develop this standard), you will see pretty clear evidence that the answer is Yes – the plan can be risk-based.

Specifically, on page 2 of the Implementation Guidance, you’ll find (in the discussion of R1 itself): “To achieve the flexibility needed for supply chain cyber security risk management, Responsible Entities can use a “risk-based approach”. One element of, or approach to, a risk-based cyber security risk management plan is system-based, focusing on specific controls for high and medium impact BES Cyber Systems to address the risks presented in procuring those systems or services for those systems. A risk-based approach could also be vendor-based, focusing on the risks posed by various vendors of its BES Cyber Systems. Entities may combine both of these approaches into their plans. This flexibility is important to account for the varying ‘needs and characteristics of responsible entities and the diversity of BES Cyber System environments, technologies, and risk (FERC Order No. 829 P 44)’.” This passage indicates fairly clearly that the entity’s approach should be “risk-based” and “flexible”, meaning systems and/or vendors of different risk levels can be treated in different ways.
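
To make the idea of a “risk-based” and “flexible” plan a little more concrete, here is a minimal sketch in Python of how an entity might rank vendors and tie a set of procurement controls to each risk tier. Everything in it – the vendor names, the tiers and the controls – is a hypothetical assumption of mine for illustration only; neither CIP-013 nor the Implementation Guidance prescribes any particular scheme.

# Hypothetical sketch only: the vendors, risk tiers and controls below are
# invented for illustration; CIP-013 does not prescribe any particular scheme.

# Each vendor is assigned a risk tier, based on the entity's own criteria
# (for example, how critical the systems that vendor supplies are).
VENDOR_RISK_TIER = {
    "VendorA": "high",    # e.g. supplies EMS software used in Control Centers
    "VendorB": "medium",  # e.g. supplies substation relays
    "VendorC": "low",     # e.g. supplies peripheral equipment
}

# Each tier is mapped to the procurement controls the plan applies to it.
CONTROLS_BY_TIER = {
    "high": [
        "software integrity and authenticity verification",
        "vendor security questionnaire",
        "contract language on vulnerability and incident notification",
    ],
    "medium": [
        "software integrity and authenticity verification",
        "vendor security questionnaire",
    ],
    "low": [
        "vendor security questionnaire",
    ],
}

def controls_for(vendor):
    """Return the controls the plan applies to the given vendor.
    Unknown vendors default to the highest-rigor tier."""
    return CONTROLS_BY_TIER[VENDOR_RISK_TIER.get(vendor, "high")]

if __name__ == "__main__":
    for vendor in sorted(VENDOR_RISK_TIER):
        print(vendor, "->", ", ".join(controls_for(vendor)))

The point of the sketch is simply that the controls applied depend on the assessed risk of the vendor (or system), which is exactly the flexibility the Implementation Guidance describes for R1.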

Now let’s go to CIP-010-3 R1.6, which is part of the “package” that implements CIP-013. For those of you familiar with the other CIP standards, the format will look very familiar: there is a box with three columns. The first column shows that High and Medium impact BCS are the applicable systems. The second column shows the actual requirement part, which reads:

“Prior to a change that deviates from the existing baseline configuration associated with baseline items in Parts 1.1.1, 1.1.2, and 1.1.5, and when the method to do so is available to the Responsible Entity from the software source:

1.6.1.  Verify the identity of the software source; and
1.6.2.  Verify the integrity of the software obtained from the software source.”

This means that the Responsible Entity needs to verify the identity of the software source, and the integrity of the software itself, for every piece of software or patch installed on every Medium or High impact BES Cyber System; and of course this needs to be done every time a new patch is installed. The only exceptions to this rule are when – for whatever reason – the “method to do so” (i.e. to perform the verification) isn’t available.
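
For what it’s worth, here is a minimal sketch in Python of what those two checks might look like if automated: the file names, the vendor-published hash value and the use of a GPG detached signature are all assumptions of mine for illustration – R1.6 itself doesn’t prescribe (or even suggest) any particular verification method.

# Hypothetical sketch only: file names, the published hash and the use of a
# GPG detached signature are illustrative assumptions, not anything CIP-010
# R1.6 prescribes.
import hashlib
import subprocess

def sha256_of(path):
    """Compute the SHA-256 digest of a file (used for the integrity check)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_patch(patch_file, published_sha256, signature_file):
    """Verify integrity (hash matches the vendor-published value) and the
    identity of the source (detached signature verifies against a vendor key
    already imported into the local GPG keyring)."""
    if sha256_of(patch_file) != published_sha256.lower():
        return False
    # 'gpg --verify <sig> <file>' returns a non-zero exit code on failure.
    result = subprocess.run(
        ["gpg", "--verify", signature_file, patch_file],
        capture_output=True,
    )
    return result.returncode == 0

# Example call (hypothetical file names and hash value):
# verify_patch("ems_patch_2017_07.zip",
#              "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
#              "ems_patch_2017_07.zip.asc")

The scripting itself isn’t the burden; the burden is that, under R1.6, some version of these steps (or a manual equivalent) has to be documented and performed for every patch and every piece of software on every applicable system, whenever the method is available.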

So what happened to the “risk-based” and “flexible” approach that the Implementation Guidance advocates for CIP-013 R1? There is no mention of it in CIP-010 R1.6 itself, and there is also no mention of it in the draft Guidance and Technical Basis at the bottom of the standard.[iii] So it seems that, even though the entity can take the risk of systems and vendors into account in the plan it draws up to comply with CIP-013 R1, it can’t do the same when it comes to actually applying patches every month. To apply the patches (and other software), the entity needs to verify the identity and integrity of every patch, for every piece of software installed on a BCS, as long as the means to perform this verification are available. And this needs to be done every month, if patching is done every month.

I have often heard from larger NERC entities that compliance with CIP-007 R2, the patch management requirement, is far more burdensome than compliance with any of the other CIP requirements[iv] – and that is because the patch management steps need to be performed every month, for every piece of software installed on every Medium and High impact BCS[v]. It now seems that these entities will have another step – and another very burdensome one – added to this process for many if not most systems, regardless of the risk posed by the system or the vendor. And this is in spite of whatever risk-based language the entity may have in the supply chain cyber security risk management plan it develops under CIP-013 R1.

I know the SDT didn’t intend to have this contradiction between two parts of the supply chain standards, but I don’t see a way around it. Do you?


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] These quotations come from the SDT’s draft of June 28, the most recent that I’ve seen.

[ii] The Guidance is also being revised, although I doubt the particular passages I quote here will be substantially changed. You can find the last (and so far only) official draft of the Implementation Guidance at the SDT’s page on the NERC website (you can also find the second official drafts of CIP-013-1, CIP-005-6 and CIP-010-3, although not the unofficial drafts developed by the SDT since then).

[iii] The Guidance and Technical Basis for CIP-010 R1.6 wasn’t included in the second draft (i.e. the most recent draft posted on the web site). I’m referring here to the most recent “unofficial” draft version of CIP-010-3, which was mailed out to the SDT’s Plus List recently.

[iv] One medium-sized entity told me that half of all the NERC compliance documentation they produce in their control centers is due to this one requirement.

[v] CIP-007 R2 also applies to Protected Cyber Assets, unlike CIP-010 R1.6, so there is that difference. As my former boss used to say, “Thank God for small favors.”