Thursday, July 27, 2017

Sean McBride Writes a Great White Paper


I have written at least a couple of times previously about things Sean McBride has written, including in this post from 2014 (which, by the way, I just reread and found quite interesting, and every bit as relevant now as it was back then). He has always been one of the leading experts on ICS security, and especially on ICS vulnerabilities. He founded the firm Critical Intelligence, which is now part of FireEye (although the name is no longer used).

He recently published a white paper that provides a great overview of what he calls “cyber-attacks”, which he defines as attacks that cause physical damage and originate from “state-nexus” perpetrators. He shows that these attacks have been increasing in frequency and severity, and he argues that this trend will only continue. I recommend you read it!



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Monday, July 24, 2017

GridSecCon 2017


It may seem a little early to toot the horn for NERC’s GridSecCon, since it won’t occur until October 17-20 (this year it will be in St. Paul, MN, a really beautiful city). However, I did notice that the early-bird sign-up rate of $300 will expire on Friday, July 28.

I have attended four GridSecCons, and after the first one I vowed I wouldn’t miss another. Each year it only seems to get better. So I already had my ticket when I was notified last Friday that I was selected to participate in a panel (on Thursday afternoon) on Supply Chain Security (Sharon Chand of Deloitte will also be speaking, in a panel on Insider Threat on Wednesday morning).

To be succinct, I have said for several years that GridSecCon is the single most important annual gathering of people involved in cyber security for the electric power industry. The presentations are always excellent, but what is most important are the networking opportunities. It sometimes seems as if everybody I would ever want to meet in the industry is at this conference, even though I know that’s not true. But I always meet a lot of interesting people, even ones I had never known about until we met!

If you haven’t yet been to GridSecCon, I highly recommend you try it this year.



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Thursday, July 20, 2017

The Department of Cybersecurity?


Sometimes I read an article online or in the physical paper (yes, I still get those!) that makes me smack my head and say “Of course! This makes perfect sense!” And I like to think that I would have come to the same conclusion sooner or later – it’s just that the author has already done it for me.

Such was the case when, in the July 12 Wall Street Journal, I read an Op-Ed piece called “America isn’t Ready for a ‘Cyber 9/11’”. The title was a little misleading, and I almost didn’t read it; I figured it was yet another alarming piece about how the whole grid was going to collapse from a massive cyber attack either tomorrow or the day after tomorrow at the very latest.

However, it wasn’t about that at all. Rather, it made the point, which I’ve made a couple of times myself, that the WannaCry and NotPetya attacks (the article calls the latter Petya – I’ll excuse this minor error) were quite different from most previous attacks, in that they weren’t aimed at particular sectors or companies, or even countries (although NotPetya was clearly aimed at Ukraine at first, it didn’t confine itself there for very long).

And the authors draw a perfectly reasonable conclusion from this: if such broad attacks are going to be the norm from now on, perhaps the current state of extremely fragmented cyber security oversight isn’t the best way to protect the country going forward. (They state there are at least 11 Federal agencies with jurisdiction over some aspect of cyber, but they seem to have left out two more that you may be familiar with: NERC and FERC. Raise your hand if you’ve ever heard of them.)

And it doesn’t take long for the authors to come to a conclusion that I would have considered ridiculous a few months ago, but which now seems to me to make much more sense: There should be a Federal Department of Cybersecurity. When a big, broad attack happens, why have a bunch of different agencies that need to figure out for themselves what it’s about and how to deal with it? Why not have one agency responsible for figuring out what’s going on and how to respond, which then has different divisions responsible for different sectors, whose job would be to carry out that response in their sector (this part about the divisions is my extrapolation from their article, which doesn’t itself discuss how the Department would be structured)?

But let’s not stop there. This same Department of Cybersecurity should also be responsible for developing and enforcing cyber security standards – in some cases mandatory, in others voluntary. Again, the sector-level divisions would be responsible for interpreting, evangelizing and enforcing those standards in the different sectors.

Of course, electric power would be one of those divisions. However, I think it would be part of a “super division” of what might be called “process infrastructure” (which is a terrible name, but all I can think of at this hour). This would include all of the critical infrastructure sectors that are responsible for maintaining a particular process. Of course, the process for the power sector is the Bulk Electric System. For oil refineries, it’s the production of petroleum byproducts. For natural gas pipelines, it’s the interstate distribution of natural gas. A couple other sectors in this group would be chemical plants and petroleum pipelines.

Within this “super division”, there would be a core group of ICS security experts that would address threats to ICS assets, and formulate both responses to attacks and standards for maintaining cyber security (again, mandatory or not, perhaps depending on the criticality of the sector. Unfortunately – you guessed it – the electric sector is about as critical as it gets). Then there would be groups of “implementers” and assessors (which, truth be told, would also be auditors) who would carry out the responses and interpret/enforce the standards in the particular sectors.

And what would be the standards they would enforce? This may surprise you greatly, but I don’t actually advocate that the NERC CIP standards be generalized and made applicable to all critical infrastructure sectors. What do I propose instead? A very different set of standards (or really, just one standard with one requirement), based on a very different compliance regime than the NERC one (again, this probably surprises you greatly). So far, I have identified six principles that would form the basis for that regime; I listed them (without any embellishment) in this post.

You’ll notice these principles are general, applicable to any process industry. When I wrote the above post, I was thinking that each of those industries (gas pipelines, electric power, etc) would implement the same principles, but tailored to their industry (i.e. their particular infrastructure, like the BES when you’re talking about electric power). I now realize that it makes a lot more sense to have a single group of ICS experts that handles the functions that are common to all the process industries, with sector specialists who apply them to the different sectors; i.e. the structure I described above.

Will this happen? I’m not positive there will be a Department of Cybersecurity, although I think that would be the best solution. An intermediate solution would be to combine regulation of the process infrastructure sectors into this “super division”, and have either the Department of Energy or the Department of Homeland Security “house” it. This wouldn’t provide the synergies with industries like banking and healthcare that a Dept. of Cyber would provide, but it might be a lot easier to implement and would at least unify the process infrastructure industries.

One thing I am fairly sure of: In three years, neither NERC nor FERC will be responsible for regulation of the cybersecurity of the power grid. This is not because they’ve made mistakes as the regulators (although they have – again, I realize it will surprise you greatly to hear me say this), but because the logic of combining the sectors in this way is too compelling. At that time, you’ll look back at your happy days of laboring in the salt mines of NERC CIP and wonder what you were thinking.



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Monday, July 17, 2017

Device Drivers, Part II


In my last post, I discovered – to my surprise, I’ll admit – that auditors are expecting NERC entities to patch device drivers to comply with CIP-007-6 R2, despite the fact that this won’t always be easy. However, there’s a related question: are they also expecting entities to include device drivers in the configuration baselines they develop for CIP-010-2 R1 compliance?

The answer to that question is yes. And here I can quote Lew Folkerth, an ex-auditor who now leads CIP outreach at RF (and a major contributor to what must have been at least 20-25 of my posts, although it seems more like 500). He says:

“Device drivers that are not included in the operating system are software that should be identified in the baselines and patched accordingly. The entities I’m familiar with handle these device drivers as part of normal patch management. You identify any drivers in use that are not part of the OS as separate software in the baseline. You identify a patch source for the driver, usually the card or device manufacturer. Then you monitor that patch source on a monthly basis.

Some vendors will track patch sources for device drivers as part of their automated service. I’m sure pricing will vary.”

Note that Lew limits this to drivers that aren’t included in an O/S. He says this in part because the Guidelines and Technical Basis[i] for CIP-010-2 R1 includes the sentence “The SDT does not intend for notepad, calculator, DLL, device drivers, or other applications included in an operating system package as commercially available or open-source application software to be included.”[ii]

But here’s another question: Does this “exemption” for drivers that are part of the O/S (and also open-source application software) apply to CIP-007 R2 as well? Lew’s answer – as well as that of an experienced auditor from another region – is a definite yes. So I may have been wrong in assuming that having to cover device drivers under R2 would present a huge burden to NERC entities. It seems that what they need to track are primarily hardware device drivers not included with an OS – and this should be a much more manageable quantity.[iii]
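
To make this a bit more concrete, here is a minimal sketch of how non-OS device drivers might be carried as separate baseline items, each with a patch source to be checked on the monthly cycle Lew describes. The structure, field names and example entries are my own illustration, not anything prescribed by the standard or by RF:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DriverBaselineItem:
    """One non-OS device driver, tracked as its own baseline software item."""
    cyber_asset: str     # the Cyber Asset the driver is installed on
    driver_name: str
    version: str
    patch_source: str    # per Lew, usually the card or device manufacturer
    last_checked: date   # when this patch source was last reviewed

def sources_due_for_review(items, today, interval_days=35):
    """Return patch sources that haven't been reviewed within the monitoring interval."""
    return sorted({i.patch_source for i in items
                   if (today - i.last_checked).days >= interval_days})

# Hypothetical entries; real ones would come from the entity's baseline records.
baseline = [
    DriverBaselineItem("HMI-01", "Example NIC driver", "12.3.1",
                       "server OEM support site", date(2017, 6, 1)),
    DriverBaselineItem("EMS-APP-02", "Example RAID controller driver", "6.7.0",
                       "server OEM support site", date(2017, 7, 10)),
]
print(sources_due_for_review(baseline, date(2017, 7, 17)))
```

The point is simply that each non-OS driver becomes one more baseline line item whose patch source has to be identified and then monitored on the same cycle as everything else.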

Postscript
The quote below is from an email that one of the auditors mentioned in my previous post sent me while I was writing the post. It expands on his thinking regarding device drivers in CIP-007 R2:

“CIP-007-6 / R2 requires a patch management process for tracking, evaluating, and installing cyber security patches for applicable Cyber Assets.  Nowhere in that Requirement statement is there an exclusion for device drivers.  Device drivers are software and they are just as vulnerable and just as exploitable as any other installed software.

“Device drivers come in two flavors.  Many, but not all, are included with the operating system (e.g., Microsoft Windows 7).  If you are tracking patches for the operating system, then you will also encounter patches for the drivers included in the operating system.

“Some drivers, especially network interface drivers, are separately installed.  Just as they need to be separately baselined because they are not included with the Operating System, they need to be tracked, evaluated, and patched like any other software.  Typically, the correct patch source for a driver patch is not the manufacturer of the peripheral device; rather it is the OEM manufacturer of the Cyber Asset the peripheral is installed in.  A very good example of device drivers you do not want to pick up from Microsoft are drivers for your display subsystem and your networking subsystem.  You want to get drivers that are certified by HP, Dell, etc., for the Cyber Asset you acquired from the hardware vendor.

“If a Registered Entity chooses to not install security patches because “it is too hard”, “it takes too many resources”, “don’t think it is necessary”, etc., and they do not mitigate the vulnerability that is not being patched, then they will be awarded a Potential Non-Compliance and may receive “Intentional Non-Compliance” as an aggravating factor when determining the sanction.

“I would remind (your readers) that 80% or more of all successful system compromises result from failure to patch, failure to use effective anti-malware processes, and failure to limit system access to only that needed (both access in general and access permissions for those users so authorized).  If entities are not patching their drivers, then not only are they not compliant, they are not secure.”

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] NERC now may – or again, they may not – be saying that the Guidelines and Technical Basis included with the CIP v5 and v6 standards has a lesser status than Implementation Guidance, like the document developed by the CIP-013 SDT. I believe this ranking of the levels of guidance documentation is meant to be analogous to Dante’s Nine Circles of Hell. I will explore this interesting issue in an upcoming post. Don’t miss it!

[ii] I want to thank a NERC compliance person at a large NERC entity for pointing this sentence out to me.

[iii] On the other hand, I’m not going back one inch from what I said in the last two paragraphs of the previous post: having to comply with mandatory prescriptive standards like CIP-007 R2 and CIP-010 R1, which can require huge amounts of work and money, inevitably forces NERC entities to spend less on security practices that aren’t part of CIP, such as defending against phishing and ransomware. No entity that I know of has a blank check to spend every penny of its revenue on CIP compliance. Choices have to be made, and prescriptive standards force the entity to over-allocate resources to what CIP requires and under-allocate to anything it doesn’t require, no matter how important that might be. Security suffers under this arrangement, compared to one (which I’m gradually proposing) in which what is mandatory is that the entity spend a sufficient amount of money and effort addressing the most important security threats it faces.

Sunday, July 16, 2017

Are you Patching Device Drivers?


I’ve known for a while that, in theory, CIP-007 R2, the patch management requirement, applies to device drivers, and that this could potentially pose problems. However, I hadn’t thought about it very much until last week, when a CIP compliance person at a large NERC entity asked whether I knew if other entities were including device drivers in their CIP-007 R2 programs. In other words, do other NERC entities do the following tasks for device drivers: maintain a monthly inventory of all drivers installed on any device within the ESP; every 35 days, contact the vendor of each driver to determine whether a new security patch is available; if a patch is available, download it and determine whether it is applicable; if it is applicable, within another 35 days either install the patch or develop a mitigation plan for the vulnerability(ies) the patch addresses; and implement that mitigation plan?
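
To illustrate the bookkeeping those steps imply, here is a rough sketch of the two 35-day clocks that would run for each driver patch. This is my own illustration of the timeline as I have just described it; the actual requirement language, not this sketch, is what governs:

```python
from datetime import date, timedelta

WINDOW = timedelta(days=35)  # the 35-day interval used in both steps described above

def patch_deadlines(source_checked: date, evaluation_completed: date):
    """Return (next patch source check due, install-or-mitigate due) for one driver patch."""
    next_check_due = source_checked + WINDOW
    action_due = evaluation_completed + WINDOW
    return next_check_due, action_due

# Hypothetical driver patch: source checked July 1, evaluated as applicable July 20
next_check, action = patch_deadlines(date(2017, 7, 1), date(2017, 7, 20))
print(f"Check the patch source again by {next_check}; "
      f"install the patch or have a mitigation plan by {action}")
```

Multiply that by every driver on every Cyber Asset in the ESP, every month, and you can see why the entity asked the question.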

The entity pointed out that it is very difficult to comply with this requirement for device drivers, because they are often made by small, obscure companies, and the system vendors don’t always automatically provide information on all of the drivers included with their systems. Moreover, driver security patches aren’t released as often as patches for other software, meaning it’s certain that in many months there will be no new patch available.

I honestly thought the answer to this question was a no-brainer. I had never heard a single entity complaining about this issue previously (although I’ve heard a lot of complaints about CIP-007 R2 in general). I automatically assumed this was a “Don’t ask, don’t tell” issue, in which there is tacit agreement among NERC entities and the auditors that patching device drivers won’t be discussed in audits.

It turns out I was wrong! I reached out to auditors in two regions, and they both said similar things: 1) Patching device drivers is required by CIP-007 R2; 2) It’s also required by good security practices; and 3) Any entity that isn’t doing it now would be well advised to get cracking on device drivers now, or big penalties loom in the future.

Of course, the fact that the auditors said 1) and 2) doesn’t surprise me; what else could they say? However, I was surprised at 3), since I’m sure there are other entities (including large ones) that aren’t patching device drivers now. But if auditors from two regions – both of whom I have great respect for – say they won’t take any excuses for not patching device drivers, then that constitutes solid evidence that NERC entities shouldn’t test them. QED.

However, I think this illustrates a much larger point. First, let’s assume that the entity that contacted me about this was right, and that having to go through the CIP-007 R2 process every month for every device driver installed in the ESP is a) very burdensome and b) not necessary, since drivers aren’t patched very often. Wouldn’t it be nice if a NERC entity could balance the need to patch device drivers against all of the other things it needs to do on a regular basis to maintain good cyber security, and say “Well, given that new device driver patches aren’t likely to come out most months, we’ll put them on a schedule of checking for availability every six months. This will allow us time to address some other very urgent cyber issues, such as the need to develop and implement a strategy for preventing ransomware infections”?

Of course, we know what the answer from any NERC auditor will be to this suggestion: “The requirement is the requirement. If you don’t follow the requirement, you will be fined.” Is this because all NERC auditors are big meanies? No, it isn’t. It’s because they have to follow the NERC practices outlined in the Rules of Procedure and especially the Compliance Monitoring and Enforcement Program (CMEP). And those practices say that if an entity makes a decision not to comply with part of a requirement, they’ll get the book thrown at them.

What this means is that, because the NERC CIP standards are often prescriptive and are always enforced in a prescriptive fashion (since that’s what CMEP is based on), NERC entities, in deciding how to spend their allocated cyber security dollars, will spend money first on complying fully with the NERC CIP requirements. This will happen no matter how expensive it is, and even though – in some cases – entities will end up spending much more money on tasks like patching device drivers (because they’re required by CIP) than on items like preparing for ransomware attacks (because they’re not), even if they believe their risk from ransomware is much greater than their risk from an attack on a device driver.[i]

What can be done to change this situation? At one point, I thought just rewriting all of the NERC requirements as non-prescriptive ones would do the trick. But then I realized that this wasn’t enough – the whole NERC compliance regime (especially CMEP) would have to be altered, at least for the CIP standards. But that assumes that NERC is an old dog that can easily learn new tricks, and that it can easily make the transition to having one division (the one that audits the Operations and Planning standards) that deals solely with prescriptive standards, and another division (the one that audits the CIP standards) that takes a very broad, holistic view of the entity’s cyber practices as a whole, and decides on a risk-informed basis whether the entity is doing a good job of allocating its limited cyber funds to adequately protect the BES. Is this a good assumption?

And even more importantly, maybe the real problem is that cyber regulations currently differ greatly from sector to sector, and are enforced by a different body for each critical infrastructure sector. Attacks like WannaCry and NotPetya haven’t focused on just one sector – they’ve been wonderfully ecumenical and have attacked every sector where there were vulnerabilities. Maybe there should be a single agency regulating cyber security for all critical infrastructure? In fact, there was an Op-Ed piece in the Wall Street Journal last week[ii] that called for a US Department of Cybersecurity. While I would have ridiculed the idea just a few months ago, WannaCry and NotPetya have made me think it is worth considering.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] For a lengthy discussion of this idea, see this post.

[ii] It was called “America Isn’t Ready for a ‘Cyber 9/11’” and appeared on July 12. Since the WSJ’s online service is behind a paywall, I can’t include a link to the article, but you may be able to find it the old-fashioned way – go to your local library! If they still have those anymore, that is.

Sunday, July 9, 2017

Expanding on my Last Post


Two days ago I published a post pointing out what I believe is a significant unintended contradiction between CIP-013, the new supply chain security management standard, and CIP-010 R1.6, one of three new requirement parts that are part of the CIP-013 “package” and are being balloted with that standard. Moreover, I believe this contradiction could, if not corrected, lead to NERC entities having to expend significant amounts of time and money complying with R1.6 in a way that was never intended by the SDT.

One of the great advantages of writing a blog on NERC CIP, while not actually being part of NERC or one of the regulated entities, is that I get to complain about various problems I find without having to propose any solutions. In the last post, that is exactly what I did. However, I did think afterwards about how this problem might be fixed and – as always seems to happen with any question about NERC CIP that I think about for more than a few minutes – I realized there is a Bigger Story behind this problem. And, since I’m a Bigger Story kind of guy, I’ve been pursuing that. Here is what I’ve found so far:

The contradiction at the root of this problem wasn’t in the first draft of CIP-013 (which you can retrieve from the SDT’s web page). In that draft, all new requirements were included in CIP-013 itself, not in other CIP standards. What is now CIP-010 R1.6 was CIP-013 R3 in the first draft, and it read:

“Each Responsible Entity shall implement one or more documented process(es) for verifying the integrity and authenticity of the following software and firmware before being placed in operation on high and medium impact BES Cyber Systems: [Violation Risk Factor: Medium] [Time Horizon: Operations Planning] 3.1. Operating System(s); 3.2. Firmware; 3.3. Commercially available or open-source application software; and 3.4. Patches, updates, and upgrades to 3.1 through 3.3.”

The question I asked about CIP-010-3 R1.6 in the previous post was what it applied to, and the answer was “Every piece of software or patch installed on every Medium or High impact BCS”; there is no provision in CIP-010-3 R1.6 (or in any of the other requirements or requirement parts in CIP-002 through CIP-011) for ranking risk of systems and/or vendors, and only performing the required actions for the riskiest of them. Yet, as I discussed in the previous post, for the second official draft of CIP-013 R1 and R2 the Implementation Guidance makes clear that the entity is expected (although not required, of course) in its supply chain cyber security risk management plan to prioritize vendors and/or systems by risk, and then determine appropriate controls based on the risk posed by each vendor or system type.

However, when you ask this same question about CIP-013 R3 from the first draft (just quoted), the answer is very different. Since the “documented processes” referred to in R3 are part of the plan developed in CIP-013 R1, and since that plan is risk-based, this means (in my opinion, of course) that R3 is a risk-based requirement as well, and R3 doesn’t necessarily require the entity to verify integrity and authenticity of every piece of software and patch installed on any High or Medium impact BCS (as CIP-010 R1.6 does). R3 only requires this be done for the more risky vendors and/or systems.

The upshot of all this is that the contradiction I identified in the previous post is a direct (and I’m sure unintentional) result of the fact that CIP-013 R3 in the first draft was moved to CIP-010 R1.6 in the second draft. And now that I think of it, this result was just about inevitable. CIP-010 R1 is a prescriptive requirement (along with CIP-007 R2, it is one of my two poster children for prescriptive requirements); any requirement part added to a prescriptive requirement will itself have to be prescriptive. It will simply have to apply to every piece of software or patch installed on every Medium or High impact BCS, period.

But what if CIP-013 R3 from the first draft had been moved under a non-prescriptive requirement such as CIP-007 R3 (anti-malware)? I actually don’t think it would have made a difference. There is simply no provision in CIP-002 through CIP-011 for the entity to be able to consider risk in how it complies with any of the requirements; they all apply to everything that is listed in the Applicable Systems column, no exceptions. The fact that CIP-013 R3 was moved to one of the other CIP standards means that it could no longer be tied to the entity’s supply chain risk management plan, and thus lost the benefit of that plan’s risk-based approach.

So what are the lessons of this? There are two that I can think of:

  1. Be careful what you wish for. A lot of entities commented on the first draft that they would much prefer that CIP-013 R3 and R4 (the requirement for controls on remote access by vendors) be moved into the other CIP standards, rather than continue to be part of CIP-013. I supported that move, since it seemed to me that CIP-013 itself would be much more coherent if these two requirements were relocated. However, it now seems there were severe unintended consequences of this move.
  2. Any attempt to make the CIP standards entirely non-prescriptive and risk-based (as I would like to see happen) will very likely run up against the fact that the whole NERC standards environment – especially CMEP, which governs how all NERC standards are audited and enforced – has a very difficult time accommodating anything but purely prescriptive requirements[i]. In fact, I would say that the current NERC standards environment will no more accommodate true risk-based requirements than a women’s restroom will accommodate men. I have raised this issue before, and I’m sure I’ll raise it again in the future: in order to really fix the problems with CIP, we will need not only new standards but a completely different auditing process (which requires a new CMEP and perhaps a new Rules of Procedure).

In fact, as I will explain in an upcoming post, I’m now wondering if NERC can ever be flexible enough to make the required changes. Their actions regarding CIP in the near future will probably be key to whether NERC will still retain authority for cyber regulation of the power grid, say 2-3 years from now. I think the time for NERC to make changes is quickly running out.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] You may want to point out that the problem I described in the last post isn’t caused by the “NERC standards environment” but by the particular standard (CIP-010) into which the requirement in question (R3 in the first draft of CIP-013) was inserted. After all, you might point out, if R3 had remained part of CIP-013 this problem wouldn’t have happened. This is a good point, but I also know that at least some people in the NERC ERO Enterprise are quite unhappy with the whole standard, so I wouldn’t say CIP-013 itself is settled yet, even though it is likely to be approved by the NERC ballot body and Board of Trustees.

Friday, July 7, 2017

A Contradiction in CIP-013


I have recently been talking with NERC entities about how they will actually implement CIP-013, the supply chain security management standard. This standard is close to final approval by NERC, and it’s almost certain that it – and the three related new requirement parts added to existing standards: CIP-005 R2.4 and R2.5, and CIP-010 R1.6 – will be delivered to FERC for approval in September. Then there will be a wait for FERC to approve the standard (I’m guessing around 9 months), followed by an 18-month implementation period.

This means that implementation of CIP-013 is still around 3 years away. This might seem like a lot of time, but I’ve come to realize that for most NERC entities – especially the very large ones – coming into compliance with CIP-013 will require a huge effort. This is because it will bring in departments that have never had to deal with CIP – or indeed any other NERC standard – before, especially Supply Chain and Legal. So a number of larger NERC entities are already starting to think about what they will have to do to comply, especially the organizational changes that are needed.

I was preparing for a CIP-013 workshop for one of these large NERC entities recently, when I came across a significant contradiction – specifically, a contradiction between how an entity needs to comply with CIP-013-1 R1.2.5, vs. with CIP-010-3 R1.6. And, depending on how this contradiction is resolved (and if it’s resolved, which isn’t at all certain), NERC entities might be forced to spend large amounts of time and money every year on compliance with CIP-010 R1.6.

Let’s start with CIP-013 R1. I won’t reprint the entire requirement but only R1.2.5 itself, along with R1 and R1.2, since they “govern” it[i]:

  • R1: “Each Responsible Entity shall develop one or more documented supply chain cyber security risk management plan(s) for high and medium impact BES Cyber Systems. The plan(s) shall include:”
  • R1.2: “One or more process(es) used in procuring BES Cyber Systems that address the following, as applicable:”
  • R1.2.5: “Verification of software integrity and authenticity of all software and patches provided by the vendor for use in the BES Cyber System…”

To summarize these three items:

  1. The entity needs to develop a supply chain cyber security risk management plan(s) for high and medium impact BCS.
  2. The plan needs to include a process(es) to procure BCS that addresses…
  3. Verification of integrity and authenticity of all software and patches provided by the vendor for use in the BES Cyber System.

What I’m interested in here is what this applies to. Specifically, is the plan developed in R1 required to apply to every piece of software on every BES Cyber System, or can the plan be risk-based, meaning that the entity might be able to make exceptions for some systems or software packages deemed to be lower risk – or even for certain vendors for which the risk of loss of integrity or authenticity in software and patches is deemed low?

If you look at the strict wording of R1 itself, you will find nothing that clearly says the plan can be risk-based. But if you look at the CIP-013-1 Implementation Guidance[ii] (and even at FERC Order 829, which ordered NERC to develop this standard), you will see pretty clear evidence that it can be.

Specifically, on page 2 of the Implementation Guidance, you’ll find (in the discussion of R1 itself): “To achieve the flexibility needed for supply chain cyber security risk management, Responsible Entities can use a “risk-based approach”. One element of, or approach to, a risk-based cyber security risk management plan is system-based, focusing on specific controls for high and medium impact BES Cyber Systems to address the risks presented in procuring those systems or services for those systems. A risk-based approach could also be vendor-based, focusing on the risks posed by various vendors of its BES Cyber Systems. Entities may combine both of these approaches into their plans. This flexibility is important to account for the varying ‘needs and characteristics of responsible entities and the diversity of BES Cyber System environments, technologies, and risk (FERC Order No. 829 P 44)’.” This passage indicates fairly clearly that the entity’s approach should be “risk-based” and “flexible”, meaning systems and/or vendors of different risk levels can be treated in different ways.
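
Purely as an illustration of what that flexibility could look like in practice, a risk-based plan might rank vendors and system types and reserve the most rigorous verification for the riskier combinations. The vendor names, risk tiers and decision rule below are all my own invention, not anything taken from the Guidance:

```python
# Toy illustration of a "risk-based" plan: vendor-based and system-based risk
# rankings combined to choose a procurement control. All values are invented.
VENDOR_RISK = {"Vendor A": "high", "Vendor B": "low"}
SYSTEM_RISK = {"EMS application server": "high", "substation HMI": "medium"}

def software_verification_control(vendor: str, system_type: str) -> str:
    """Pick a software verification control based on combined vendor/system risk."""
    risks = (VENDOR_RISK.get(vendor, "high"), SYSTEM_RISK.get(system_type, "high"))
    if "high" in risks:
        return "verify source identity and software integrity for every download"
    return "verify integrity whenever the vendor publishes hashes or signatures"

print(software_verification_control("Vendor B", "substation HMI"))
```

The Guidance suggests this kind of differentiation is acceptable in the R1 plan; the question is what happens when we get to CIP-010 R1.6.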

Now let’s go to CIP-010-3 R1.6, which is part of the “package” that implements CIP-013. For those of you familiar with the other CIP standards, the format will look very familiar: there is a box with three columns. The first column shows that High and Medium impact BCS are the applicable systems. The second column shows the actual requirement part, which reads:

“Prior to a change that deviates from the existing baseline configuration associated with baseline items in Parts 1.1.1, 1.1.2, and 1.1.5, and when the method to do so is available to the Responsible Entity from the software source:

1.6.1.  Verify the identity of the software source; and
1.6.2.  Verify the integrity of the software obtained from the software source.”

This means that the Responsible Entity needs to verify the identity of the software source, and the integrity of the software obtained from it, for every piece of software or patch installed on every Medium or High impact BES Cyber System; and of course this needs to be done every time a new patch is installed. The only exceptions to this rule are when – for whatever reason – the “method to do so” (i.e. to perform the verification) isn’t available.
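
In practice, the integrity half of this verification usually comes down to comparing a cryptographic hash of the downloaded file against a value published by the software source (the identity half is typically addressed through digital signatures or trusted download channels). Here is a minimal sketch of the hash comparison, assuming the source publishes a SHA-256 digest; the stand-in “patch” file and demo values are my own placeholders:

```python
import hashlib
import os
import tempfile

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded software package or patch."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def integrity_ok(path: str, published_sha256: str) -> bool:
    """Compare the computed digest against the value published by the software source."""
    return sha256_of_file(path).lower() == published_sha256.strip().lower()

# Demo with a stand-in "patch" file; in real use the published digest would come
# from the vendor's download page, signed release notes, or similar.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"pretend this is a vendor patch")
    patch_path = tmp.name
published = hashlib.sha256(b"pretend this is a vendor patch").hexdigest()
print("integrity verified" if integrity_ok(patch_path, published) else "digest mismatch")
os.unlink(patch_path)
```

The check itself is mechanical; the burden comes from having to perform it, and document it, for every piece of software and every patch on every Medium and High impact BCS, regardless of risk.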

Then what happened to the “risk-based” and “flexible” approach that the Implementation Guidance advocates for CIP-013 R1? There is no mention of it in CIP-010 R1.6 itself, and there is also no mention of it in the draft Guidelines and Technical Basis at the bottom of the standard.[iii] So it seems that, even though the entity can take the risk of systems and vendors into account in the plan it draws up to comply with CIP-013 R1, it can’t do the same when it comes to actually applying patches every month. To apply the patches (and other software), the entity needs to verify the identity of the source and the integrity of every patch, for every piece of software installed on a BCS, as long as the means to perform this verification are available. And this has to be done every month, if patching is done every month.

I have often heard from larger NERC entities that compliance with CIP-007 R2, the patch management requirement, is by far more burdensome than compliance with any of the other CIP requirements[iv] – and that is because the patch management steps need to be performed every month, for every piece of software installed on every Medium and High impact BCS[v]. It now seems that these entities will have another step – and another very burdensome one – added to this process for many if not most systems, regardless of the risk posed by the system or the vendor. And this is in spite of whatever risk-based language the entity may have in the supply chain risk management plan it develops under CIP-013 R1.

I know the SDT didn’t intend to have this contradiction between two parts of the supply chain standards, but I don’t see a way around it. Do you?


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] These quotations come from the SDT’s draft of June 28, the most recent that I’ve seen.

[ii] The Guidance is also being revised, although I doubt the particular passages I quote here will be substantially changed. You can find the last (and so far only) official draft of the Implementation Guidance on the SDT’s page on the NERC website (you can also find the second official drafts of CIP-013-1, CIP-005-6 and CIP-010-3, although not the unofficial drafts developed by the SDT since then).

[iii] The Guidelines and Technical Basis for CIP-010 R1.6 wasn’t included in the second draft (i.e. the most recent draft posted on the website). I’m referring here to the most recent “unofficial” draft version of CIP-010-3, which was mailed out to the SDT’s Plus List recently.

[iv] One medium-sized entity told me that half of all the NERC compliance documentation they produce in their control centers is due to this one requirement.

[v] CIP-007 R2 also applies to Protected Cyber Assets, unlike CIP-010 R1.6, so there is that difference. As my former boss used to say, “Thank God for small favors.”

Monday, July 3, 2017

Is CIP-013 the First Non-Mandatory NERC Standard?


This is the fourth (and probably not the last!) in a series of posts on the question of whether CIP-013 is enforceable. The previous one is here. For those of you who aren’t keeping score at home, this series started with a conversation I had with a staff member of one of the NERC Regions, who raised the question of whether CIP-013, the upcoming supply chain security management standard, is enforceable (spoiler alert: he doesn’t think so).

An auditor from another region replied to that post with the contrary opinion, and I published both of their responses (without taking sides) in the second post. In the third post (linked above), I rethought my neutrality when I considered the entire email that the staff member had sent me (in the second post, I hadn’t published everything he said). This time, I published the entire email and agreed with the staff member’s statement that CIP-013 (especially R2) seems to conflict with at least one sentence in the NERC Rules of Procedure.

Today, I emailed the staff member to make sure he saw this post. His reply showed me that his question on the enforceability of CIP-013 goes beyond just the RoP. In particular, he quotes the second sentence of the note next to R2 in the standard, which reads “the following issues are beyond the scope of Requirement R2: (1) the actual terms and conditions of a procurement contract; and (2) vendor performance and adherence to a contract.”

The staff member’s point is very simple: If the “actual terms and conditions” of a procurement contract aren’t in scope for this requirement, yet the entity is allowed to claim that they have complied with the requirement – in the case of at least one of their vendors – by getting the vendor to include particular language in their contract, then not only is R2 (and probably R1) not enforceable, it isn’t even mandatory!

This is certainly something I hadn’t thought about. Of course, I think there are a number of entities who secretly wouldn’t mind if all of the CIP standards were non-mandatory, as were all NERC standards before the US Energy Policy Act of 2005. But – sorry to break the bad news – this isn’t allowed since, at least as of today, compliance with the laws passed by Congress is still mandatory for everybody in the US, including FERC and NERC (if this changes in the near future, I’ll let you know).



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Sunday, July 2, 2017

Part III: Is CIP-013 Unenforceable?


I have a Constitutional right to change my mind, and I’m about to do that.

In this recent post, I repeated the concerns that a staff member from one of the NERC Regional Entities had raised to me about whether CIP-013, the upcoming supply chain security management standard, is even enforceable, given that it says contract language is not “in scope” for the standard. This provision means that a NERC entity doesn’t have to show any actual contracts to an auditor in a CIP-013 compliance audit, even though the entity may have asserted that they complied with CIP-013 R2 by getting vendors to include language in their contracts that covers the six specific items required by CIP-013 R1.

As I described in this follow-on post, an auditor from another NERC region responded to the first post by pointing out that “The requirement is essentially that the Registered Entity ask for the elements of the required process(es), not that the vendor agree to them.” In other words, you don’t need to show a signed contract to prove you complied with CIP-013 R2 for a particular vendor. The auditor went on to state that an RFP or even emails stating the “expectations to be placed on the vendor” could constitute evidence of compliance.

I heard a similar argument made by Kevin Perry, the head CIP auditor of SPP, at SPP-RE’s annual CIP workshop in Little Rock this week. He emphasized that CIP-013 R2 (implicitly) requires the entity to try to get the vendor to agree to commit (through contract language or another means like just an email) to doing the six things required in CIP-013 R1, not that they succeed in doing so. They need to provide evidence, like the RFP or emails described above, that shows they indeed tried to obtain that commitment.

The RE staff member wasn’t convinced by this auditor’s argument when I presented it to him, and he sent me a new email. I excerpted two paragraphs from that email in the follow-on post, but I didn’t simply reprint the whole email. Now that I have gone back and read the email, I realize I didn’t do justice to what the staff member said. Here is the full text of his email, although I have broken it into two parts to indicate the two arguments he makes:

Argument One: “The ultimate goal of CIP-013 is to modify the terms of acquisition contracts used by the Responsible Entity:

FERC Order 829 page 59: ‘The new or modified Reliability Standard must address the provision and verification of relevant security concepts in future contracts for industrial control system hardware, software, and computing and networking services associated with bulk electric system operations.’

In keeping contracts out of scope for audits, CIP-013 does not fulfill the underlying purpose of the Standard.”

Argument Two: “There may be some things that can be audited, but the auditors will be handicapped in reviewing evidence. They will not be able to audit that ICS contracts contain provisions which satisfy the security controls of R1, and they will not be able to verify that the entity enforces these controls.

Ultimately, this version of CIP-013 does not fulfill the definition of a Risk-Based Requirement: “[D]efine actions by one or more entities that reduce a stated risk to the reliability of the Bulk Power System and can be measured by evaluating a particular product or outcome resulting from the required actions.” [NERC RoP App 3A Sect 2.4] If the outcome cannot be measured, then the Requirement fails as a Risk-based Requirement.”

Let’s look at the first argument. Although he didn’t comment further, it seems to me the staff member was saying that the fact that FERC specifically used the word “contracts” in the quotation from Order 829 means that contracts with the appropriate language are actually what FERC was aiming for, not just an assurance that the entity tried to get the vendor to commit to what is needed.

To this argument, the auditor replied that the enforceability of CIP-013 has nothing to do with whether it fulfills FERC’s order. FERC has to decide whether NERC has fulfilled their order, and if they think NERC hasn’t done so, they will remand it and repeat (or state more explicitly) that this is all about contracts; at that point, NERC would have to change the language. I agree with the auditor on this point.

However, the staff member’s second argument gives me pause. I think he may be right: A requirement that just requires the entity to give it the ol’ college try, without having to show they actually achieved anything, probably does violate the NERC Rules of Procedure.

Unfortunately, I think this brings us back to a point I made in a post from April. There, I documented how I had come to the realization that just having non-prescriptive, results-based NERC requirements (which NERC refers to as “risk-based”, although I don’t like the use of that term in this context) isn’t enough. The whole NERC enforcement environment – including the Rules of Procedure and CMEP – is oriented toward prescriptive requirements. This is why I reluctantly concluded that it will be almost impossible to get any new non-prescriptive requirements approved by the NERC ballot body[i].

In fact, I think it will be very hard to get any new CIP prescriptive requirements approved going forward, either. I think we’ve simply reached the end of the line for any expansion of CIP, absent a FERC order. Look for more on this topic coming soon to a blog near you.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] And don’t point out to me that CIP-013 is non-prescriptive and just got approved. There was a special reason why CIP-013 suddenly surged from about 9% support on the first ballot to 85% or so on the second (and this same reason probably led to CIP-003-7 being approved last year on the second ballot, after failing badly on the first): there was a FERC deadline that had to be met. Absent such a deadline, I think it will be hard for any new requirements – prescriptive or not – to be approved.