Sunday, April 23, 2017

Cyber Regulation for Natural Gas Pipelines (and for the BES)


Probably the next most critical infrastructure after electric power in North America is natural gas. Like power, there is a nationwide gas transmission system whose loss would affect homes and businesses almost like an electrical outage would. And with the increasing dependence on natural gas for generation, a widespread cyber attack on gas pipelines would be an attack on the electric power system as well.

There is currently a fairly perfunctory cyber security regulation for gas pipelines, promulgated by the TSA (yes, the same people who make you take off your shoes at the airport!), but I know a lot of people think much more is needed. The question then becomes (and I’ve been asked this several times) “Would NERC CIP be a good model for gas pipeline regulations?”

Of course, I think most people in the power industry would just emit a hearty laugh if asked this question, and I’d have to agree with them – it’s hard to imagine inflicting the current CIP compliance regime on any industry except perhaps an enemy’s. But the question then becomes: “What would be a good model for cyber regulation for gas pipelines?”

When recently asked this question, I gave it a little thought, then realized the answer was quite simple: whatever would be the right solution for the electric power industry would be the right solution for any critical infrastructure; what works for one should work for all of them.

But what would work for the power industry? If you’ve been reading this blog for a while, you’ve seen this question addressed tangentially in various ways, but never set forth in one place as a specific program. As I’ve mentioned previously, I am now discussing writing a book with a couple of co-authors, which will address this question in detail. But the main reason I haven’t attempted a full frontal assault on this question in my blog is that I haven’t felt I could succinctly articulate an answer.

Until now. I do believe I can now articulate, in six sentences (OK, maybe seven), what a critical infrastructure cyber security regulation should look like. I will list them here without justification and without detail on how they might be implemented; for that, you’ll have to wait for the book, although I’m sure I’ll sketch out a lot of the details in future posts. Of course, I’d welcome any comments or questions about what I say below – I’ll try to answer using whatever I know at the current time – and I’d also like to hear your opinions on whether this sounds like the right approach or not.

In my humble opinion, a workable cyber security compliance regime for any critical infrastructure sector needs to be based on six principles (a rough sketch of how they might fit together follows the list):

  1. The process being protected needs to be clearly defined (the Bulk Electric System, the interstate gas transmission system, a safe water supply for City X, etc.).
  2. The compliance regime must be threat-based, meaning there needs to be a list of threats that each entity should address (or else demonstrate why a particular threat doesn’t apply to it).
  3. The list of threats needs to be regularly updated, as new threats emerge and perhaps some old ones become less important.
  4. While no particular steps will be prescribed to mitigate any threat, the entity will need to be able to show that what they have done to mitigate each threat has been effective[i].
  5. Mitigations need to apply to all assets and cyber assets in the entity’s control, although the degree of mitigation required will depend on the risk that misuse or loss of the particular asset or cyber asset poses to the process being protected.
  6. It should be up to the entity to determine how it will prioritize its expenditures (expenditures of money and of time) on these threats, although it will need to document how it determined its prioritization.
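
To make these principles a bit more concrete, here is a minimal sketch in Python of what a threat registry and an entity’s mitigation record might look like under principles 2 through 6. Everything in it – the threat names, the fields, the idea of a published threat list – is my own illustration of the concept, not language from any actual or proposed standard.

    from dataclasses import dataclass

    @dataclass
    class Threat:
        name: str                    # e.g. "phishing of control center staff"
        applies: bool = True         # principle 2: the entity may document non-applicability
        rationale: str = ""          # why the threat does or doesn't apply to this entity

    @dataclass
    class Mitigation:
        threat: Threat
        measures: list               # principle 4: the entity chooses its own measures
        effectiveness_evidence: str  # principle 4: evidence that the measures actually worked
        priority: int                # principle 6: entity-determined priority
        priority_rationale: str      # principle 6: documented basis for that priority

    # Principle 3: the threat list is maintained and periodically reissued outside the
    # standards development process (e.g. by the regulator or an industry body).
    current_threat_list = [
        Threat("phishing of control center staff"),
        Threat("unpatched known software vulnerabilities"),
        Threat("compromised vendor remote access"),
    ]

    def compliance_gaps(mitigations, threat_list):
        """Threats that apply to the entity but are neither mitigated nor explained away."""
        covered = {m.threat.name for m in mitigations}
        return [t.name for t in threat_list if t.applies and t.name not in covered]

The point of the sketch is simply that every applicable threat ends up with either a documented mitigation (and evidence that it worked) or a documented reason it doesn’t apply – which is also what makes this approach auditable.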

Of course, I’m not going to say now that I won’t ever add to or subtract from this list of principles. But I think they’re a good start.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] My eyes were opened at the RF CIP workshop this past week, when Lew Folkerth pointed out that the key to being able to audit non-prescriptive requirements is for the entity to have to demonstrate that the measures they took were effective. I will do a post on this soon.

Friday, April 21, 2017

The News from RF: NERC Readies the Nuclear Option


I attended RF’s spring CIP workshop in Baltimore this week. I have been to a lot of regional CIP meetings over the past 8+ years, but this was the best I’ve seen so far. I kept thinking there would be at least one boring presentation where I could get some email done, but it never happened all day! I will have a number of posts on things I learned during that day (probably not consecutively). This is the first, and perhaps the most important.

One of the presentations was by Corey Sellers, the Chair of the Supply Chain Security Standards Drafting Team. Corey gave a very good rundown on a) the objections that were raised to the first draft of CIP-013, which was roundly voted down by the NERC ballot body; and b) the changes the drafting team is working on – with emphasis on the present progressive tense. He wasn’t kidding when he said changes were being made as he spoke; in fact, new versions were sent out to the Plus List while the meeting was still in session. The revised standards should be posted for a new ballot in a few weeks.

It was quite interesting to see all the changes that are being made – to CIP-013 as well as to CIP-003, CIP-005 and CIP-010. I can’t summarize them here, but I will say they are quite ambitious and have a lot of moving parts. But my concern wasn’t so much the substance of the changes as the timeline for approval.

At the end of his presentation (which included taking a number of questions from the room – Corey certainly wasn’t trying to hide anything!), I asked him the question I was most concerned about. Here is the situation that prompted my question:

  1. Obviously, the first draft of CIP-013 failed miserably, receiving only about 10% positive votes.
  2. The next posting will need 68% to pass (I wasn’t sure about the exact number, but Corey readily supplied it to me). In the best of circumstances, it would be very difficult to go from 10% to 68% in one ballot. And frankly, with the large number of changes, and especially the fact that changes are being made to three existing standards as well as to CIP-013 (plus the fact that two new terms are used – “vendor” and “machine-to-machine remote access” – for which there will be no vote on definitions), it seems especially unlikely that the next ballot will pass. This means it will very likely take at least one additional ballot before it passes (both CIP v5 and v6 took at least three ballots to pass, and in both cases I believe there was a fourth ballot to clean up the wording).
  3. At the same time, the deadline remains September to a) have the revised standards approved by the ballot body; b) have the NERC Board of Trustees approve them; and finally c) file them with FERC. And since the BoT’s last meeting before September is in mid-August, the changes need to be approved by the ballot body before then.
  4. If you do the math (a rough back-of-the-envelope sketch follows this list), it’s quickly evident there’s no room for a third ballot, should the second one fail. So it appears very likely NERC will miss FERC’s deadline.
  5. But despite all of this, Corey said that NERC had assured him it would not miss the deadline, and FERC would have their supply chain security standard (and related changes in other standards) in September.
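
Here is the back-of-the-envelope sketch promised in item 4. All of the durations are my own assumptions (roughly a 45-day comment/ballot window and about a month to respond to comments between postings), not official NERC process figures; the only point is that a third ballot doesn’t fit in front of a mid-August Board meeting.

    from datetime import date, timedelta

    # Assumed durations, for illustration only - not official NERC process figures.
    second_posting      = date(2017, 5, 1)     # assume the second draft posts in early May
    ballot_window       = timedelta(days=45)   # assume a ~45-day comment/ballot period
    respond_to_comments = timedelta(days=28)   # assume ~4 weeks to revise after a failed ballot
    board_meeting       = date(2017, 8, 15)    # the mid-August BoT meeting

    second_ballot_closes = second_posting + ballot_window                           # mid-June
    third_ballot_closes  = second_ballot_closes + respond_to_comments + ballot_window

    print(third_ballot_closes, third_ballot_closes <= board_meeting)                # 2017-08-27 False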

Given all this, the natural question would be “Why do you believe NERC can make the deadline, when it’s very likely the standard won’t have been approved by the NERC ballot body by then?” However, I actually asked Corey a different question: Did he know about Section 321 of the NERC Rules of Procedure? He readily admitted he knew that section quite well, and had been discussing it a lot lately; I was not surprised to hear that, because I had heard last week (from a very reliable source) that NERC was seriously discussing invoking Section 321 for the first time – for CIP-013 and the associated changes. Corey said he had deliberately not brought up Section 321 in his presentation, but he was glad I had.

What is this mysterious Section 321? You can read it yourself, but it essentially allows the NERC BoT, in the event that the normal balloting process has not yet produced a draft standard (or standards) that, in the Board’s opinion, will satisfy an order from a regulatory body (which means FERC, here in the US), to have the Standards Committee draft one that will satisfy the order.

The wording of Section 321 is much more oriented to the case where a standard has been approved by the ballot body, yet is inadequate for some reason; in this case, we’re talking about a deadline being missed. However, I have no doubt – and Corey does not seem to either – that 321 could be made to apply to this case. I doubt the BoT will have a big problem with the wording of the new standard and the changes to existing standards; the problem is that there isn’t enough time to go through the normal approval process before FERC’s deadline.

So the bottom line is: This next ballot will very likely be the last one for the supply chain standard. If it passes, the current wording (with some legal clean-up) will be approved and submitted to the Board in August. If the ballot fails, then it will be up to the BoT and the Standards Committee to determine what the wording should be – and whatever they decide on will still be submitted to the Board in August. Of course, since these committees are both made up of industry members, it’s not likely that what they ultimately approve will be hugely different from the second draft. In fact, I imagine they might also consider the comments that are made in the second round of balloting, and use those to improve on that draft. So I’m not expecting the final version of CIP-013 to be some Frankenstein freak that nobody will like.

But this isn’t the end of the story. I learned from one of the participants at the meeting that the next ballot is likely to pass after all, given the very strong support being provided by a major industry group. If so, the Section 321 “nuclear option” might be put back on the shelf for another day. But whether or not 321 is invoked, it’s pretty clear to me that the normal balloting process is being short-circuited – in the one case by 321 being invoked, in the other by the substantial pressure this industry group is exerting on their members to vote yes, in spite of lingering misgivings they may have. In other words, it won’t be a completely free-will approval.

The real problem here is the fact that FERC only gave NERC a year to develop and approve the new standard. This was definitely not enough time, as was eloquently expressed by Commissioner (and now acting Chairman) LaFleur in her powerful dissenting opinion – and by me in my post on Order 829 (which includes a summary of Commissioner LaFleur’s argument).

I suggested at the time – both in my blog and at, I believe, two NERC CIPC meetings – that NERC should petition FERC to extend the deadline, to no avail. I suggested this to Corey as well, but he assured me that NERC wasn’t going to do that (it’s not clear whether FERC could approve a deadline extension at this point, since the Commission doesn’t have a quorum. But it does retain some powers to act, and given that Cheryl LaFleur is now the acting Chairman, I would think she would be inclined to grant an extension if at all possible).

But it seems the decision has been made not to even ask for an extension. This is all quite unfortunate, of course. Does anyone doubt that another year, or even half a year, of debate and modification of CIP-013 would result in a much better standard? Or to word this differently, is there anybody who seriously believes that the SDT has such amazing listening and writing skills that they will be able to come up with exactly what is needed to satisfy everybody in their upcoming draft, and most of the 90% who voted no on the first draft will now be happy as clams with every word they’re voting on in the second draft? Please raise your hand if so….Yes, I didn’t think there would be anybody.

In any case, we’ll get what we’ll get.  It will certainly be decent, but it’s unfortunate it can’t be really good. I see this as the symptom of a bigger problem – the canary in the coal mine that just died, in the process revealing a serious condition that threatens the miners themselves. More on this in another post coming soon to a blog near you.



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Wednesday, April 19, 2017

Lew Folkerth’s “Zones of Authority”


For what seems the 50th time, I’m starting a post by referring to an article by Lew Folkerth of Reliability First, found in the March-April issue of RF’s newsletter (as always, under the heading “The Lighthouse”). As has been the case on every previous occasion, Lew has put his finger on an important issue in the current NERC CIP compliance regime.

The article[i] specifically addresses the question of whether the use of virtualization is permitted by the current CIP standards and, if so, what is permitted and how it should be implemented in a compliant manner. Lew answers the first question by saying virtualization is neither permitted nor prohibited. This is of course not earth-shattering news, but Lew quickly moves on to something that is quite interesting: the concept (which he invented, at least as far as CIP goes) of “zones of authority”.

Lew brings this concept up for the following reason: Even though there are no actual CIP requirements that apply to virtualized environments, this hasn’t stopped NERC and the regions from coming up with a few “guiding principles” for virtualization; probably the most important of these is that there should not be “mixed-trust” virtualized environments. Briefly, this implies that a host server for multiple VMs shouldn’t host both ESP and non-ESP devices, and a switch with multiple VLANs shouldn’t host both ESP and non-ESP networks.

Lew says (or implies) that auditors are having differences of opinion with entities on the security of mixed-trust switches. It seems these entities have switches that implement both ESP and non-ESP VLANs. When the auditors tell them this isn’t secure, the entities point out that the non-ESP VLANs have just as good security as the ESPs do[ii]. So why aren’t they safe?

Lew doesn’t dispute that it is very likely that, from a pure security point of view, these entities are right: mixed-trust switches are just as safe as switches that only implement ESPs. But this is where zones of authority come in. Lew points out that the auditor can only look at CIP assets, like ESPs. Anything else, such as a VLAN that isn’t an ESP, is completely out of his or her purview; the auditor simply has to assume those VLANs are completely insecure, and thus that they shouldn’t be found on the same switch as ESP VLANs.
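
Here is a small sketch of that asymmetry – my own illustration, not anything from Lew’s article or from any audit tool. The auditor can only credit controls on the ESP VLANs; any non-ESP VLAN on the same switch has to be treated as untrusted by default, however well it is actually secured.

    # Hypothetical switch hosting both an ESP VLAN and a corporate VLAN ("mixed trust").
    switch_vlans = {
        10: {"name": "esp-relay-net", "in_esp": True,  "well_secured": True},
        20: {"name": "corp-net",      "in_esp": False, "well_secured": True},  # outside the auditor's zone of authority
    }

    def auditor_view(vlans):
        """The auditor may only consider ESP VLANs; everything else is assumed insecure."""
        esp_vlans     = [v["name"] for v in vlans.values() if v["in_esp"]]
        unknown_vlans = [v["name"] for v in vlans.values() if not v["in_esp"]]  # controls can't be credited
        mixed_trust   = bool(esp_vlans) and bool(unknown_vlans)
        return esp_vlans, unknown_vlans, mixed_trust

    # mixed_trust comes back True: the switch gets flagged, even though the entity knows
    # corp-net is secured just as well as esp-relay-net.
    auditor_view(switch_vlans)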

Of course, Lew is completely right about this: The NERC CIP standards only take account of BES Cyber Systems and their associated ESPs, PCAs, EACMS, etc. Anything that isn’t one of these is completely out of scope. This is a double-edged sword, of course. On the one hand, the auditors can’t look at what isn’t in an ESP, which limits the kinds of evidence an entity can show them. On the other hand, the entity has no compliance obligations for anything that isn’t in an ESP and doesn’t protect ESP devices (as an EACMS or PACS does). My guess is most entities, if told they could force CIP auditors to consider security controls they have implemented for non-ESP networks, but only in return for having at least some of the cyber assets on those non-ESP networks fall into scope for CIP, would say “Thanks but no thanks. We’ll leave things as they are.”

And this is exactly the issue: It would make a lot of sense from a security point of view to have all cyber assets and networks that are in the entity’s control be in scope for CIP[iii]. Just look at the Ukraine attacks, which started with phishing attacks on the IT networks. It was only after completely taking over those networks that the attackers were able to find a way into the relays that were their ultimate targets, even though those relays were on the OT network.[iv] But the idea of having all or even just a few of their IT cyber assets suddenly become BES Cyber Systems would cause a lot of CIP professionals to bash their heads against the nearest hard object, or seriously consider a career change to fast food service.

So are we stuck here? While it would certainly be a good idea to have all networks under the NERC entity’s control be in scope for CIP, I would agree with my hypothetical CIP professional that the burden of making these networks ESPs, and making even a few of their cyber assets be BES Cyber Systems, would be too great to justify the benefit: the enhanced ability to demonstrate their security posture to an auditor.

But we’re not stuck here. Were we to rewrite the CIP requirements so that they weren’t prescriptive (i.e., so they were objectives-based), and enclose them in an appropriate “wrapper” so that the entity was simply required to address a certain set of threats (and this list of threats were regularly updated, outside of the standards development process), I think we could safely expand CIP to include all networks under an entity’s control. But there would still have to be an appropriate classification of cyber assets in scope, depending on how “near” they are to the BES. Those that are directly involved in control of the BES (as BCS are in the current standards) would have to have stronger controls than those whose involvement is only indirect (such as the IT network assets at the Ukrainian utilities that were attacked).
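
Here is a minimal sketch of the kind of classification I have in mind. The tier names, the numbers and the rule for how much mitigation each tier owes are purely my own illustration – a way of showing the shape of the idea, not proposed requirement language.

    # Assumed tiers: how "near" a cyber asset is to control of the BES.
    TIERS = {
        "direct_control": 3,   # e.g. relays, EMS servers - like today's BES Cyber Systems
        "supporting":     2,   # e.g. EACMS, PACS, historians
        "indirect":       1,   # e.g. IT/business network assets under the entity's control
    }

    def required_mitigation_depth(tier, threat_severity):
        """Illustrative rule: more depth for assets nearer the BES and for more severe threats (0-3)."""
        return min(3, TIERS[tier] * threat_severity // 3)

    required_mitigation_depth("indirect", 3)         # -> 1: an IT workstation still needs some mitigation
    required_mitigation_depth("direct_control", 3)   # -> 3: a control center EMS server needs the full set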

Of course, I’ve made this point about the need to rewrite CIP previously. And I will continue to make it, including in my next two posts.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] I recommend you read the article yourself, of course, even though I will summarize parts of it here. Don’t worry – it’s just a page. If I had written the article, it would have at least been the length of one chapter in a book, if not an entire book.

[ii] I’m speculating here. I’m not sure if auditors are actually running into this problem already, or if Lew just anticipates that they will. Either way, Lew’s discussion provides an excellent window into a real problem.

[iii] I recently was introduced to another CIP blogger, Larry G, who publishes a blog called NERD CIP; while I haven’t yet been able to spend much quality time with his posts, they look quite insightful. In particular, he recently wrote a post that starts off with a consideration of the same Lew Folkerth column that this post started with (although he has a different take on it). In that post, Larry points to the PCI definition of Trusted Network, which reads “Network of an organization that is within the organization’s ability to control or manage.” Since all Trusted Networks are potentially in scope for PCI, they will all need to be considered – either because they themselves hold payment card data or because their compromise could lead to compromise of the networks that do hold that data. In fact, I will venture a guess that CIP is the only set of cyber security standards that limits itself to a subset of the networks actually controlled by the entity.

[iv] And please don’t tell me that it was only because the three Ukrainian utilities had poor separation between their IT and OT networks (lack of two-factor authentication, lack of intermediate server, etc) that the latter were penetrated. When attackers have complete control of the IT network for months, they will one way or the other be able to find their way into the OT network, no matter how good the security may be. How about this: One day all of the relay engineers get an email from their boss (when he is on vacation) that due to some need for emergency maintenance they need to open all of the circuit breakers at a certain time that day. Some might not follow this instruction; others might do it. In any case, there would be a coordinated outage. If this happened at a number of utilities, it could cause a serious BES problem.

Monday, April 17, 2017

More on CIP and the Cloud


In this recent post, I came to the conclusion – led there by an auditor – that NERC entities can entrust BES Cyber System Information (for Medium and High impact BES Cyber Systems) to cloud providers, as long as they comply with four requirement parts: CIP-011 R1.2, CIP-004 R4.1.3, CIP-004 R4.4 and CIP-004 R5.3.

However, it seems I may have missed two requirement parts. At the WECC CIP User Group meeting in Denver recently, auditor Morgan King did a very good presentation on CIP and virtualization (for the slides, go to this page and click on the presentation with his name on it). While the virtualization discussion was very good, he also brought up the cloud (since the two technologies go hand in hand). On slides 23 and 24, he lists six requirement parts that apply to BCSI in the cloud. Besides the four I just listed, he also includes CIP-004 R2.1.5 and CIP-011 R2.1. So I recommend you make sure you’re compliant with all six of these.

Since all six of these requirement parts will require that the cloud provider have certain policies and procedures in place – and that they maintain the same level of documentation that is required of the entity itself – I know some readers will object that this places too big a burden on the entity, which will in effect have to audit the cloud provider. If you have this objection, I recommend you look at this post, which points out that a third party audit like SOC 2 could well be considered sufficient evidence of compliance.

And I also recommend this post, which points out that encrypting the data in the cloud is a good mitigation measure for compliance with some of the six requirement parts, but it doesn’t remove the obligation to comply with those parts. You will still have to provide evidence that you and your cloud provider are complying with each part.

Another thing to keep in mind is that your information protection program under CIP-011 R1.2 requires measures to address data in transit, not just data at rest. This means you might need to encrypt the data before it goes to the cloud provider, depending on how you send it.
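
As one illustration of what that could look like, here is a minimal sketch of encrypting a BCSI file locally before it ever leaves your network, using the third-party Python cryptography package. The file name is hypothetical, and key management – the hard part, and something you’d want to keep on premises – isn’t shown; this is an example of the kind of control you might document under your information protection program, not a statement of what the requirement demands.

    # Illustrative only: encrypt a BCSI export locally before uploading it to a cloud provider.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, generate once and control the key on premises
    cipher = Fernet(key)

    with open("bcsi_export.csv", "rb") as f:             # hypothetical BCSI file
        ciphertext = cipher.encrypt(f.read())

    with open("bcsi_export.csv.enc", "wb") as f:
        f.write(ciphertext)

    # Only the .enc file goes to the provider; sending it over an encrypted channel (e.g. HTTPS
    # or SFTP) also addresses the data-in-transit piece, since the provider never sees plaintext.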



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Wednesday, April 12, 2017

A Great Supply Chain Security Guide


Early this year, I was invited to speak at the EastWest Institute’s Global Cybersecurity Cooperation Summit in Berkeley, CA in March – specifically, at the meeting of the EWI’s Breakthrough Group on Increasing the Global Availability and Use of Secure ICT[i] Products and Services. I had certainly heard of the EastWest Institute previously, but only in the context of their weighing in on ponderous global issues like war and peace. I didn’t realize that cybersecurity would rank as a concern worthy of their attention, but this is obviously the case, and has been for many years.

I believe the reason I was invited to speak at this meeting was the various posts I have written on FERC Order 829 and the subsequent development of CIP-013, the new Supply Chain Security standard (of course, that standard is very much still in the development process. Moreover, three of the existing CIP standards are now being modified to include requirements from the first draft of CIP-013 that commenters on that draft felt would be better placed in those standards).

Last year the Breakthrough Group published an ICT Buyers Security Guide. I read the guide and discussed it with the members of the group before the meeting. I am quite impressed with this document, for two important reasons. First, it is concise (the main discussion takes up about 22 pages). As such, it contrasts vividly with NIST 800-161, NIST’s supply-chain security guide. Like most NIST publications, 800-161 tries to exhaustively (and exhaustingly!) cover every possible aspect of its subject, supply chain security. Unfortunately, the result is that non-governmental organizations, who aren’t required to follow it, must put in a considerable amount of effort just to decide which controls they should focus on (and also, for each control, how they should address it).

Second, the guide is very practical. Perhaps because it is concise, it is focused on providing guidelines that organizations can immediately put into practice. These guidelines are mostly in the form of 25 questions that can be asked of suppliers, like “Are third-party inputs evaluated for security prior to selection and tracked/validated upon entering the supply chain?” and “How are products and services continually tested for security vulnerabilities?”[ii]

If you’re wondering how this Guide might fit in with CIP-013, I would think some or all of these questions might be incorporated into your entity’s process for compliance with CIP-013 R1.1.1 (at least, as that requirement part stood in the first draft, posted in January).
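
As a sketch of how that might work in practice – the structure, field names and supplier name are my own assumptions, and the two questions are the ones quoted from the Guide above – an entity could fold the Guide’s questions into the supplier risk-assessment step of its CIP-013 process:

    # Illustrative only: tracking buyer's-guide questions as part of a supplier risk assessment.
    guide_questions = [
        "Are third-party inputs evaluated for security prior to selection and "
        "tracked/validated upon entering the supply chain?",
        "How are products and services continually tested for security vulnerabilities?",
    ]

    def assess_supplier(name, answers):
        """Record a supplier's answers and flag unanswered questions for follow-up."""
        follow_up = [q for q in guide_questions if not answers.get(q)]
        return {"supplier": name, "answers": answers, "follow_up": follow_up}

    record = assess_supplier(
        "Hypothetical Relay Vendor Inc.",
        {guide_questions[0]: "Yes - documented secure development process with component review"},
    )
    # record["follow_up"] still contains the second question, to be resolved before contract award.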

So I recommend that you read this document, and consider how it might help your organization achieve two goals: a) Improve your supply chain security posture; and b) Comply with CIP-013.

I also want to point out that the EWI supply chain security group is now working on a major revision to the Guide. If you might be interested in participating in that process (which includes phone conferences and in-person meetings), let me know and I’ll put you in touch with the leader of the group.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] ICT stands for Information and Communications Technology.

[ii] Of course, it’s up to the organization to determine which questions to ask of which suppliers. One big difference between this Guide and both CIP-013 and NIST 800-161 is that this guide focuses entirely on what suppliers do and don’t do. It doesn’t address other areas that are under the entity’s control, such as secure deployment and vendor remote access control. Of course, some might argue that these topics aren’t really part of supply chain security. And they probably wouldn’t be in CIP-013 either, except for the fact that FERC ordered they be included.

Wednesday, April 5, 2017

The Prisoner’s Dilemma


I remember reading a few news stories about long-time prisoners who are released and find that life on the outside is much harder than it was in prison. I don’t think many of these people actually ask to be re-admitted, but it stands to reason that someone who has lived almost his whole adult life behind bars might view the complete lack of lifestyle choices there as in many ways preferable to the myriad of choices that all of us who live outside of prison have to make every day.

This is perhaps a bit of an overdrawn analogy, but I have more than once thought of this “prisoner’s dilemma”[i] as akin to the situation of NERC compliance professionals who object to non-prescriptive, results-based standards for cyber security (such as the CIP-013 standard for supply chain security) on the grounds that they will not work in practice, compared with good ol’-fashioned prescriptive standards.

Why do some NERC CIP compliance professionals feel that non-prescriptive requirements won’t work? If you haven’t been in the trenches of CIP compliance for the past, say, three or four years at least, you may not understand why anyone would prefer prescriptive requirements, at least when it comes to CIP. For example, who would prefer CIP-007 R2.2 – which mandates that every 35 days the entity must, for every device in its ESPs, check with the patch source to determine whether a new patch has been released – to CIP-007 R3.1, which simply directs the entity to “Deploy method(s) to deter, detect, or prevent malicious code”?
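
To make the contrast concrete, here is a minimal sketch of the kind of tracking R2.2 forces on an entity. The 35-day interval comes from the requirement; the devices, sources and dates are hypothetical, and this is an illustration of the bookkeeping, not a compliance tool.

    from datetime import date, timedelta

    CHECK_INTERVAL = timedelta(days=35)    # the mandated cadence in CIP-007 R2.2

    # Hypothetical record of when each patch source was last evaluated, per ESP device.
    last_evaluation = {
        ("EMS-server-01", "OS vendor"):       date(2017, 3, 1),
        ("EMS-server-01", "HMI vendor"):      date(2017, 3, 20),
        ("substation-rtu-07", "RTU vendor"):  date(2017, 2, 10),
    }

    def overdue(records, today):
        """Every (device, source) pair not evaluated within 35 days is a potential violation."""
        return [pair for pair, when in records.items() if today - when > CHECK_INTERVAL]

    overdue(last_evaluation, date(2017, 4, 5))
    # -> [('substation-rtu-07', 'RTU vendor')] - a miss of a few days, regardless of actual risk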

Well, it seems a lot of people do in effect prefer R2.2. The fact is that many NERC compliance professionals – in particular ones who have had to deal with CIP for a number of years – shrink back in horror from the idea that all of the CIP requirements should be like CIP-007 R3.1, not R2.2. I was reminded of this very forcefully when I recently attended a meeting of the team drafting CIP-013, the new supply chain security standard, in San Antonio.

The team met just after their first draft of that standard was soundly rejected by the NERC ballot body. All of the requirements in that draft are non-prescriptive[ii], and some commenters indicated they were very worried about giving CIP auditors too much “discretion” in how they conduct their audits. And there certainly have been a lot of stories – more during the CIP v1-v3 days than currently – of auditors who have made their own interpretations of some of the CIP requirements, and identified Potential Violations in cases where other auditors would have found compliance.

In other words, these people seem to be saying “We need to keep these auditors on a very tight leash, so they don’t have any discretion in how they audit the CIP requirements. If all the NERC CIP requirements are made non-prescriptive, the auditors will no longer be under anyone’s control, and will be able to issue PVs for anything they perceive to be a bad cyber security practice, whether or not it’s allowed in the requirement they’re auditing.”

I certainly do have a lot of sympathy for these people, since I know that at least some of their horror stories are real and have made their lives miserable, at least for a time. However, I think these people are basing their arguments on the wrong premises. In fact, auditors are now virtually required to use discretion; they no longer have an option not to do so. Let me elaborate.

As I discussed in a post last year (not too coincidentally, one which also referenced discussions about CIP-013), many if not most of the prescriptive CIP requirements include particular details – like “within 35 days” in CIP-007 R2 – that aren’t dictated by any principles of cyber security or electrical engineering, but are simply there because the drafters thought they needed some number in order to have a requirement that could be “unambiguously” audited. So there’s a strange logic here that goes something like this:

  1. We need prescriptive requirements so that auditors can’t use any discretion.
  2. Prescriptive requirements can only be audited if they have precise criteria, preferably numerical.
  3. Therefore, the requirements should include specific numbers (or other goals), even if there is no compelling reason why any particular number should be chosen.

To word this another way, a lot of the details in prescriptive requirements are there simply because the requirements are prescriptive, not because they are dictated by, say, cyber security practices or the laws of physics.[iii] Lots of money and effort are being expended on meeting these arbitrary deadlines and numerical targets, when entities could deploy their limited cyber security budgets much more cost-effectively if they were given a goal to achieve and then allowed to choose – and defend to an auditor – whatever means of achieving that goal makes the most sense in their environment. And this means that NERC CIP compliance professionals who won’t support non-prescriptive requirements are in effect saying they’d much prefer to put up with the additional headache and expense of meeting these arbitrary targets, even when they contribute very little toward increased cyber security, rather than take the chance that an auditor might decide to judge a non-prescriptive requirement in an arbitrary way.

However, these people are missing a very important point: The auditors are already imposing their own (or their region’s) judgment regarding the meaning of particular CIP requirements, because of the many gaps, ambiguities and outright contradictions in the requirements. And this trend has simply increased with CIP versions 5 and 6. For example, in order to determine whether an electronic device is a Cyber Asset, the entity needs to know what the word “Programmable” means in the Cyber Asset definition; that word is undefined and is still heavily debated. And to determine whether a Cyber Asset is a BES Cyber Asset, the entity needs to understand what “impact the BES” means in the BCA definition – this phrase is also undefined[iv]. So the only way for the auditor to determine whether the entity is compliant with CIP-002 R1 (the requirement in question here) is to exercise judgment on whether the entity’s own interpretation of these terms was reasonable or not (although if they’re lucky their NERC Region will have provided them some guidance on this and other similar questions. But the regions don’t usually publicize their guidance, and different regions sometimes provide different guidance, if they provide any at all).

So which would you rather have: a prescriptive requirement like CIP-002 R1 that seems to be very precise but in fact requires a lot of under-the-table judgment, or a non-prescriptive requirement like CIP-007 R3, which states that the entity must “Deploy method(s) to deter, detect, or prevent malicious code”? Granted, the latter leaves much room for the auditor to exercise their own judgment, but this is judgment about cyber security issues – in this case, whether the methods deployed by the entity to prevent malicious code are appropriate, given the cyber assets being protected. And if, in the auditor’s judgment, the entity hasn’t deployed the right methods, they won’t simply issue a PV (unless, of course, the entity hasn’t deployed any method at all, or has deployed a method that could never possibly meet the objective of the requirement). Rather, the auditor will most likely issue an Area of Concern, stating that – while the entity did technically meet the requirement since they did deploy a “method” to prevent malicious code – the entity should strongly consider another method which, in the auditor’s opinion, would be more effective.

But consider the auditor’s judgment in auditing CIP-002 R1, discussed earlier. In interpreting what “Programmable” and “impact the BES” mean, the auditor isn’t applying his or her cyber security knowledge or experience, but simply guessing what the words mean, based on the context of the definition, how it is used, etc. This is judgment about the semantics of legal terms. Yet I don’t think a legal semantics background is required of CIP auditors, while a cyber security background certainly is required. So in which area would you prefer to have the auditor exercise judgment: semantics or cyber security? In auditing the prescriptive CIP requirements, the auditor is not supposed to exercise cyber security judgment, but he or she is in fact often required to exercise semantic judgment. In auditing non-prescriptive requirements like CIP-007 R3, the auditor is only required to exercise cyber security judgment – and they all should be able to do that by now.

Of course, you may point out that some NERC CIP auditors aren’t the world’s best cyber security experts. Or they may know cyber but they’re weak on industry knowledge. Or both. I’m sure this is the case, but these problems can be addressed through measures like longer on-the-job training periods and mentorship. But requiring cyber security auditors to exercise judgment about cyber security matters is a much better bet than asking them to exercise judgment on ambiguous (or in some cases missing) definitions and requirements.

This is one reason why I think the non-prescriptive route is the only good one for NERC from now on, regarding development of new CIP standards and requirements. And NERC (with prodding from FERC) seems to have made this decision already, since no new prescriptive NERC standard or requirement has been developed in the past three years (CIP-014, CIP-013 and CIP-003 R2 Section 3 are all non-prescriptive. I believe these are the only new ones that have been developed – although CIP-003 R2 Section 3 was a rewrite of a prescriptive requirement part).

But just making sure all new requirements are non-prescriptive isn’t enough. Some of the existing CIP v5 requirements are highly prescriptive. CIP-007 R2 is my poster child for a prescriptive requirement, but CIP-010 R1 isn’t too far behind. A lot of the other requirements are prescriptive as well. The fact that they are prescriptive imposes a big burden on NERC entities (one entity told me that literally half of all the compliance documentation they produce in their control centers is for CIP-007 R2 alone). More importantly, I believe there can be no further meaningful extension of the existing CIP standards – to areas like virtualization or the cloud – until all of the existing requirements are made non-prescriptive. More on this in my next post.[v]


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] And in using this phrase, I’m not in any way referring to the famous “game” called the Prisoner’s Dilemma, which is much discussed in game theory.

[ii] “Results-based” would probably be a better phrase than “non-prescriptive”, since it emphasizes what cyber security requirements should do (require the entity to achieve a certain goal) rather than what they shouldn’t do (prescribe particular steps to achieve that goal). However, I hesitate to use the former phrase, since NERC uses it to describe the current CIP requirements, even though some of them (such as CIP-007 R2 and CIP-010 R1) are very far from being truly results-based.

[iii] Please note that I think almost all of the non-CIP NERC standards (i.e. the Operations and Planning, aka “693”, standards) probably do need to be prescriptive. For most of those standards, the specific goals and numerical targets are dictated in some way by the laws of physics. For some of those standards, a failure to do a certain thing at a certain time could very well lead to a cascading outage, as was the case in the 2003 Northeast blackout. But cyber security is statistical. Missing the availability of a security patch for one system for one month is not inevitably going to cause any problem, although not patching any systems for say a year or two will very possibly lead to a BES event. The entity needs to determine what makes the most sense in their environment and with their budget (since I don't know any NERC entity that has been given a blank check to cover anything they feel like spending on cyber security. They need to prioritize their spending so it yields the greatest possible security per dollar spent – as discussed in this post). Non-prescriptive requirements are the only way to do this.

[iv] Both of these issues are on the table for the CIP Revisions drafting team to address. By the time new definitions are drafted, balloted and commented on, redrafted, re-balloted and commented on, re-re-drafted and re-re-balloted and commented on, and then approved (or not) by FERC (assuming they have a quorum sometime in the not-too-distant future), it will probably be 3-4 years from now. In the meantime, auditors and entities need to muddle through, as they’ve been doing all along.

[v] I cringe a little when I say what will be in my “next post”, since more often than not some other topic comes up that seems more compelling, and the “next post” actually appears five or six posts later. In any case, a post on this topic is coming, whether it’s the next one or not.

Saturday, April 1, 2017

US to Deploy NERC CIP to Mexico!


In response to what was described as a “flood of cheap, poor-quality Mexican electrons crossing our border”, the US government has decided to “level the playing field” by requiring Mexican entities that generate or transmit electric power to comply with the NERC CIP Reliability Standards.

According to a White House spokesman, “The US electric power industry has suffered too long from unfair competition from Mexico. Millions and millions of workers have already lost their jobs because Mexican electrons are manufactured[i] by low-cost workers and then dumped over our border. We believe the best solution to this problem is to impose the same regulatory costs on the Mexican power industry that our industry faces.

“While there are a lot of regulations that apply in the US but not in Mexico, probably the fastest-growing is the NERC CIP standards for cyber security. One semi-reliable blogger has stated that there are at least a few large utilities – and many more smaller ones - that have spent in excess of 25 times as much on CIP version 5 compliance as they did on the previous version 3 (and don’t ask us what happened to version 4. I assume it was dropped because of unfair Mexican competition as well). Moreover, the amount spent will continue to grow as the scope of CIP is expanded. Enforcing NERC CIP in Mexico will bring their industry’s costs much closer to ours, so our wonderful American workers can once again compete.”

To learn more about this, I talked with Professor Sebastian Tombs of the University of Southern North Dakota at Velma, where he teaches part-time in the Extension Division. Professor Tombs said “I’ve been waiting for this to happen for a long time. For at least the last few years, it’s been clear to a number of us in the academic community that NERC CIP could be weaponized. It has sucked up a lot of resources at US and Canadian utilities, and it will clearly have the same effect in Mexico or any other country in which it might be deployed. I do think this is a drastic step to take, but I guess there are worse ones, such as invasion or use of nuclear weapons.”

I was also very fortunate to be granted a short interview with a senior White House official, who asked to remain anonymous. He said, “Don’t get me wrong. Mexicans are wonderful people. We’re not doing this because we don’t like Mexicans. But we have to protect our workers –they’re the world’s best, and they can make electrons like nobody else can. I would have preferred there were some sort of ‘extreme vetting’ procedure we could use for these Mexican electrons, so that only those that were the highest quality would be allowed into the US. But my people say – and believe me, I have the best minds working for me – that this would be very impractical. There are just too many electrons coming in.

“So we had to come up with another measure, and someone brought up NERC CIP. I of course had never heard of it, but when I learned about the huge burdens it’s placing on US utilities and independent generators, I thought it was only fair that the Mexicans should have to follow it, too. I realize it will cause a lot of suffering there, just as it has here. It would be nice if it could be reformed so that it were more cost-effective, as I’ve heard some blogger is advocating. Maybe I’ll get to NERC CIP reform once I’m finished with health care and tax reform, although I’m told that those two will seem like a piece of cake compared to CIP reform. But in the meantime, we need to make sure that our competitors don’t have an unfair advantage.”

At last report, the Mexican government was considering building a wall along the border to keep out the NERC auditors.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] Note from Tom: I would like to correct the White House and point out that electrons can be neither manufactured nor destroyed. I tried to call them, but I was unable to find a science advisor to talk to. I was told that my message sounded like "fake science".

Wednesday, March 29, 2017

The News from WECC, Part I: When a Patch isn’t called a Patch


I’m attending the semi-annual WECC CIPUG (CIP Users Group) meeting in Denver this week – just me and 350 of my closest friends (and attendance is down from some other CIPUG meetings, which have at least once reached 500 attendees). As has been the case previously at these meetings, I have come away with some interesting observations, so consider yourself warned. This is part I of what may be several posts.

In today’s meeting, Eric Weston, one of the WECC CIP auditors, did a good presentation about various pitfalls in CIP-007. I agreed with everything he said, except what he said about CIP-007 R2, Patch Management. The point of this part of his discussion (and you can find his slides, along with all of the other presentations, by going to this page and looking for the slides with his name on them) was that there can be some security patches that aren’t actually called that – they’re referred to as firmware upgrades, driver upgrades, etc.

OK, so far so good – I don’t have any problem with that.[i] However, what I was confused about was that he quoted from the CIP-007 Guidelines and Technical Basis (slide 6) where it says “The intent (my italics) of Requirement R2 is to require entities to know, track, and mitigate the known software vulnerabilities associated with their BES Cyber Assets.” Then he followed that up on slide 8 with the statement that

“Entities should include in their patch management program
• Hardware Drivers
• Device Firmware (that can be updated by end user)
• OS updates for devices that provide revisions to the OS to address vulnerabilities (Cisco IOS, Ruggedcom ROS, etc.)”

This raised a red flag for me, for the following reason: CIP-007 R2 itself doesn’t state an objective (or intent, if you will) to be attained, even though the guidance (which isn’t part of the requirement) does. And there is a good reason why R2 doesn’t state an objective: It is a prescriptive requirement. In fact, as the two of you who have been reading my posts closely for the past few months probably realize, CIP-007 R2 is my poster child for a prescriptive requirement (you should really hiss at this point, since the villain has just made his appearance onstage).

Prescriptive requirements don’t state an objective and let you figure out how to attain it, like non-prescriptive (or objective-based) requirements do. Instead, a prescriptive requirement sets out a specified set of steps that must be taken by all NERC entities that are subject to the requirement. While the steps are obviously meant to lead to attaining a particular objective (in this case, mitigating known software vulnerabilities), compliance with the requirement doesn’t entail attaining the objective. Rather, to comply with the requirement, the entity needs to follow the steps, period. So to speak truthfully, the objective of a prescriptive requirement is just to follow the steps listed in the requirement. If you haven’t done that, you haven’t complied with the requirement, even though you may have attained the final objective through some other means.

To illustrate what I just said, suppose you had decided that you could mitigate software vulnerabilities in your BCS, PACS, etc. simply by scanning them every month and letting the scanner tell you what vulnerabilities it found; then you would find the patches that addressed those vulnerabilities. I'll stipulate for now that this is as good a way – or perhaps even better – to attain the objective of software vulnerability mitigation as the set of steps prescribed in CIP-007 R2. But don't try to tell this to your auditor. You haven't complied with R2, because the objective of R2 is simply to follow the steps listed in the different requirement parts, nothing more, nothing less.

Contrast this with CIP-007 R3, Malicious Code Prevention. This is a truly non-prescriptive, objective-based requirement. Part 3.1 simply reads “Deploy method(s) to deter, detect, or prevent malicious code.” This is a true objective, and it clearly leaves the choice of methods up to the entity. Part 3.2 reads “Mitigate the threat of detected malicious code.” Again, this is the objective; the methods are up to you.

However, the main reason a red flag went up for me is that I was worried Eric was implying that, because the “objective” of R2 was software vulnerability mitigation, and because there are indisputably some software updates released by vendors that mitigate vulnerabilities but aren’t specifically called security patches, the entity is required to search through every update from its vendors to identify those that are really security patches in disguise.

If R2 were an objectives-based requirement like R3 is, you could certainly argue that one way to achieve that objective might be to make a point of applying not only vendor-identified security patches, but upgrades that contained security patches but weren’t called that. But R2 isn’t objectives-based. It specifically says that entities must look for every “cyber security patch” available from a vendor, and they must do this every 35 days. It would be a big stretch to say that this includes every upgrade that mitigates software vulnerabilities, whether or not it is called a security patch.

I brought this question up to Eric at the end of his presentation. He assured me that he wasn’t saying that entities were required to find all security patches, whether or not they’re called that; rather he only said that finding non-security patches (when possible) should be part of the entity’s patch management program. He further said that a Technical non-Compliance (TNC) finding might be assessed if an entity hadn’t done this.

This was the first time I’d heard of a TNC; evidently it’s a new way (handed down from NERC) of labeling what might have previously been called an “area of concern” – an instance where an entity has not followed good security practice, but which is not itself a violation of a requirement. I'm not sure I think "Technical non-Compliance" is a good way to describe this case (indeed, I believe the entity that only looked for security patches that were identified as such would be in complete compliance with CIP-007 R2. I would interpret "security patches" to mean something like "software snippets released by a vendor and identified as such, that are intended to mitigate particular software vulnerabilities"), but I'll save that for another post.

On hearing this, I lowered my mental red flag and sat down happy.

As it turns out, yesterday I had started working on a more comprehensive post on what I find to be an unfortunate (but understandable) suspicion that many NERC entities have of non-prescriptive requirements. The exchange I’ve just described fits in very well with that post, as you will discover – if you’re not already sick of the subject, and of me for constantly harping on it – when I put out that post, hopefully next week.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] You do need to keep in mind the difference between an actual security patch and a functionality upgrade that provides better security capabilities. An example of the latter would be an upgrade that extended the acceptable password length on a device. While this would of course be a great improvement for securing the device, it wouldn’t be a security patch, which is intended to mitigate a software vulnerability. Therefore, functionality upgrades aren't in scope for CIP-007 R2. For a good discussion of this topic, see this post.

Thursday, March 23, 2017

Wrapping up the Cloud


Before I started the series of four posts on the cloud and NERC CIP in February, I had assumed that the question of how NERC entities (with Medium or High impact assets) could safely navigate CIP compliance while utilizing the cloud within their ESPs (given that CIP is completely silent on the cloud) was like just about every CIP problem I’ve written about: a story with no conclusion.[i] In other words, almost every CIP problem I’ve looked at has come down to saying this: “The only answer to this question will be specific to your entity, and it will have to come from your Regional Entity. Even the answer you get from your RE will vary according to the person answering, as well as when they answered. Have a nice day.”

But I was pleasantly surprised as I wrote the first post (linked above), when an auditor confirmed to me that entities could safely utilize the cloud and remain CIP compliant. And this wasn’t some opinion of his, but was based on the actual wording of requirements in CIP versions 5 and 6. In other words, it does actually seem like the question of whether NERC entities can utilize the cloud and how they can utilize it – while remaining compliant with CIP – is subject to a definite answer.

In this post, I’d like to summarize what I said in the previous four posts in this series, as well as address a few points I hadn’t discussed, including whether and how Lows can use the cloud. Here is what I think are the “known knowns” (to quote the great scholar and philologist Donald Rumsfeld) about this subject:

  1. There are two fundamental questions about the cloud with regard to CIP: First, can an entity remain CIP compliant if some of their BES Cyber System Information (BCSI) is stored in the cloud? Second, what if an entity actually outsources some of their BES Cyber Systems to the cloud (e.g. outsourced SCADA)?
  2. Since the situation is very different for owners of High and Medium impact BCS vs Lows, I will separate the two discussions. First is the Highs and Mediums.
  3. For Highs and Mediums, the answer to the first fundamental question is yes – they can store BCSI in the cloud. Of course, they have to follow the CIP requirements that apply. In the first post, I listed four requirement parts that an auditor pointed out were applicable to the cloud. NERC entities that follow these four requirement parts should be found compliant.
  4. In the second post, I discussed encryption of BCSI data in the cloud. The point of that post is that yes, encryption is a good control to use in complying with the four requirement parts. However, encrypted data is still BCSI. The entity still has to document how they have complied with the four parts.
  5. In the third post, I pointed out that the need to provide evidence of compliance with the four requirement parts doesn’t go away just because the BCSI is in the cloud. In fact, it is possible that some entities who utilize the cloud will be found non-compliant just because they were unable to obtain adequate documentation from their cloud provider. In other words, you need to bring up CIP compliance before you sign on the dotted line with your provider; if you wait until afterwards, it may be too late.
  6. In the fourth post, I wrote that there is one way for an owner of Medium or High impact assets to store information about their BCS in the cloud but not have to take any special steps to comply with the four requirement parts as a result: that is to read the BCSI definition carefully and take steps to make sure what is stored in the cloud doesn’t meet that definition.
  7. Regarding the second fundamental question, whether High or Medium impact BCS themselves (not just information about them) can be stored in the cloud, the answer is: Not if you want to have any friends at NERC or your Regional Entity. Regardless of whether this is compliant or not, I know that both NERC and FERC are very much against “real-time operations in the cloud” – as I pointed out in this post in January (however, on the question of whether this is compliant, see the last three paragraphs below).

Now for three more topics I haven’t addressed yet:

First, what about Lows? Regarding the question of storing BCSI in the cloud, the answer is very easy: Lows don’t have BCSI, or more specifically they don’t have any requirements that apply to it. I assume that the auditors won’t object if an entity with Low assets tells them they’ve stored information about their Low BCS in the cloud – and if they do object, they can’t do anything about it.

Regarding whether Lows can outsource their Low BCS to the cloud, once again I think the answer is the same as for Highs and Mediums: Doing this will be met with a lot of skepticism on the part of your Regional Entity, whether or not it actually is compliant.

Second, what about certifications? If a cloud provider has a SOC 2 certification – or another cert like FedRAMP – is that good enough? The answer to this question is a lot like the one for encryption. If your cloud provider has a certification, this will certainly go a long way toward establishing that your BCSI is safe there. However, you still have to comply with the four requirement parts. It is possible that your Regional Entity will accept the certification as compliance evidence, but you certainly can’t just point to the cert and say “Here, that’s all you need to know.” You still have to document why the cert should be used as evidence of compliance with each of the four requirement parts.

My third new topic is, I’ll admit, a little arcane. But I do want to point out that a strict reading of CIP-002 R1 seems to indicate that the entity is free to outsource the BCS themselves to the cloud or to any other third party, completely consequence-free (ironically, this doesn’t apply to BCSI. So as far as CIP compliance goes, it’s fine if you outsource your entire SCADA system to al Qaeda, just as long as you don’t store any of the BCSI with them).

The reason for this is found in how CIP-002-5.1a R1 works. The requirement begins by providing a list of six types of “assets” that need to be considered – Control Centers, Transmission substations, etc. The meat of the requirement is found in R1.1 through R1.3. R1.1 tells the entity to identify any High impact BCS at each asset, meaning each asset of the six types just listed. R1.2 does the same for Medium BCS, while R1.3 says the entity needs to identify assets that contain Low impact BCS. In all three cases, the universe of assets is confined to the six types. A BCS not located at one of those assets isn’t in scope for CIP (even though it meets the definition of a BCS).

So what if an entity has outsourced their BCS to the cloud (or any other third party, for that matter)? It is a safe bet that the BCS aren’t located at one of the six asset types listed in R1 (I don’t know too many cloud providers that locate their data centers at generating plants, for example). What is their status for CIP? They’re completely out of scope. As I said, you can locate BCS anywhere you want, with whomever you want, but they are only in scope if they’re physically located at one of the six asset types.
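
Reduced to a sketch, the strict reading looks like this (the asset types are paraphrased from R1, and the example locations are hypothetical):

    # Illustrative sketch of the scoping logic in CIP-002-5.1a R1: a BCS is only in scope
    # if it is located at one of the six asset types listed in the requirement (paraphrased here).
    ASSET_TYPES_IN_R1 = {
        "Control Center",
        "Transmission station or substation",
        "Generation resource",
        "System critical to restoration (Blackstart Resource / Cranking Path)",
        "Special Protection System",
        "Distribution Provider Protection System",
    }

    def bcs_in_scope(located_at):
        return located_at in ASSET_TYPES_IN_R1

    bcs_in_scope("Control Center")               # True: in scope, then classified per Attachment 1
    bcs_in_scope("Cloud provider data center")   # False: meets the BCS definition, but out of scope on this reading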

However, I also refer you back to bullet point 7 above – especially the part about not making any friends at NERC or your Region. Don’t try this at home, kids!


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] My poster child for a CIP problem is the problem of what CIP-002 R1 (and Attachment 1) means. I first discovered that the wording was flat-out contradictory when I wrote this post in 2013. I have since written over 50 posts addressing this question from one angle or another – but of course there simply is no definitive answer to it.