Thursday, April 27, 2017

Why the Canary Died

In early January, I wrote a post comparing the two CIP Standards Drafting Teams currently operating: the CIP Modifications SDT and the Supply Chain Security SDT. In that post, I made the argument that one team (the Supply Chain team) is likely to achieve their goals fairly easily and on time, while the other team isn’t likely ever to fully achieve their goals.

It is now time to update that observation; I have good news and bad news. The good news is I no longer see such a disparity between the two teams. The bad news is I am now quite sure that both of these teams won’t fully achieve their goals.

In fact, for the Supply Chain team, it is just about certain they won’t achieve their goal, which was to draft and get approved – by the NERC ballot body, the NERC Board of Trustees, and finally by FERC – probably the first non-defense mandatory supply chain security standard in the history of the universe.[i]

As I explained in a recent post, it is now very possible that CIP-013, the draft Supply Chain Security standard (along with related changes in three other CIP standards), will never be approved by a NERC ballot, and that NERC will for the first time have to invoke the Section 321 process in order to have a CIP-013 standard ready for FERC’s approval in September[ii] (of course, it is almost 100% certain that FERC won’t have a quorum in September, since they don’t have one now and no new Commissioners have even been nominated yet. This means the standard will probably just sit on their desk for many months, if not for a year or more, once it does get delivered to FERC. But that’s another story). How did this situation come to pass?

For the Supply Chain SDT, the post just linked will give you a good summary of why they likely won’t achieve their goal. So the question is: What has changed since January, when I was very optimistic that CIP-013 would quickly cruise to approval (at least on the second ballot. Approval on the first ballot rarely happens in NERC, and has never happened for any of the CIP standards)?

I admit that when I wrote the January post – which was before the first draft of CIP-013 was even posted for comment, let alone balloted – I assumed that everyone would love a non-prescriptive standard. After all, non-prescriptive, objective-based requirements allow the entity to determine for itself what is the best means of achieving the stated objective. Why wouldn’t NERC entities want that?

The two examples I use most often to illustrate the difference between prescriptive and objective-based requirements are CIP-007 R2 and CIP-007 R3. R2 (patch management) is very prescriptive; I know that many entities are spending large amounts of resources in trying to comply with it (and even then they aren’t able to be 100% compliant, as this post shows). On the other hand, R3 (malware protection) is very non-prescriptive, simply saying that the entity should “Deploy method(s) to deter, detect, or prevent malicious code” (R3.1) and “Mitigate the threat of detected malicious code” (R3.2).[iii]

There has been a large amount of grumbling about R2 – and as I’ve said in another post, one entity told me that half of all the compliance documentation they’re producing in their control centers is tied to that one requirement – vs. almost no grumbling (that I’ve heard of) about R3. This is especially true given that R3 replaced the anti-malware requirement in CIP v3, which required entities to take a Technical Feasibility Exception for every switch, router, PLC etc. that wouldn’t take antivirus software. Given that NERC entities seem to prefer non-prescriptive requirements once they’ve started using them, I just assumed people would be quite happy with the first draft of CIP-013, where all of the requirements were much more like CIP-007 R3 than CIP-007 R2.

However, the first draft of CIP-013 went down to ignominious defeat, garnering a whopping 10% positive vote. While the comments revealed many reasons for the overwhelmingly negative vote, one important theme was that many CIP compliance professionals at NERC entities don’t trust the auditors to be able to audit non-prescriptive requirements; they want to have the “safety” of requirements for which there can be no question how they can be audited – i.e. prescriptive requirements. This is usually expressed with a statement to the effect that the auditors shouldn’t be allowed to exercise “discretion”.

I wrote about this phenomenon in another recent post, where I pointed out that CIP auditors are already required to use a lot of discretion in their CIP audits, especially since v5 became enforceable. In effect, I said (although I didn’t explicitly make this point) that it would require absolutely perfectly-written requirements (and definitions) for the auditors not to need to use any discretion at all. Furthermore, I said the auditors currently have to use discretion mostly about things they’re not trained on, such as how to interpret the meaning of ambiguous or missing terms in quasi-legal definitions. Wouldn’t it be better to remove the need for them to make these kinds of judgments (by making the requirements non-prescriptive) and require them to instead exercise judgment in an area they do understand, namely cybersecurity – by only having to audit non-prescriptive CIP requirements?

To be honest, that post essentially said to the people who are worried about auditor discretion “You’re not being rational. Get over it.” I now realize that it would have been much more productive to assume these people are rational (which they of course are, especially considering experiences some entities had with auditors under CIP v3) and ask what could be a rational basis for their fears.

After much thought, I’ve decided the answer to this question is that, even if all of the CIP requirements were made non-prescriptive (objectives-based) tomorrow, the overall compliance regime for CIP – and for the other NERC standards – would remain very prescriptive. So even if all the requirements in a new standard like CIP-013 are non-prescriptive, the fact that the overall compliance regime remains the same old prescriptive one will almost inevitably lead to fears – grounded or not – that the auditors will have a tendency to fall back on their old ways and audit non-prescriptive standards as if they were prescriptive ones. There needs to be a different compliance regime.

Before I describe the new compliance regime I’m proposing, let me describe how I view the current NERC compliance regime:

  1. NERC standards are designed to protect the Bulk Electric System from cascading outages caused by errors of omission or commission on the part of electric utilities and other participants in the BES.
  2. The requirements in those standards dictate certain actions which are either required or prohibited.
  3. If an entity is found to have violated one of those requirements – that is, they have either not performed a required act or performed a forbidden act – they are subject to fines and need to implement controls designed to prevent recurrence of the violation.[iv]

I want to say now that I am not criticizing this regime, which has been in place since NERC was founded in 1968.[v] In fact, I’ll certainly stipulate that it’s the only compliance regime that could successfully accomplish NERC’s original purpose. The problem is that cyber security is very different from the reliability issues addressed by the original NERC standards, as well as by what are today called the NERC 693 standards. No matter how many times NERC officials may swear on a stack of Bibles that the CIP standards are reliability standards just like all the others, that doesn’t change the fact: cyber security is very different from protection against cascading BES outages. The compliance regime for CIP can and should be different from the regime for the other NERC standards.[vi]

What should this new regime look like? In my last post, I listed six principles that any regime of mandatory cyber standards for critical infrastructure should follow. Although that post was nominally about regulation of natural gas pipelines, the principles apply equally here, so I will repeat them:

  1. The process being protected needs to be clearly defined (the Bulk Electric System, the interstate gas transmission system, a safe water supply for City X, etc).
  2. The compliance regime must be threat-based, meaning there needs to be a list of threats that each entity should address (or else demonstrate why a particular threat doesn’t apply to it).
  3. The list of threats needs to be regularly updated, as new threats emerge and perhaps some old ones become less important.
  4. While no particular steps will be prescribed to mitigate any threat, the entity will need to be able to show that what they have done to mitigate each threat has been effective[vii].
  5. Mitigations need to apply to all assets and cyber assets in the entity’s control, although the degree of mitigation required will depend on the risk that misuse or loss of the particular asset or cyber asset poses to the process being protected.
  6. It should be up to the entity to determine how it will prioritize its expenditures (expenditures of money and of time) on these threats, although it will need to document how it determined its prioritization.
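To make principles 2, 4 and 6 a little more concrete, here is a toy sketch of what a threat-based compliance record might look like in data form. Every threat name, risk score and field here is hypothetical, purely for illustration – nothing below comes from an actual standard:

```python
# Principle 2: a list of threats, each either addressed or justified as
# not applicable. Principle 4: evidence of effectiveness, not prescribed
# steps. Principle 6: entity-determined, documented prioritization.
threats = [
    {"name": "vendor remote access abuse", "applies": True,
     "risk": 8, "mitigation": "monitored intermediate system",
     "evidence": "quarterly access-log review"},
    {"name": "counterfeit hardware", "applies": False,
     "justification": "all hardware sourced direct from the OEM"},
]

# The entity prioritizes its own spending by its own risk ranking,
# but the ranking itself is documented and auditable.
prioritized = sorted(
    (t for t in threats if t["applies"]),
    key=lambda t: t["risk"],
    reverse=True,
)
for t in prioritized:
    print(t["name"], "->", t["mitigation"])
```

The point of the sketch is that an auditor reviewing this record would be judging whether the evidence shows the mitigation was effective – a cybersecurity judgment – rather than checking boxes against prescribed steps.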

I hope you can see why the fact that the three principles of the current NERC regime still apply to enforcement of all CIP standards, including CIP-013, means that NERC entities are rightly going to be suspicious of any standard, no matter how non-prescriptive its wording, that will be enforced under that regime. (And if you can’t, well, it’s late at night now and I’m tired from writing this post – almost as tired as you are from reading it! I’m sure I’ll come back to this topic in future posts, and at least in an upcoming book, which – although I have yet to put a single word on paper for it – has already been at least half written in posts like this one.)

And this, folks, is the canary in the coal mine, which has just keeled over dead: It’s clear (to me, anyway) that no new compliance area (like supply chain or virtualization) can be incorporated into CIP unless this is done with non-prescriptive requirements. Yet it is also now clear to me that no new non-prescriptive standards (or requirements) will ever be freely accepted by the NERC membership until the CIP compliance regime itself has changed. I will be watching to see what happens.

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] Well, at least of the solar system. With so many possibly habitable planets outside of the solar system, it’s almost inevitable that on at least one of them such a standard has already been developed, although I will guess that on at least one other planet the effort to develop such a standard has collapsed in confusion and recrimination. And we’re not even talking about the uncountable number of alternative universes, where it’s almost inevitable there is not only at least one planet with a continent called North America and an organization called the North American Electric Reliability Corporation, but further that this organization has a CEO named Gerry Cauley. Such is the power of having an infinite number of instances, with no possible means of verification for your assertions.

[ii] As the post just referenced points out, there is the possibility that a large push by a major industry trade organization will put the next (and last) ballot over the top, despite the fact that so many of the yes votes will be by entities that strongly opposed the first draft and still have substantial misgivings about the second. Since this would technically count as a ballot victory for CIP-013, some might say that this constituted the SDT’s fulfillment of their mission. But I think the mission of the SDT is to develop a standard that will win the outright approval of 68% of the ballot body, not a grudging approval based on the idea of “Well, it’s either this or have the Standards Committee write the standard, and go against the strong wishes of our trade organization and our CEO”.

[iii] There is also a third requirement part (R3.3) that states the entity should “For those methods identified in Part 3.1 that use signatures or patterns, have a process for the update of the signatures or patterns.” Even this part is non-prescriptive, of course, since it also simply states an objective and requires the entity to have a process to address the objective.

[iv] You may wonder where Risk-Based CMEP (formerly known as the Reliability Assurance Initiative) fits in with the three principles of NERC compliance I’ve just listed. I haven’t thought about this too closely yet, but I will point out that the risks addressed by RAI are compliance risks – i.e. the risk that an entity will violate a NERC requirement. You can think of RAI as kind of an overlay to the three principles of the NERC compliance regime; it doesn’t modify them, but it’s designed to lower the risk of violations, and perhaps of cascading outages caused by a NERC entity’s doing or not doing something. It doesn’t at all change the prescriptive nature of the current NERC CIP compliance regime.

[v] Of course, NERC standards weren’t mandatory until 2007, but I believe the principles of compliance were the same before and after that date.

[vi] The question may now occur to you: Is Tom advocating that the CIP standards be removed from NERC’s purview? I’m not necessarily saying that, although I reserve the right to change my mind on that point in the future. It seems to me that NERC has already put in place much of what is needed to have a successful program for regulating the cyber security of the BES, not the least of which is a large team of auditors – some of whom are very competent – that understand both the cyber and electrical issues involved with auditing compliance with mandatory cyber regulation of the BES. What isn’t in place now is a compliance regime very different from the current CMEP – one described by the six principles listed in this post. I believe that technically speaking this regime could be built within the NERC organizational framework – you might have to have two CMEPs, for example. However, I’m not sure that politically this will be possible within the NERC organization itself; there are few organizations that are flexible enough to undergo such a wrenching change. I certainly hope NERC proves to be the exception to this rule, since I believe there is no alternative to implementing the changes to the CIP compliance regime that I’m proposing. But if NERC can’t do it, some other organization will have to.

[vii] My eyes were opened at the RF CIP workshop this past week, when Lew Folkerth pointed out that the key to being able to audit non-prescriptive requirements is for the entity to have to demonstrate that the measures they took were effective. I will do a post on this soon.

Sunday, April 23, 2017

Cyber Regulation for Natural Gas Pipelines (and for the BES)

Probably the next most critical infrastructure after electric power in North America is natural gas. As with power, there is a nationwide gas transmission system whose loss would affect homes and businesses almost as much as an electrical outage would. And with the increasing dependence on natural gas for generation, a widespread cyber attack on gas pipelines would be an attack on the electric power system as well.

There is currently a fairly perfunctory cyber security regulation for gas pipelines, promulgated by the TSA (yes, the same people who make you take off your shoes at the airport!), but I know a lot of people think much more is needed. The question then becomes (and I’ve been asked this several times) “Would NERC CIP be a good model for gas pipeline regulations?”

Of course, I think most people in the power industry would just emit a hearty laugh if asked this question, and I’d have to agree with them – it’s hard to imagine inflicting the current CIP compliance regime on any industry except perhaps an enemy’s. But the question then becomes: “What would be a good model for cyber regulation for gas pipelines?”

When recently asked this question, I gave it a little thought, then realized the answer was quite simple: whatever would be the right solution for the electric power industry would be the right solution for any critical infrastructure. What works for one critical infrastructure should work for all of them.

But what would work for the power industry? If you’ve been reading this blog for a while, you’ve seen this question addressed tangentially in various ways, but never set forth in one place as a specific program. As I’ve mentioned previously, I am now discussing writing a book with a couple of co-authors, which will address this question in detail. But the main reason I haven’t attempted a full frontal assault on this question in my blog is that I haven’t felt I could succinctly articulate an answer.

Until now. I do believe I can now articulate what a critical infrastructure cyber security regulation should look like in six sentences (OK, maybe it’s seven). I will list them here without justification and without detail on how they might be implemented; for that, you’ll have to wait for the book, although I’m sure I’ll sketch out a lot of the details in future posts. Of course, I’d welcome any comments or questions about what I say below – I’ll try to answer using whatever I know at the current time; I’d also like to hear your opinions on whether this sounds like the right approach or not.

In my humble opinion, a workable cyber security compliance regime for any critical infrastructure sector needs to be based on six principles:

  1. The process being protected needs to be clearly defined (the Bulk Electric System, the interstate gas transmission system, a safe water supply for City X, etc).
  2. The compliance regime must be threat-based, meaning there needs to be a list of threats that each entity should address (or else demonstrate why a particular threat doesn’t apply to it).
  3. The list of threats needs to be regularly updated, as new threats emerge and perhaps some old ones become less important.
  4. While no particular steps will be prescribed to mitigate any threat, the entity will need to be able to show that what they have done to mitigate each threat has been effective[i].
  5. Mitigations need to apply to all assets and cyber assets in the entity’s control, although the degree of mitigation required will depend on the risk that misuse or loss of the particular asset or cyber asset poses to the process being protected.
  6. It should be up to the entity to determine how it will prioritize its expenditures (expenditures of money and of time) on these threats, although it will need to document how it determined its prioritization.

Of course, I’m not going to say now that I won’t ever add to or subtract from this list of principles. But I think they’re a good start.

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] My eyes were opened at the RF CIP workshop this past week, when Lew Folkerth pointed out that the key to being able to audit non-prescriptive requirements is for the entity to have to demonstrate that the measures they took were effective. I will do a post on this soon.

Friday, April 21, 2017

The News from RF: NERC Readies the Nuclear Option

I attended RF’s spring CIP workshop in Baltimore this week. I have been to a lot of regional CIP meetings over the past 8+ years, but this was the best I’ve seen so far. I kept thinking there would be at least one boring presentation where I could get some email done, but it never happened all day! I will have a number of posts on things I learned during that day (probably not consecutively). This is the first, and perhaps the most important.

One of the presentations was by Corey Sellers, the Chair of the Supply Chain Security Standards Drafting Team. Corey gave a very good rundown on a) the objections that were raised to the first draft of CIP-013, which was roundly voted down by the NERC ballot body; and b) the changes that the drafting team is working on (with emphasis on the present progressive tense – he wasn’t kidding when he said changes were being made as he spoke; in fact, new versions were sent out to the Plus List while the meeting was still in session), which should be posted for a new ballot in a few weeks.

It was quite interesting to see all the changes that are being made – to CIP-013 as well as to CIP-003, CIP-005 and CIP-010. I can’t summarize them here, but I will say they are quite ambitious and have a lot of moving parts. But my concern wasn’t so much the substance of the new changes, but the timeline for approval.

At the end of his presentation (which included taking a number of questions from the room – Corey certainly wasn’t trying to hide anything!), I asked him the question I was most concerned about. Here is the situation that prompted my question:

  1. Obviously, the first draft of CIP-013 failed miserably, receiving only about 10% positive votes.
  2. The next posting will need 68% to pass (I wasn’t sure about the exact number, but Corey readily supplied it to me). In the best of circumstances, it would be very difficult to go from 10% to 68% in one ballot. And frankly, with the large number of changes, and especially the fact that changes are being made to three existing standards as well as to CIP-013 (plus the fact that two new terms are used – “vendor” and “machine-to-machine remote access” – for which there will be no vote on a definition), it seems especially unlikely that the next ballot will pass. This means it will very likely take at least one additional ballot before it passes (both CIP v5 and v6 took at least three ballots to pass, and in both cases I believe there was a fourth ballot to clean up the wording).
  3. At the same time, the deadline remains September to a) have the revised standards approved by the ballot body; b) have the NERC Board of Trustees approve them; and finally c) file them with FERC. And since the BoT’s last meeting before September is in mid-August, the changes need to be approved by the ballot body before then.
  4. If you do the math, it’s quickly evident there’s no chance to have a third ballot, should the second one fail. So it appears very likely NERC will miss FERC’s deadline.
  5. But despite all of this, Corey said that NERC had assured him it would not miss the deadline, and FERC would have their supply chain security standard (and related changes in other standards) in September.
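To make the arithmetic in point 4 concrete, here is a toy calendar sketch. Every date and duration in it is an assumption of mine, not a figure from the NERC Rules of Procedure; the point is only that a redraft, a comment period, and a third ballot don’t fit between a summer ballot close and a mid-August Board meeting:

```python
from datetime import date, timedelta

# All dates and durations below are illustrative assumptions.
second_ballot_closes = date(2017, 7, 1)    # assumed close of the second ballot
redraft_and_comment = timedelta(days=45)   # assumed redraft + comment posting
ballot_window = timedelta(days=10)         # assumed length of the ballot itself
bot_cutoff = date(2017, 8, 15)             # BoT's last pre-deadline meeting, mid-August

third_ballot_closes = second_ballot_closes + redraft_and_comment + ballot_window
days_late = (third_ballot_closes - bot_cutoff).days
print(third_ballot_closes, "->", days_late, "days past the BoT cutoff")
```

Tweak the assumed durations however you like; unless the second ballot closes implausibly early, a hypothetical third ballot lands after the Board’s last pre-deadline meeting.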

Given this, the question would naturally be “Why do you seem to believe that NERC can make the deadline, given that it’s very likely this won’t have been approved by the NERC ballot body by then?”  However, I actually asked Corey a different question: Did he know about Section 321 of the NERC Rules of Procedure? He readily admitted he knew that section quite well, and had been discussing it a lot lately; I was not surprised to hear that, because I heard last week (from a very reliable source), that NERC was seriously discussing invoking Section 321 for the first time – for CIP-013 and the associated changes. Corey said he had deliberately not brought up Section 321 in his presentation, but he was glad I had.

What is this mysterious Section 321? You can read it yourself, but it essentially allows the NERC BoT, in the event that the normal balloting process has not yet produced a draft standard(s) that, in the Board’s opinion, will satisfy an order from a regulatory body (which means FERC, here in the US), to have the Standards Committee draft one that will satisfy the order.

The wording of Section 321 is much more oriented to the case where a standard has been approved by the ballot body, yet is inadequate for some reason; in this case, we’re talking about a deadline being missed. However, I have no doubt – and Corey does not seem to either – that 321 could be made to apply to this case. I doubt the BoT will have a big problem with the wording of the new standard and the changes to existing standards; the problem is that there isn’t enough time to go through the normal approval process before FERC’s deadline.

So the bottom line is: This next ballot will very likely be the last one for the supply chain standard. If it passes, the current wording (with some legal clean-up) will be approved and submitted to the Board in August. If the ballot fails, then it will be up to the BoT and the Standards Committee to determine what the wording should be – and whatever they decide on will still be submitted to the Board in August. Of course, since these committees are both made up of industry members, it’s not likely that what they ultimately approve will be hugely different from the second draft. In fact, I imagine they might also consider the comments that are made in the second round of balloting, and use those to improve on that draft. So I’m not expecting the final version of CIP-013 to be some Frankenstein freak that nobody will like.

But this isn’t the end of the story. I learned from one of the participants at the meeting that the next ballot is likely to pass after all, given the very strong support being provided by a major industry group. If so, the Section 321 “nuclear option” might be put back on the shelf for another day. But whether or not 321 is invoked, it’s pretty clear to me that the normal balloting process is being short-circuited – in the one case by 321 being invoked, in the other by the substantial pressure this industry group is exerting on their members to vote yes, in spite of lingering misgivings they may have. In other words, it won’t be a completely free-will approval.

The real problem here is the fact that FERC only gave NERC a year to develop and approve the new standard. This was definitely not enough time, as was eloquently expressed by Commissioner (and now acting Chairman) LaFleur in her powerful dissenting opinion – and by me in my post on Order 829 (which includes a summary of Commissioner LaFleur’s argument).

I suggested at the time – both in my blog and at, I believe, two NERC CIPC meetings – that NERC should petition FERC to extend the deadline, to no avail. I suggested this to Corey as well, but he assured me that NERC wasn’t going to do that (it’s not clear whether FERC could approve a deadline extension at this point, since they don’t have a quorum. But they do have some powers to take action, and given that Cheryl LaFleur is now the acting Chairman, I would think she would be inclined to grant this if at all possible).

But it seems the decision has been made not to even ask for an extension. This is all quite unfortunate, of course. Does anyone doubt that another year, or even half a year, of debate and modification of CIP-013 would result in a much better standard? Or to word this differently, is there anybody who seriously believes that the SDT has such amazing listening and writing skills that they will be able to come up with exactly what is needed to satisfy everybody in their upcoming draft, and most of the 90% who voted no on the first draft will now be happy as clams with every word they’re voting on in the second draft? Please raise your hand if so….Yes, I didn’t think there would be anybody.

In any case, we’ll get what we’ll get. It will certainly be decent, but it’s unfortunate it can’t be really good. I see this as the symptom of a bigger problem – the canary in the coal mine that just died, in the process revealing a serious condition that threatens the miners themselves. More on this in another post coming soon to a blog near you.

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Wednesday, April 19, 2017

Lew Folkerth’s “Zones of Authority”

For what seems like the 50th time, I’m starting a post by referring to an article by Lew Folkerth of Reliability First, found in the March-April issue of RF’s newsletter (as always, under the heading “The Lighthouse”). As has been the case on every previous occasion, Lew has put his finger on an important issue in the current NERC CIP compliance regime.

The article[i] specifically addresses the question whether the use of virtualization is permitted by the current CIP standards, and if so what is permitted and how it should be implemented in a compliant manner. Lew answers the first question by saying virtualization is neither permitted nor prohibited. This is of course not earth-shattering news, but Lew quickly moves on to something that is quite interesting: the concept (which he invented, at least as far as CIP goes) of “zones of authority”.

Lew brings this concept up for the following reason: Even though there are no actual CIP requirements that apply to virtualized environments, this hasn’t stopped NERC and the regions from coming up with a few “guiding principles” for virtualization; probably the most important of these is that there should not be “mixed-trust” virtualized environments. Briefly, this implies that a host server for multiple VMs shouldn’t host both ESP and non-ESP devices, and a switch with multiple VLANs shouldn’t host both ESP and non-ESP networks.
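The “no mixed trust” principle is simple enough to state mechanically. Here is a toy sketch that flags any switch whose VLANs span both ESP and non-ESP networks – the data model and all names are hypothetical, purely for illustration:

```python
def mixed_trust_switches(switches):
    """Return names of switches hosting both ESP and non-ESP VLANs."""
    flagged = []
    for name, vlans in switches.items():
        # Collect the distinct trust levels present on this switch.
        trust_levels = {"ESP" if v["esp"] else "non-ESP" for v in vlans}
        if len(trust_levels) > 1:
            flagged.append(name)
    return flagged

# A hypothetical inventory: one all-ESP switch, one mixed-trust switch.
inventory = {
    "sw-control-1": [{"vlan": 10, "esp": True}, {"vlan": 20, "esp": True}],
    "sw-mixed-1":   [{"vlan": 30, "esp": True}, {"vlan": 99, "esp": False}],
}

print(mixed_trust_switches(inventory))
```

Of course, the whole dispute Lew describes is about whether such a flag reflects a real security problem, or only the limits of what an auditor is permitted to examine.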

Lew says (or implies) that auditors are having differences of opinion with entities on the security of mixed-trust switches. It seems these entities have switches that implement both ESP and non-ESP VLANs. When the auditors tell them this isn’t secure, the entities point out that the non-ESP VLANs have just as good security as the ESPs do[ii]. So why aren’t they safe?

Lew doesn’t dispute that it is very likely that, from a pure security point of view, these entities are right: mixed-trust switches are just as safe as switches that only implement ESPs. But this is where zones of authority come in. Lew points out that the auditor can only look at CIP assets, like ESPs. Anything else, such as a VLAN that isn’t an ESP, is completely out of his or her purview; the auditor just has to assume these are completely insecure, and thus shouldn’t be found on the same switch as ESP VLANs.

Of course, Lew is completely right about this: The NERC CIP standards only take account of BES Cyber Systems and their associated ESPs, PCAs, EACMS, etc. Anything that isn’t one of these is completely out of scope. This is a double-edged sword, of course. On the one hand, the auditors can’t look at what isn’t in an ESP, which limits the kinds of evidence an entity can show them. On the other hand, the entity has no compliance obligations for something that isn’t in an ESP or that protects ESP devices, like an EACMS or PACS. My guess is most entities, if told they could force CIP auditors to consider security controls they have implemented for non-ESP networks, but only in return for having at least some of the cyber assets on those non-ESP networks fall into scope for CIP, would say “Thanks but no thanks. We’ll leave things as they are.”

And this is exactly the issue: It would make a lot of sense from a security point of view to have all cyber assets and networks that are in the entity’s control be in scope for CIP[iii]. Just look at the Ukraine attacks, which started with phishing attacks on the IT networks. It was only after completely taking over those networks that the attackers were able to find a way into the relays that were their ultimate targets, even though they were on the OT network.[iv] But the idea of having all or even just a few of their IT cyber assets suddenly become BES Cyber Systems would cause a lot of CIP professionals to bash their head against the nearest hard object, or seriously consider making a career change to fast food service.

So are we stuck here? While it would certainly be a good idea to have all networks under the NERC entity’s control be in scope for CIP, I would agree with my hypothetical CIP professional that the burden of making these networks ESPs, and making even a few of their cyber assets be BES Cyber Systems, would be too great to justify the benefit: the enhanced ability to demonstrate their security posture to an auditor.

But we’re not stuck here. Were we to rewrite the CIP requirements so that they weren’t prescriptive (i.e., so they were objectives-based), and enclose them in an appropriate “wrapper” so that the entity was simply required to address a certain set of threats (and this list of threats were regularly updated, outside of the standards development process), I think we could safely expand CIP to include all networks under an entity’s control. But there would still have to be an appropriate classification of cyber assets in scope, depending on how “near” they are to the BES. Those that are directly involved in control of the BES (as BCS are in the current standards) would have to have stronger controls than those whose involvement is only indirect (such as the IT network assets at the Ukrainian utilities that were attacked).

Of course, I’ve made this point about the need to rewrite CIP previously. And I will continue to make it, including in my next two posts.

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] I recommend you read the article yourself, of course, even though I will summarize parts of it here. Don’t worry – it’s just a page. If I had written the article, it would have at least been the length of one chapter in a book, if not an entire book.

[ii] I’m speculating here. I’m not sure if auditors are actually running into this problem already, or if Lew just anticipates that they will. Either way, Lew’s discussion provides an excellent window into a real problem.

[iii] I was recently introduced to another CIP blogger, Larry G, who publishes a blog called NERD CIP; while I haven’t yet had time to dig deeply into his posts, they look quite insightful. In particular, he recently wrote a post that starts off with a consideration of the same Lew Folkerth column that this post started with (although he has a different take on it). In that post, Larry points to the PCI definition of Trusted Network, which reads “Network of an organization that is within the organization’s ability to control or manage.” Since all Trusted Networks are potentially in scope for PCI, they all need to be considered – either because they themselves hold payment card data or because their compromise could lead to compromise of the networks that do hold that data. In fact, I will venture a guess that CIP is the only set of cyber security standards that limits itself to a subset of the networks actually controlled by the entity.

[iv] And please don’t tell me that it was only because the three Ukrainian utilities had poor separation between their IT and OT networks (lack of two-factor authentication, lack of an intermediate server, etc.) that the latter were penetrated. When attackers have complete control of the IT network for months, they will, one way or another, find their way into the OT network, no matter how good the security may be. How about this: One day all of the relay engineers get an email from their boss (while he is on vacation) saying that, due to some need for emergency maintenance, they need to open all of the circuit breakers at a certain time that day. Some might not follow this instruction; others might. In any case, there would be a coordinated outage. If this happened at a number of utilities, it could cause a serious BES problem.

Monday, April 17, 2017

More on CIP and the Cloud

In this recent post, I came to the conclusion – led there by an auditor – that NERC entities can entrust BES Cyber System Information (for Medium and High impact BES Cyber Systems) to cloud providers, as long as they comply with four requirement parts: CIP-011 R1.2, CIP-004 R4.1.3, CIP-004 R4.4 and CIP-004 R5.3.

However, it seems I may have missed two requirement parts. At the WECC CIP User Group meeting in Denver recently, auditor Morgan King did a very good presentation on CIP and virtualization (for the slides, go to this page and click on the presentation with his name on it). While the virtualization discussion was very good, he also brought up the cloud (since the two technologies go hand in hand). On slides 23 and 24, he lists six requirement parts that apply to BCSI in the cloud. Besides the four I just listed, he also includes CIP-004 R2.1.5 and CIP-011 R2.1. So I recommend you make sure you’re compliant with all six of these.

Since all six of these requirement parts will require that the cloud provider have certain policies and procedures in place – and that the provider maintain the same level of documentation that is required of the entity itself – I know some readers will object that this places too big a burden on the entity: that it will in effect have to audit the cloud provider. If you have this objection, I recommend you look at this post, which points out that a third-party audit like SOC 2 could well be considered sufficient evidence of compliance.

And I also recommend this post, which points out that encrypting the data in the cloud is a good mitigation measure for compliance with some of the six requirement parts, but it doesn’t remove the obligation to comply with those parts. You will still have to provide evidence that you and your cloud provider are complying with each part.

Another thing to keep in mind is that your information protection program under CIP-011 R1.2 requires measures to address BCSI in transit, not just at rest. This means you might need to encrypt the data before it goes to the cloud provider, depending on how you send it.
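To make the idea concrete, here is a toy sketch of what client-side encryption before upload looks like. To keep it standard-library-only it uses a one-time pad, which is emphatically not what you’d deploy in production – a vetted tool (GPG, or AES via a maintained crypto library) plus a managed key vault is the real answer, and the file contents below are purely hypothetical. The point it illustrates is the one that matters for CIP-011: the cloud provider only ever receives ciphertext, while the key never leaves the entity’s network.

```python
# Toy illustration ONLY: encrypt BCSI on the entity's own network before it
# goes to a cloud provider. A one-time pad keeps this sketch stdlib-only;
# in practice use a vetted tool (e.g. GPG or AES) and a managed key vault.
import secrets

def encrypt_for_cloud(plaintext: bytes) -> tuple[bytes, bytes]:
    """Return (ciphertext, key). Only the ciphertext is sent to the
    provider; the key stays on the entity's network."""
    key = secrets.token_bytes(len(plaintext))            # random pad, same length
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt_from_cloud(ciphertext: bytes, key: bytes) -> bytes:
    """Recover the plaintext after retrieving the ciphertext from the cloud."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

bcsi = b"relay settings for a hypothetical substation"   # stand-in for real BCSI
blob, key = encrypt_for_cloud(bcsi)
assert blob != bcsi                                      # provider never sees plaintext
assert decrypt_from_cloud(blob, key) == bcsi             # entity can still recover it
```

Note that even in this toy version, encryption mitigates exposure both in transit and at rest at the provider – but, as the post linked above points out, it doesn’t by itself remove the obligation to comply with the six requirement parts.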

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

Wednesday, April 12, 2017

A Great Supply Chain Security Guide

Early this year, I was invited to speak at the EastWest Institute’s Global Cybersecurity Cooperation Summit in Berkeley, CA in March – specifically, at the meeting of the EWI’s Breakthrough Group on Increasing the Global Availability and Use of Secure ICT[i] Products and Services. I had certainly heard of the EastWest Institute previously, but only in the context of their weighing in on ponderous global issues like war and peace. I didn’t realize that cybersecurity would rank as a concern worthy of their attention, but this is obviously the case, and has been for many years.

I believe the reason I was invited to speak at this meeting was the various posts I have written on FERC Order 829 and the subsequent development of CIP-013, the new Supply Chain Security standard (of course, that standard is still very much in development. Moreover, three of the existing CIP standards are now being modified to include requirements that commenters on the first draft of CIP-013 felt would be better placed in those standards).

Last year the Breakthrough Group published an ICT Buyers Security Guide. I read the guide and discussed it with the members of the group before the meeting. I am quite impressed with this document, for two important reasons. First, it is concise (the main discussion takes up about 22 pages). As such, it contrasts vividly with NIST 800-161, NIST’s supply chain security guide. Like most NIST publications, 800-161 tries to exhaustively (and exhaustingly!) cover every possible aspect of its subject. Unfortunately, the result is that non-governmental organizations, which aren’t required to follow it, must put in a considerable amount of effort just to decide which controls they should focus on (and, for each control, how they should address it).

Second, the guide is very practical. Perhaps because it is concise, it is focused on providing guidelines that organizations can immediately put into practice. These guidelines are mostly in the form of 25 questions that can be asked of suppliers, like “Are third-party inputs evaluated for security prior to selection and tracked/validated upon entering the supply chain?” and “How are products and services continually tested for security vulnerabilities?”[ii]

If you’re wondering how this Guide might fit in with CIP-013, I would think some or all of these questions might be incorporated into your entity’s process for compliance with CIP-013 R1.1.1 (at least, as that requirement part stood in the first draft, posted in January).

So I recommend that you read this document, and consider how it might help your organization achieve two goals: a) Improve your supply chain security posture; and b) Comply with CIP-013.

I also want to point out that the EWI supply chain security group is now working on a major revision to the Guide. If you might be interested in participating in that process (which includes phone conferences and in-person meetings), let me know and I’ll put you in touch with the leader of the group.

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] ICT stands for Information and Communications Technology.

[ii] Of course, it’s up to the organization to determine which questions to ask of which suppliers. One big difference between this Guide and both CIP-013 and NIST 800-161 is that this guide focuses entirely on what suppliers do and don’t do. It doesn’t address other areas that are under the entity’s control, such as secure deployment and vendor remote access control. Of course, some might argue that these topics aren’t really part of supply chain security. And they probably wouldn’t be in CIP-013 either, except for the fact that FERC ordered they be included.

Wednesday, April 5, 2017

The Prisoner’s Dilemma

I remember reading a few news stories about long-time prisoners who are released and find that life on the outside is much harder for them than it was in prison. I don’t think many of these people actually ask to be re-admitted, but it certainly stands to reason that someone who has lived almost his whole adult life behind bars might view the complete lack of lifestyle choices there as in many ways preferable to the need to make the myriad choices that all of us who live outside of prison face every day.

This is perhaps a somewhat overdrawn analogy, but I have more than once thought of this “prisoner’s dilemma”[i] as akin to that of NERC compliance professionals who object to non-prescriptive, results-based standards for cyber security (such as the CIP-013 standard for supply chain security) on the grounds that they will not work in practice, compared with good old-fashioned prescriptive standards.

Why do some NERC CIP compliance professionals feel that non-prescriptive requirements won’t work? If you haven’t been in the trenches of CIP compliance for the past three or four years at least, you may not understand why anyone would prefer prescriptive requirements, at least when it comes to CIP. For example, who would prefer CIP-007 R2.2 – which mandates that every 35 days the entity must, for every device in its ESPs, check with the patch source to determine whether a new patch has been released – to CIP-007 R3.1, which simply directs the entity to “Deploy method(s) to deter, detect, or prevent malicious code”?

Well, it seems a lot of people do in effect prefer R2.2. The fact is that many NERC compliance professionals – in particular ones who have had to deal with CIP for a number of years – shrink back in horror from the idea that all of the CIP requirements should be like CIP-007 R3.1, not R2.2. I was reminded of this very forcefully when I recently attended a meeting of the team drafting CIP-013, the new supply chain security standard, in San Antonio.

The team met just after their first draft of that standard was soundly rejected by the NERC ballot body. All of the requirements in that draft are non-prescriptive[ii], and some commenters indicated they were very worried about giving CIP auditors too much “discretion” in how they conduct their audits. And there certainly have been a lot of stories – more during the CIP v1-v3 days than currently – of auditors who have made their own interpretations of some of the CIP requirements, and identified Potential Violations in cases where other auditors would have found compliance.

In other words, these people seem to be saying “We need to keep these auditors on a very tight leash, so they don’t have any discretion in how they audit the CIP requirements. If all the NERC CIP requirements are made non-prescriptive, the auditors will no longer be under anyone’s control, and will be able to issue PVs for anything they perceive to be a bad cyber security practice, whether or not it’s allowed in the requirement they’re auditing.”

I certainly do have a lot of sympathy for these people, since I know that at least some of their horror stories are real and have made their lives miserable, at least for a time. However, I think these people are basing their arguments on the wrong premises. In fact, auditors are now virtually required to use discretion; they no longer have an option not to do so. Let me elaborate.

As I discussed in a post last year (not too coincidentally, one which also referenced discussions about CIP-013), many if not most of the prescriptive CIP requirements include particular details – like “within 35 days” in CIP-007 R2 – that aren’t dictated by any principles of cyber security or electrical engineering, but are simply there because the drafters thought they needed some number in order to have a requirement that could be “unambiguously” audited. So there’s a strange logic here that goes something like this:

  1. We need prescriptive requirements so that auditors can’t use any discretion.
  2. Prescriptive requirements can only be audited if they have precise criteria, preferably numerical.
  3. Therefore, the requirements should include specific numbers (or other goals), even if there is no compelling reason why any particular number should be chosen.

To word this another way, a lot of the details in prescriptive requirements are there simply because the requirements are prescriptive, not because they are dictated by say cyber security practices or the laws of physics.[iii] Lots of money and effort are being expended on meeting these arbitrary deadlines and numerical targets, when entities could deploy their limited cyber security budgets much more cost-effectively if they were given a goal to achieve and then allowed to choose – and defend to an auditor – whatever means of achieving that goal makes the most sense in their environment. And this means that NERC CIP compliance professionals who won’t support non-prescriptive requirements are in effect saying they’d much prefer to put up with the additional headache and expense of meeting these arbitrary targets, even when they contribute very little toward increased cyber security, rather than take the chance that an auditor might decide to judge a non-prescriptive requirement in an arbitrary way.

However, these people are missing a very important point: The auditors are already imposing their own (or their region’s) judgment regarding the meaning of particular CIP requirements, because of the many gaps, ambiguities and outright contradictions in the requirements. And this trend has simply increased with CIP versions 5 and 6. For example, in order to determine whether an electronic device is a Cyber Asset, the entity needs to know what the word “Programmable” means in the Cyber Asset definition; that word is undefined and is still heavily debated. And to determine whether a Cyber Asset is a BES Cyber Asset, the entity needs to understand what “impact the BES” means in the BCA definition – this phrase is also undefined[iv]. So the only way for the auditor to determine whether the entity is compliant with CIP-002 R1 (the requirement in question here) is to exercise judgment on whether the entity’s own interpretation of these terms was reasonable or not (although if they’re lucky their NERC Region will have provided them some guidance on this and other similar questions. But the regions don’t usually publicize their guidance, and different regions sometimes provide different guidance, if they provide any at all).

So which would you rather have: a prescriptive requirement like CIP-002 R1 that seems to be very precise but in fact requires a lot of under-the-table judgment, or a non-prescriptive requirement like CIP-007 R3, which states that the entity must “Deploy method(s) to deter, detect, or prevent malicious code”? Granted, the latter leaves much room for the auditor to exercise their own judgment, but this is judgment about cyber security issues – in this case, whether the methods deployed by the entity to prevent malicious code are appropriate, given the cyber assets being protected. And if, in the auditor’s judgment, the entity hasn’t deployed the right methods, they won’t simply issue a PV (unless, of course, the entity hasn’t deployed any method at all, or has deployed a method that could never possibly meet the objective of the requirement). Rather, the auditor will most likely issue an Area of Concern, stating that – while the entity did technically meet the requirement since they did deploy a “method” to prevent malicious code – the entity should strongly consider another method which, in the auditor’s opinion, would be more effective.

But consider the auditor’s judgment in auditing CIP-002 R1, discussed earlier. In interpreting what “Programmable” and “impact the BES” mean, the auditor isn’t applying his or her cyber security knowledge or experience, but simply guessing what the words mean, based on the context of the definition, how it is used, etc. This is judgment about the semantics of legal terms. Yet I don’t think a legal semantics background is required of CIP auditors, while a cyber security background certainly is. So in which area would you prefer to have the auditor exercise judgment: semantics or cyber security? In auditing the prescriptive CIP requirements, the auditor is not supposed to exercise cyber security judgment, but he or she is in fact often required to exercise semantic judgment. In auditing non-prescriptive requirements like CIP-007 R3, the auditor is only required to exercise cyber security judgment – and they all should be able to do that by now.

Of course, you may point out that some NERC CIP auditors aren’t the world’s best cyber security experts. Or they may know cyber but they’re weak on industry knowledge. Or both. I’m sure this is the case, but these problems can be addressed through measures like longer on-the-job training periods and mentorship. But requiring cyber security auditors to exercise judgment about cyber security matters is a much better bet than asking them to exercise judgment on ambiguous (or in some cases missing) definitions and requirements.

This is one reason why I think the non-prescriptive route is the only good one for NERC from now on, regarding development of new CIP standards and requirements. And NERC (with prodding from FERC) seems to have made this decision already, since no new prescriptive CIP standard or requirement has been developed in the past three years (CIP-014, CIP-013 and CIP-003 R2 Section 3 are all non-prescriptive. I believe these are the only ones that have been newly developed – although CIP-003 R2 Section 3 was a rewrite of a prescriptive requirement part).

But just making sure all new requirements are non-prescriptive isn’t enough. Some of the existing CIP v5 requirements are highly prescriptive. CIP-007 R2 is my poster child for a prescriptive requirement, but CIP-010 R1 isn’t too far behind. A lot of the other requirements are prescriptive as well. The fact that they are prescriptive imposes a big burden on NERC entities (one entity told me that literally half of all the compliance documentation they produce in their control centers is for CIP-007 R2 alone). More importantly, I believe there can be no further meaningful extension of the existing CIP standards – to areas like virtualization or the cloud – until all of the existing requirements are made non-prescriptive. More on this in my next post.[v]

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] And in using this phrase, I’m not in any way referring to the famous “game” called the Prisoner’s Dilemma, which is much discussed in game theory.

[ii] “Results-based” would probably be a better phrase than “non-prescriptive”, since it emphasizes what cyber security requirements should do (require the entity to achieve a certain goal) rather than what they shouldn’t do (prescribe particular steps to achieve that goal). However, I hesitate to use the former phrase, since NERC uses it to describe the current CIP requirements, even though some of them (such as CIP-007 R2 and CIP-010 R1) are very far from being truly results-based.

[iii] Please note that I think almost all of the non-CIP NERC standards (i.e. the Operations and Planning, aka “693”, standards) probably do need to be prescriptive. For most of those standards, the specific goals and numerical targets are dictated in some way by the laws of physics. For some of those standards, a failure to do a certain thing at a certain time could very well lead to a cascading outage, as was the case in the 2003 Northeast blackout. But cyber security is statistical. Missing the availability of a security patch for one system for one month is not inevitably going to cause any problem, although not patching any systems for say a year or two will very possibly lead to a BES event. The entity needs to determine what makes the most sense in their environment and with their budget (since I don't know any NERC entity that has been given a blank check to cover anything they feel like spending on cyber security. They need to prioritize their spending so it yields the greatest possible security per dollar spent – as discussed in this post). Non-prescriptive requirements are the only way to do this.

[iv] Both of these issues are on the table for the CIP Modifications drafting team to address. By the time new definitions are drafted, balloted and commented on, redrafted, re-balloted and commented on, re-re-drafted and re-re-balloted and commented on, and then approved (or not) by FERC (assuming they have a quorum sometime in the not-too-distant future), it will probably be 3-4 years from now. In the meantime, auditors and entities need to muddle through, as they’ve been doing all along.

[v] I cringe a little when I say what will be in my “next post”, since more often than not some other topic comes up that seems more compelling, and the “next post” actually appears five or six posts later. In any case, a post on this topic is coming, whether it’s the next one or not.

Saturday, April 1, 2017

US to Deploy NERC CIP to Mexico!

In response to what was described as a “flood of cheap, poor-quality Mexican electrons crossing our border”, the US government has decided to “level the playing field” by requiring Mexican entities that generate or transmit electric power to comply with the NERC CIP Reliability Standards.

According to a White House spokesman, “The US electric power industry has suffered too long from unfair competition from Mexico. Millions and millions of workers have already lost their jobs because Mexican electrons are manufactured[i] by low-cost workers and then dumped over our border. We believe the best solution to this problem is to impose the same regulatory costs on the Mexican power industry that our industry faces.

“While there are a lot of regulations that apply in the US but not in Mexico, probably the fastest-growing is the NERC CIP standards for cyber security. One semi-reliable blogger has stated that there are at least a few large utilities – and many more smaller ones - that have spent in excess of 25 times as much on CIP version 5 compliance as they did on the previous version 3 (and don’t ask us what happened to version 4. I assume it was dropped because of unfair Mexican competition as well). Moreover, the amount spent will continue to grow as the scope of CIP is expanded. Enforcing NERC CIP in Mexico will bring their industry’s costs much closer to ours, so our wonderful American workers can once again compete.”

To learn more about this, I talked with Professor Sebastian Tombs of the University of Southern North Dakota at Velma, where he teaches part-time in the Extension Division. Professor Tombs said “I’ve been waiting for this to happen for a long time. For at least the last few years, it’s been clear to a number of us in the academic community that NERC CIP could be weaponized. It has sucked up a lot of resources at US and Canadian utilities, and it will clearly have the same effect in Mexico or any other country in which it might be deployed. I do think this is a drastic step to take, but I guess there are worse ones, such as invasion or use of nuclear weapons.”

I was also very fortunate to be granted a short interview with a senior White House official, who asked to remain anonymous. He said, “Don’t get me wrong. Mexicans are wonderful people. We’re not doing this because we don’t like Mexicans. But we have to protect our workers – they’re the world’s best, and they can make electrons like nobody else can. I would have preferred there were some sort of ‘extreme vetting’ procedure we could use for these Mexican electrons, so that only those of the highest quality would be allowed into the US. But my people say – and believe me, I have the best minds working for me – that this would be very impractical. There are just too many electrons coming in.

“So we had to come up with another measure, and someone brought up NERC CIP. I of course had never heard of it, but when I learned about the huge burdens it’s placing on US utilities and independent generators, I thought it was only fair that the Mexicans should have to follow it, too. I realize it will cause a lot of suffering there, just as it has here. It would be nice if it could be reformed so that it were more cost-effective, as I’ve heard some blogger is advocating. Maybe I’ll get to NERC CIP reform once I’m finished with health care and tax reform, although I’m told that those two will seem like a piece of cake compared to CIP reform. But in the meantime, we need to make sure that our competitors don’t have an unfair advantage.”

At last report, the Mexican government was considering building a wall along the border to keep out the NERC auditors.

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] Note from Tom: I would like to correct the White House and point out that electrons can be neither manufactured nor destroyed. I tried to call them, but I was unable to find a science advisor to talk to. I was told that my message sounded like "fake science".