Tuesday, February 21, 2017

Encrypting BCSI in the Cloud


In my most recent post I stated that I thought cloud storage of BES Cyber System Information was permitted by NERC CIP v5 and v6, and quoted a CIP auditor on what NERC entities (with High and/or Medium impact assets) needed to do to remain compliant with CIP if they do this.

The next day I received an email from Judy Koski of Tucson Electric Power, a NERC compliance professional I have known for many years. She pointed out “You have left out any mention of encrypted BCSI in the cloud.  If the information is encrypted in storage, the third party supplier does not have access, except to very limited personnel.  Does this not solve the problem?”

I immediately sent this question to the auditor who contributed to the previous post, and he quickly replied “I would argue that it is BCSI[i] and that the CIP-011-2 requirement to protect that information is achieved, in part, by encryption of the data at rest, what P1.2 refers to as in storage.  The fact that it is encrypted does not change the fact that the data is information about BCS.  So, yes, the other Requirements/Parts still apply.”

Since this auditor won’t ever use two words where one will suffice, I will “decrypt” his statement. He first points out that CIP-011-2 R1.2 requires the entity’s Information Protection Program to include “Procedure(s) for protecting and securely handling BES Cyber System Information, including storage, transit, and use.” He agrees that encryption of BCSI while at rest at the cloud provider (or other third party) addresses the “storage” side of this, but notes that entities must also protect BCSI in transit and in use (not necessarily with encryption, of course, since there are many other ways to do this).
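To make the “storage” piece of this concrete, here is a minimal Python sketch of what client-side encryption of BCSI looks like before the data ever leaves the entity’s network. It is purely illustrative, not anything the standard prescribes: the cryptography package’s Fernet recipe stands in for whatever encryption solution an entity would actually deploy, and the payload is invented.

```python
# Minimal sketch: client-side encryption of BCSI before cloud upload.
# Assumes the third-party "cryptography" package; the payload is a stand-in
# for a real file such as a substation configuration export.
from cryptography.fernet import Fernet

# Key generation and custody are the hard part in practice: the key must
# stay with the Registered Entity, never with the cloud provider.
key = Fernet.generate_key()
fernet = Fernet(key)

bcsi = b"relay settings, ESP diagrams, IP addresses..."  # hypothetical BCSI
ciphertext = fernet.encrypt(bcsi)

# Only the ciphertext is uploaded. "Transit" protection would come from TLS
# on the upload channel; "use" protection means decrypting only on the
# entity's own systems, never at the provider.
assert fernet.decrypt(ciphertext) == bcsi
```

Note that the sketch says nothing about the other requirement parts discussed below; as the auditor points out, encryption addresses storage only.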

In addition to addressing the “transit” and “use” aspects of the above requirement, the auditor also pointed out that the three other requirement parts, included in a numbered list in my last post, still need to be complied with. Encryption won’t help with any of these, so you still have to address each of them.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] In my email to the auditor, I had speculated that perhaps the encrypted data wouldn’t be BCSI at all, since the definition of BCSI includes the statement “BES Cyber System Information does not include individual pieces of information that by themselves do not pose a threat or could not be used to allow unauthorized access to BES Cyber Systems…” I reasoned that encryption meant the information couldn’t be used to allow unauthorized access to BCS. The auditor rightly pointed out that it’s still BCSI even though it’s encrypted. The encryption is one control that can be used to block unauthorized access.

Sunday, February 19, 2017

A Break in the Cloud(s)? – Part I


At the beginning of January, I wrote a post stating that entities that utilize the cloud to store BES Cyber System Information are on shaky ground, and need to talk to their region before doing this (this problem only applies to NERC entities with High or Medium BES Cyber Systems, since they are the only ones that have to keep track of BCSI; I’ll say something about entities with only Low BCS in a future post). Since then I have come to realize that a number of entities are already storing BCSI in the cloud.

A typical scenario for how this situation comes to pass seems to be that a) the entity has engaged a cloud provider of a service like configuration management for all of their OT cyber systems (and maybe IT ones as well); b) either the decision to engage this provider was made without considering the CIP compliance implications, or those implications were simply ignored since the cost savings and improved ease of ownership were so huge; and c) whenever the CIP compliance people weighed in on this issue, they had to admit that CIP v5/v6 do not mention the cloud, so CIP’s position on storing BCSI in the cloud is very uncertain – if there even is any position at all. The result of all this is that the cost and ease of ownership reasons for using a cloud provider of a service like configuration management can far outweigh any reasons that may be adduced for not using a cloud-based service.

I must admit I have been in a kind of intellectual slumber on this issue, primarily because in CIP v3 there was no question that Critical Cyber Asset information could never be stored in the cloud. This is because CIP-003-3 R5 – which covered access to CCAI – required that the entity explicitly authorize all access to CCAI, and review all access annually to make sure it was still appropriate.

This requirement all but prohibited storage of CCAI in the cloud. Cloud providers usually have massive data centers with many thousands of servers (mostly virtual); hundreds of people walk into and out of these data centers every day or even every hour. Since any one of these people could potentially have at least physical access to the server where a particular piece of CCAI resides, it would require a huge effort to track who has access to that CCAI, let alone to authorize that access and validate the authorization annually – plus, of course, to remove access when a person leaves the organization. It is very unlikely that any cloud provider would agree to that.

But CIP v5 is quite different. CIP-011-2 R1.2, Information Protection, simply requires the entity to document and implement an Information Protection Program that includes “Procedure(s) for protecting and securely handling BES Cyber System Information, including storage, transit, and use.”[i]

The Guidance for this requirement explains what the Standards Drafting Team had in mind when they drafted it. They made it clear that they weren’t ruling out the possibility that BCSI could be “shared with or used by” a third party (page 14 of CIP-011-2). Specifically, “The entity’s Information Protection Program should specify circumstances for sharing of BES Cyber System Information with and use by third parties, for example, use of a non-disclosure agreement.”

Since a cloud provider is definitely a “third party”, this quote seems to be saying that all a NERC entity needs to do, in order to store BCSI with a cloud provider, is to have an NDA in place. But most of the NDAs that I have seen are really focused on preventing employees of one organization, or the organization itself, from disclosing data belonging to another organization; this is important, but this is really just a small part of what we think of when we talk about protecting data. Is this really all that is required?

As I often do when I have questions like this, I emailed an auditor who understands CIP very well, and who has taught me a lot of what I claim to know about these standards. It turns out that CIP v5/v6 does have more to say about storing BCSI at third parties than what is found in CIP-011-2 R1. I will quote what he said almost verbatim:


“The crux of the matter is CIP-011-2, Requirement R1, Part 1.2, which requires the Registered Entity to have “Procedure(s) for protecting and securely handling BES Cyber System Information, including storage, transit, and use” for BES Cyber System Information associated with High and Medium Impact BCS and associated EACMS and PACS (it is interesting that PCA were left out of the applicability section.  I would be interested to know how often information about a PCA could be leveraged to craft an attack against the “protected” systems).

“So, I think there needs to be more than just an NDA.  If the Registered Entity is going to entrust their BCSI to a third party, I would expect the Registered Entity to have obtained, reviewed, and accepted the third-party’s protection controls, not just a signed boilerplate agreement to protect the information.  The Registered Entity cannot assign the risk and liability to the third party providing the information storage service.

“You are correct, the Standard does not mandate anyone who has access shall undergo a PRA and training.  But there must be an “authorization for access” process.  Look at CIP-004-6, Requirement R4, Part 4.1.3 (“Process to authorize based on need, as determined by the Responsible Entity, except for CIP Exceptional Circumstances:  Access to designated storage locations, whether physical or electronic, for BES Cyber System Information”) and Part 4.4 (“Verify at least once every 15 calendar months that access to the designated storage locations for BES Cyber System Information, whether physical or electronic, are correct and are those that the Responsible Entity determines are necessary for performing assigned work functions”).

“Additionally, look at CIP-004-6, Requirement R5, Part 5.3 (“For termination actions, revoke the individual’s access to the designated storage locations for BES Cyber System Information, whether physical or electronic (unless already revoked according to Requirement R5.1), by the end of the next calendar day following the effective date of the termination action”).  Just because the information is in the cloud does not relieve the Registered Entity of the responsibility to demonstrate compliance with these three Requirement Parts.  The third party will need to implement these controls and submit evidence of compliance to the Registered Entity (either direct evidence or an acceptable audit of the controls by a third party other than the service provider).”


To summarize, an entity with Medium and/or High impact BES Cyber Systems can store BCSI with a cloud provider. Per CIP-011-2 R1.2, the entity must have an Information Protection Program that includes discussion of how BCSI stored at third parties will be protected. Since CIP-011 R1 is a non-prescriptive requirement, there are no prescribed contents for the program; it will be up to the entity and their auditor to determine whether it is reasonable or not. However, at a minimum (a toy illustration of items 2 through 4 follows the list):

  1. The provisions for third parties in the plan cannot simply require an NDA; there must be some provision for reviewing the controls put in place by the cloud provider (which isn’t the same as requiring the NERC entity to “audit” the cloud provider!);
  2. Per CIP-004-6 R4.1.3, the cloud provider will need to demonstrate to the entity that it has a process for authorizing access – perhaps not to the entity’s BCSI itself, but at least to the servers and physical locations where the BCSI is stored;
  3. Per CIP-004-6 R4.4, the provider must demonstrate that at least every 15 months it reviews this access to make sure it is appropriate; and
  4. Per CIP-004-6 R5.3, the provider must demonstrate that, for termination actions, access to the servers and physical locations storing the BCSI is removed by the end of the next calendar day.
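To illustrate the kind of record-keeping these three Requirement Parts imply, here is a toy Python sketch of an access register for one designated BCSI storage location. Nothing in CIP prescribes this code; it merely expresses the 15-calendar-month review clock of R4.4 and the next-calendar-day revocation deadline of R5.3 as date arithmetic, with invented names and dates.

```python
# Toy access register for a designated BCSI storage location.
# Hypothetical illustration only; CIP prescribes outcomes, not this code.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AccessRecord:
    person: str
    authorized_on: date   # R4.1.3: authorization based on need
    last_verified: date   # R4.4: periodic verification of access

    def verification_due(self) -> date:
        # R4.4: verify at least once every 15 calendar months.
        # 15 calendar months is approximated here as 15 * 30 days.
        return self.last_verified + timedelta(days=15 * 30)

def revocation_deadline(termination_effective: date) -> date:
    # R5.3: revoke by the end of the next calendar day following the
    # effective date of the termination action.
    return termination_effective + timedelta(days=1)

rec = AccessRecord("j.smith", date(2016, 7, 1), date(2016, 7, 1))
print(rec.verification_due())                  # next R4.4 review due date
print(revocation_deadline(date(2017, 2, 20)))  # 2017-02-21
```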

So it seems I was far too pessimistic about NERC entities being able to store BCSI in the cloud when I wrote the post on this topic at the beginning of January. In fact, unlike almost all of the other questions of interpretation that I write about in this blog, I consider this one actually settled – the CIP standards as currently written do provide an answer to the question of whether NERC entities can store BCSI in the cloud. The answer is yes, as long as the above requirement parts are satisfied.

If you have a different opinion on this question, I’d like to hear it. I’d especially like to hear if there is something about cloud providers that makes them different from other third parties – meaning that taking the four steps just listed wouldn’t be sufficient to ensure CIP compliance.

However (there’s always a “however” in my posts, in case you never noticed that), this isn’t the full story regarding the cloud and CIP. There are at least four other questions that I know of:

First, there’s the question of outsourcing actual BES Cyber Systems to the cloud, as in the case of cloud-based SCADA. My opinion on this issue hasn’t changed since January: While I can’t say with absolute certainty that this will never be allowed under any circumstances for NERC entities with High and/or Medium BCS, I will say that any entity that wants to try this should without fail talk with their Region before moving ahead with the project.

Second, there’s the question of what constitutes BCSI. Is it possible that the NERC entity can store some information about its BCS without storing actual BCSI?

Third, there’s the question about assets containing Low impact BCS (and there are some scurrilous individuals who meet in dark corners and actually have the effrontery to refer to these as “Low impact assets”. Can you believe their chutzpah?). Since entities aren’t required to have a list of Low BCS, it seems they can’t identify Low BCSI either. Does that mean they’re off the hook, and they can take any steps they want in storing BCSI in the cloud?

Lastly, just because I’m a mean-spirited individual and don’t want to end a post on a positive note (after all, I have a reputation to uphold!), I do want to point out that there is an interesting loophole in the way CIP-002-5.1a R1 is written that could technically mean entities could outsource High and/or Medium impact BCS (i.e. the Cyber Assets themselves, not BCSI) to any third party they want – ISIS, al Qaeda, etc. – and not have to make any provision at all to protect them. I don’t particularly recommend you try this at home, kids, but I think it does show how the wording of R1 can lead to some strange situations.

I will try to address the second through last points in separate blog posts in the near future. Stay tuned to this channel! 


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.


[i] This is a great example of a non-prescriptive requirement (CIP-007 R3 and CIP-010 R4 are two others). It would be wonderful if all of the CIP v5 and v6 requirements were written this way, although I believe more than that is needed to fix CIP.

Saturday, February 11, 2017

Are You Attending RSA?


Are you attending the RSA Security Conference in San Francisco next week (i.e. the week of Feb. 13)? Deloitte will have a large booth (no. 427 in the South Expo Hall) to display some of the many services our 3,000 US-based cyber security consultants provide. We will also be presenting at some of the conference sessions. For more information on our presence at the show, go here.

I will be there all week, although I won’t be in the booth some of the time. If you would like to get together, drop me an email at talrich@deloitte.com. See you there!



The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte Advisory.

Friday, February 10, 2017

Taking Your Position(s)


In the fall of 2014, I wrote a series of posts entitled “Roll Your Own”.[i] I wrote them because I had been despairing earlier that year due to my fear that NERC would never come up with guidance on the many ambiguities, inconsistencies and missing definitions in CIP version 5. That guidance never came (or if it did come, it was withdrawn), but of course NERC entities still had to press on with their v5 compliance programs, since the compliance date was (at that time) April 1, 2016.

At the time, the NERC entities most impacted by this uncertainty were owners of generating stations, especially large plants subject to Criterion 2.1 (i.e. greater than 1500 MW). Those plants can have literally thousands of devices that might or might not be determined to be Cyber Assets, and potentially BES Cyber Assets. They couldn’t wait until sometime in 2015[ii] for guidance. Their biggest concern was the meaning of the word “Programmable” in the definition of Cyber Asset. This word is nowhere defined by NERC, and it turns out that different ideas of what it means can cause huge differences in what is in scope for CIP v5 in large plants.

The NERC CIP compliance manager for a generation entity with several of these large plants – who is a personal friend of mine, and is now retired – wrote in to tell me what he was doing about this problem. Since he couldn’t wait any longer to identify Cyber Assets in these plants (which is of course the first step in the CIP v5 compliance process, along with identifying Medium and High impact assets in scope under the criteria of Attachment 1 of CIP-002-5.1), he had simply “rolled his own” definition[iii] of Programmable.

When I asked his justification for doing this, he asked what choice he had. If he waited for NERC to come up with their own definition[iv], he would probably miss the compliance deadline. What was important was that he documented a) the definition itself, b) his reasoning for arriving at that definition, and c) the fact that he had looked through whatever guidance was available from NERC and his region on this issue, and his reasons for following or not following that guidance.

Of course, this approach to ambiguity in CIP v5 – which was independently “discovered” by others in the industry – was ultimately adopted by many entities, for many different interpretation issues: the meaning of “affect the BES” in the BES Cyber Asset definition, the meaning of “Facilities” in some of the Attachment 1 criteria, the entire methodology for identifying BES Cyber Systems, etc. For all of these issues and many more, the only good approach I know of[v] is to take the three steps listed above. As Lew Folkerth of RF, along with a CIP auditor from another region, confirmed a couple of months after that post, it would be very difficult for a future auditor to argue that you had done something wrong in this process (e.g. used the “wrong” definition of Programmable), even if they don’t agree with the interpretation you made. As long as your definition is reasonable and well documented, it is very hard to see how you could fall into any compliance trap with this approach.

However, what is critical is that your position be documented. If you haven’t done that, and a future auditor disagrees with how you interpreted an ambiguity or an undefined term, you won’t be able to show that you used good reasoning to arrive at your position, and that you had considered any other official guidance that might be available. The moral of this short history lesson is: Whenever you encounter an ambiguous definition or requirement or a missing definition, you need to “roll your own” interpretation or definition, following the three documentation steps listed above.

You may ask what relevance this has now, for entities with High and/or Medium impact assets. After all, the compliance date for those was last July 1. What good will it do you if you can’t show that you had developed these “position papers” before that date? And the answer to that is simple: Position papers are nowhere required in CIP v5 (or v6), so you aren’t out of compliance if you didn’t develop them by last July. When you do have an audit, you will be in a much better position regarding these issues if you have these papers available than if you don’t. So you should develop them now.

But suppose you didn’t necessarily use a particular position, even undocumented, as you came into compliance with v5? What if your substation people, your Control Center people, and your generation people all used slightly different definitions of Programmable, or worse didn’t use any particular definition at all, just relying on “gut feel”?

Even if this was the case, I don’t believe you’re out of compliance since – once again – CIP is either ambiguous or silent on this and a host of other issues. So it’s not too late to develop positions. What you also have to do, though, is rerun the processes – especially for BES Cyber System and ESP identification – that are dependent on the interpretation of these issues, once you have developed your position on them.

For example, if you determine that one group of people (say, your relay group) used a definition of Programmable that is at odds with the one you have now documented in your position paper on that topic, you will need to go back through all the devices they reviewed to determine whether some of those should now be identified as meeting the definition of Cyber Asset (based on your new definition of Programmable). You will then have to document that you have done this (you will also have to document that the devices whose classification hasn’t changed were likewise evaluated against the new definition). And if you have just now developed a definition of Programmable that was never used by anybody in your organization before, you will have to go through every electronic device to see whether the definition applies to it; those that meet it will be Cyber Assets. Of course, this is a lot of work, but it’s worth doing now rather than a year from now, when you may be ordered to do it because you were found to be in massive violation of CIP-002 R1.

Of course, for any newly-identified Cyber Assets, you will next have to run them through the BES Cyber Asset definition to determine whether they meet that[vi]; if they do, you then have to incorporate them into one or more BES Cyber Systems and apply all the protections of all the CIP v5 and v6 requirements that apply to them.
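If your device inventory were in machine-readable form, this re-run might look something like the following toy Python sketch. All the field names and predicates here are invented for illustration; the point is simply that your new position on “Programmable” becomes an explicit test applied uniformly to every device, with the results diffed against the old classification and the changes documented.

```python
# Toy re-classification pass against a new "Programmable" position.
# All field names and predicates are hypothetical illustrations.

def is_programmable(device: dict) -> bool:
    # Stand-in for your documented position, e.g. "field-updatable
    # firmware or user-loadable logic makes a device programmable".
    return device["firmware_updatable"] or device["user_logic"]

def meets_bca_definition(device: dict) -> bool:
    # Stand-in for your 15-minute adverse-impact analysis under the
    # BES Cyber Asset definition; the real one is an engineering study.
    return device["bes_impact_within_15_min"]

inventory = [
    {"name": "relay-07", "firmware_updatable": True, "user_logic": True,
     "bes_impact_within_15_min": True, "old_class": "not Cyber Asset"},
    {"name": "meter-12", "firmware_updatable": False, "user_logic": False,
     "bes_impact_within_15_min": False, "old_class": "not Cyber Asset"},
]

for dev in inventory:
    new_class = "not Cyber Asset"
    if is_programmable(dev):
        new_class = "BES Cyber Asset" if meets_bca_definition(dev) else "Cyber Asset"
    if new_class != dev["old_class"]:
        # Devices whose classification changed are the ones whose
        # re-review you now have to document.
        print(f'{dev["name"]}: {dev["old_class"]} -> {new_class}')
```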

But what if your organization did develop and document positions regarding every significant area of ambiguity or missing definitions already, and you can prove they were applied wherever applicable? Does that mean this whole discussion doesn’t impact you? No, it doesn’t. You still have to document positions whenever you are going beyond the current wording of the CIP v5 and v6 standards and definitions.

For example, suppose you have virtualization – even just a little – within any of your ESPs. You will need to document how you are protecting virtual assets (e.g. VMs within an ESP, an ESP implemented as a VLAN on a physical switch that incorporates non-ESP VLANs as well, a SAN that stores data both from inside and outside an ESP). Since CIP doesn’t say anything about these topics, you have to develop your own guidelines on how to protect these virtual assets. NERC tried to develop guidelines two years ago, and ended up withdrawing whatever they did develop. The issue of virtualization is now on the table for the CIP v7 Standards Drafting Team, but it will be at best a very long time before the CIP standards address virtualization. If you’re able to wait years, great. But a lot of companies feel the savings from virtualization are too large to wait any longer to implement it. If you are one of those, you need to start writing your positions.

And then there’s the cloud. CIP is silent on that, and I know of no written guidance from NERC or the regions[vii]. Yet I have recently come to realize that some large and medium-sized entities are using cloud-based asset management software for both ESP and non-ESP devices. I will write a post soon on this issue, but if you are one of these entities, I strongly suggest you document how the information from within your ESP that is being stored in the cloud is protected.

  
The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte Advisory.

[i] The first was this one. There were six more after that, which you can find by dropping down the month listings at the side of this page for September through December, 2014.

[ii] NERC’s self-imposed deadline for providing fairly comprehensive guidance (Lessons Learned and FAQ responses) kept getting pushed back, but I know that in January of 2015 the date of April 1, 2015 was stated at a WECC meeting. Of course, that date came and went without comprehensive guidance showing up. A few months after that date, NERC abandoned the whole idea of providing comprehensive guidance and withdrew a number of the Lessons Learned. Some of these had actually clarified a few important issues, like how virtualization should be addressed and the meaning of “Programmable” in the Cyber Asset definition. But they all faced the same problem: Any “interpretation” of an ambiguous requirement or a missing definition requires either a Request for Interpretation or a SAR. And both of these take years to produce results.

[iii] You wouldn’t call what he did a definition, but rather a procedure for determining whether a device was programmable or not. I know some other entities also adopted that procedure.

[iv] NERC actually put out a Lesson Learned in early 2015 that provided what seemed to be a fairly reasonable definition of Programmable. But that was later withdrawn, along with all of the other Lessons Learned that tried to fill in gaps or clarify ambiguities in the CIP v5 requirements or definitions. NERC finally admitted that there was no way to fix these problems short of redrafting requirements or definitions, and they started the drafting process by writing a Standards Authorization Request and constituting a new CIP drafting team. That team is currently working on the items in their SAR; however, I am now very skeptical that they will ever be able to complete work on those items.

And as it turned out, the list of items in the SAR fell far short of the number that would be needed to address all of the very serious interpretation issues in CIP v5. But the SAR would have been even more unachievable than it is, had all those items been included. I now believe that we’ve reached the limit for meaningful changes that can be accomplished given the current prescriptive format of NERC CIP. Only a non-prescriptive format will allow CIP to be updated in a timely manner, to address new threats and technologies.

[v] Short of asking your region for a definitive “ruling”, which you will never get, at least not in writing.

[vi] Of course, the BCA definition itself has at least one hole in it: the meaning of “adversely impact the BES”. You need to document how you determine whether the loss, misuse, etc. of a Cyber Asset will adversely impact the BES. Then you have to identify those Cyber Assets whose loss, misuse, etc. would have that adverse impact within 15 minutes.

[vii] Tobias Whitney of NERC said at the last CIPC meeting that NERC was going to come out with some sort of guidance document on the cloud. If I were you, I wouldn’t hold my breath waiting for that to happen. NERC simply can’t do this while still following the Rules of Procedure, unless they convene a new Standards Drafting Team to address the issue by modifying the CIP standards (i.e. CIP version 8). And just as with virtualization, the time required to do this – within the current prescriptive CIP framework – will be not much less than the lifetime of the observable universe, if that.

Sunday, January 29, 2017

“Compliance Paperwork”


I have been saying for a year that the NERC CIP standards, in their current prescriptive format, are unsustainable. Until my last post my number one reason for saying this was that a large portion – perhaps even half – of the effort that NERC entities have to expend in order to comply with CIP goes toward activities that have no security benefit.[i] In my opinion, instituting a non-prescriptive, threat-based approach to CIP would be one way to increase the portion of CIP spending going to security, without requiring a net increase in spending to achieve this result.

In saying this, I always referred to “compliance paperwork” as by far the largest (but not the only) component of this “non-productive” effort. In other words, my proposed solution to CIP’s unsustainability problem would result in a large reduction in paperwork, although it wouldn’t eliminate it, since some compliance paperwork would still be required.

However, the problem with this argument was that I had to admit there is no good way to tell, simply by looking at a particular paperwork activity, whether it is “good” paperwork – which contributes to security and thus would be retained under my proposal – or “bad” paperwork, which doesn’t contribute to security at all. Given this, an entity would have no objective criterion for determining how much of their CIP compliance effort contributes to security; they would just have to take a guess, based on their experience. So I was basing my argument on something that might be called an “inherently unverifiable” fact: This is a fact that can never be proven true or false.

In my last post, I demoted this reason for CIP’s unsustainability from Number One to Number Two. You can read about my new Number One reason in the post already cited, but in short it is that the prescriptive CIP requirements force entities to allocate their cyber security spending (both spending of dollars and “spending” of employee time) to activities that provide less security benefit – and often much less – than the activities they would otherwise prioritize. In demoting the previous Number One reason to Number Two (while still saying it was a valid reason), I was in effect saying that, even if an entity’s priorities for cyber security would – if CIP were suddenly made non-mandatory – align exactly with the activities mandated by CIP v5 and v6 (of course the chance of this happening is zero), the entity would still be wasting a lot of effort on activities that had no effect at all on security.

Last week, I spoke in front of the CIP users’ group for one of the NERC Regional Entities about the problems with CIP and my tentative proposal to fix them.[ii] There were a lot of really good questions, and we had a great discussion, in which I probably learned a lot more than my audience did.[iii]

During this discussion, someone expressed skepticism that any CIP compliance paperwork has zero security value; after all, documenting what you do is a good practice – and often required for internal audit purposes – in any activity related to computer systems and networks. I at first replied with my standard answer described above, that there is no way that, simply by looking at a paperwork task, an outside observer could determine that it did or didn’t contribute to security; only longtime compliance or cyber security staff members at the entity itself could make this determination – and that would only be based on gut feel. So this determination will always be inherently unverifiable.

But as soon as I said this, I felt quite uneasy. This was perhaps because, during the week I made this presentation, there was a raging debate in the national press about whether the idea of “alternative facts” was a valid one, or just another way of saying “lies”. And here I was going one step further by asserting that certain facts were true but just could never be verified. If the person who invented the phrase “alternative facts” had instead asserted my concept of “inherently unverifiable” facts, she might not have received all the flak that she encountered – if anything, the members of the press would have started looking through the literature on epistemology to see if “inherently unverifiable facts” might be a valid concept (i.e., can there be a fact that could never be verified? It’s an interesting question. It actually is a big debate in physics today, where proponents of string theory, and also the idea that there are an infinite number of universes, readily admit that these ideas can never be definitively proven true or false).

I was really not comfortable continuing to assert that there is no way to identify paperwork that is required for compliance but doesn’t contribute at all to security. But then I realized there is no reason to continue to make this assertion, since the result is virtually the same - whether these activities don’t contribute at all to security or whether they do contribute but only minimally. The result in both cases will be that a lot of the paperwork required by CIP contributes very little to security. So let me stipulate from here on out that every activity required by CIP contributes in some way to security, although often in a very small way.

Once I admitted that, I realized my Number Two reason why CIP is unsustainable had now gone away and been subsumed into Reason Number One, without requiring that I change how I articulate that reason at all. As I said above (and in my last post), the Number One problem with the CIP requirements is that they cause entities to use their limited cyber security budgets to carry out security mitigation activities that would otherwise have a very low priority – if the entity were free to do what it thought was best.[iv] Since no NERC entity – at least none that I know of – has an unlimited cyber security budget, this results in the most important cyber threats (based on the current threat landscape in North America[v]) going either unmitigated or inadequately mitigated.

To summarize this post, I no longer believe that there are activities – which I’ve previously called “pure compliance paperwork” - that are required by the CIP standards but contribute nothing to cyber security. Every activity required by CIP contributes in some way to security, but a lot of these activities make a very small contribution. I am making a proposal that would rewrite CIP to require that NERC entities prioritize the activities that contribute the most to BES[vi] cyber security, without prescriptively saying that certain activities are required, no matter how little they advance the goal of securing the bulk electric system.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte Advisory.

[i] I based this statement on informal discussions I’ve had with various NERC entities, not on any sort of formal poll.

[ii] I prefaced my remarks by pointing out that I am working, with two co-authors, on a book that will lay out this proposal, among other things. We expect to have it out later this year.

[iii] I will probably have another post inspired by this discussion soon.

[iv] You may cringe when you hear me say that the CIP standards shouldn’t unnaturally constrain NERC entities from allocating their limited cyber security budgets as they “think best”. You may point out that a) a lot of, or even most, organizations still believe that what is best as far as cyber security goes is to spend as little on it as possible; and b) even if an entity realizes it must spend a substantial amount on cyber, it won’t necessarily spend it in an optimal way, due perhaps to a lack of understanding of cyber security principles and practices.

Both of these objections can be answered by pointing out that my “proposal” for rewriting CIP will require the entity (or a third party) to assess its security posture with respect to various security domains (software vulnerability mitigation, anti-phishing, change control, etc.) and develop a plan for mitigating the most important deficiencies identified. This plan will have to be reviewed by a competent outside party, which might be a consulting firm or the entity’s NERC Region; this process is similar to the one now mandated by CIP-014. I am currently leaning toward the idea that the Regions themselves should do this review. I realize they don’t currently have the manpower to review all of these plans. That will hopefully change, but even then the Regions will probably still have to hire outside resources, at least to address temporary overloads. The alternative would be for the entities to engage their own consultants for this task, but that creates the potential for some consulting firms to go easy on the entity in exchange for being engaged to do the not insubstantial job of implementing the mitigation plan. In fact, this is the biggest problem I see with the PCI standards for payment card security: the PCI standards are audited by assessors paid by the retailer being audited, who are then allowed to be engaged to mitigate the problems they identify. They have lots of incentive to downplay the problems in the official report, since they know it will make the retailer look good. So I still think it’s better for the Region to do the review. While having the Regions do it will probably require an increase in the assessments paid by each entity, the entities will hopefully see that this simply replaces an amount they would otherwise have to spend themselves.

Having the Region review an entity’s assessment and mitigation plan will address both of the objections shown above. If the entity happens to think that their cyber security posture is just great and there’s no need to spend much more money on cyber, or if the entity’s mitigation plan will spend too much on unimportant tasks and too little on important ones, the Region will be able to order the entity to revise all or parts of their plan. And they will be regularly audited (perhaps even once a year) on how well they are carrying out that plan.

[v] My proposal for rewriting CIP – and specifically the one I and my co-authors will outline in our upcoming book – will require that the team that drafts the new standards identify the primary cyber threats to the North American bulk electric system. The entity will be required to address each of those threats in some way, either to mitigate deficiencies in their defenses that are identified in an assessment, or to document why a particular threat doesn’t apply to it. However, since the threat landscape changes very rapidly (e.g., phishing came out of nowhere about five years ago to become probably the most serious cyber threat today, and the origin of more than half of successful cyber attacks in recent years), there needs to be some way of continually updating this threat list. I am proposing that there be a NERC committee which meets at least quarterly to a) assess new threats and determine whether or not they should be added to the list; b) determine whether any threats currently on the list should be removed; and c) write and update guidance on best practices for mitigating these threats.

In addition, since some threats only apply to particular types of entities or particular regions of the country, there will always be threats that an entity faces, that aren’t included in the “NERC-wide” list just described. It will be up to the entity to make sure these particular threats are also addressed, and it will be up to the NERC Region to verify that the entity’s mitigation plan adequately addresses these threats.

[vi] Note that, in my proposal, the CIP standards will still be focused entirely on BES security. Every NERC entity has other cyber security goals: protecting financial data, protecting customer information, etc. These also need to be addressed, but CIP has no bearing on these. In other words, under my proposal the entity will need two cyber security budgets: the budget to address BES threats and the budget to address all other cyber threats.  

Thursday, January 19, 2017

The Biggest Problem with NERC CIP


Since at least last February, I have been pushing the idea that the CIP standards are on an unsustainable course and need to be radically rewritten. I have mentioned several reasons for this, but I have said the most important is cost, since the amount that NERC entities are spending on CIP compliance grew significantly with the advent of CIP v5 and v6. Plus it will inevitably increase even more significantly due to the new Supply Chain standard (CIP-013), to CIP v7 and to other scope increases still likely to come, which include virtualization, the cloud, machine-to-machine communications and who knows what else.

However, just the fact that NERC entities are spending a lot of money on CIP – with a lot more to come – isn’t in itself the issue; after all, the whole idea of CIP was to get entities to invest in cyber security to an extent they couldn’t be counted on to do without facing a mandatory requirement. The problem – as I’ve discussed in several posts, including the one just referenced - is that so much of this spending (including “spending” on staff time, of course) isn’t going to cyber security but to what I’ve been calling “compliance paperwork”. The estimates I have received of the portion being so spent seem to center on 50%, meaning literally half of what entities are spending on CIP compliance nowadays is not contributing to cyber security. Even if the actual figure is less than that, it is still a big number when you consider that total spending (including employee time invested) on CIP v5/v6 compliance has been well into the billions of dollars.[i]

I haven’t ever really discussed what I mean by “compliance paperwork”. For one, it definitely includes the documentation that an entity has to do to prove they did things for compliance purposes – as opposed to the documentation that entities would do in the absence of a mandatory requirement, simply because it is a good business and security practice. It also includes a lot of the self-reports, mitigation plans, compliance certifications, etc. that are required to comply with CIP. And it certainly includes a lot of the effort that is required to prepare for and get through an audit.

The idea that so much of current CIP spending isn’t going to security leads directly to the question “Could a better design of the standards reduce the amount of spending going solely to compliance ‘paperwork’ – meaning the same amount of total spending on CIP would lead to a much larger investment in actual cyber security?” My answer to this has been an emphatic yes, assuming the standards were made much less prescriptive. In other words, I have been saying it is possible to write much less-prescriptive CIP standards whose implementation will lead to a much greater amount of investment in cyber security, even if the overall level of CIP spending were unchanged.

However, a recent conversation made me realize that, while this is a very important reason to rewrite CIP, there is a more important one. To understand this new reason (which I have probably alluded to previously but not explicitly called out, mainly because I had been thinking it was really a part of the “compliance paperwork” reason just discussed), I need you to do a “thought experiment[ii]” with me: Say that you annually have available to you an amount CYBER, which is your organization’s current expenditures on cyber security. In addition, you have another amount CIP, which is your expenditures on NERC CIP compliance. Now suppose the NERC CIP standards suddenly go away, but the total amount available to you to spend remains the same, as long as you spend it all on cyber security[iii]. In other words, you can now spend the entire amount you would have spent on CIP on cyber security, in addition to the amount you were planning to spend on (non-CIP) cyber anyway; let’s call this total amount SUM(CIP+CYBER).

How would you allocate this total sum? I’m guessing you would do it the way hopefully most organizations do for all of their security spending: You would first identify the threats that affect your organization and estimate the costs to mitigate each one (which of course isn’t the same as eliminating those threats; that’s impossible, at least for most threats). Then you would rank these threats by their potential impact. Finally, you would budget the available funds so that they mitigated the most important threats.[iv]

At this point, you have a list that reads “Threat A has the largest potential impact and it would cost $Y to mitigate; we’ll definitely address that, since this is well below our total budget. Threat B has the second largest impact, and it would cost $Z to mitigate. $Y + $Z is still below our total budget, so we will do both of these…..” You would continue this process until you have exhausted your budget for the year (while still leaving some for contingencies, of course!). Of course, your total budget for cyber security is SUM(CIP+CYBER), since there are no longer any CIP requirements.
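The allocation logic of this thought experiment is simple enough to sketch in a few lines of Python. The threat names, impact scores, and mitigation costs below are all invented; the sketch just shows the greedy, impact-ranked funding process described above.

```python
# Thought-experiment allocation: rank threats by impact, fund greedily.
# Threat names, impact scores, and costs are invented for illustration.
threats = [
    # (name, impact score, cost to mitigate in $)
    ("phishing",               95, 400_000),
    ("software vulnerability", 90, 350_000),
    ("insider misuse",         70, 200_000),
    ("removable media",        50, 100_000),
]

budget = 900_000  # SUM(CIP+CYBER)
funded, remaining = [], budget
for name, impact, cost in sorted(threats, key=lambda t: t[1], reverse=True):
    if cost <= remaining:
        funded.append(name)
        remaining -= cost

print(funded)     # ['phishing', 'software vulnerability', 'removable media']
print(remaining)  # 50000 left for contingencies
```

Note what the next paragraphs do to this picture: when CIP reappears, the compliance threats jump the queue ahead of every entry in this list, regardless of impact score.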

Now let’s say NERC CIP suddenly reappears (Don’t ask me why. I told you this was just a thought experiment!), and you have to comply with the current CIP standards, as well as address cyber security threats. What happens to your calculations? All of a sudden, you have a bunch of new compliance threats that need to be mitigated: the threat of non-compliance with CIP-004 R3, non-compliance with CIP-007 R2, etc. But your original cyber threats are still there and also need to be mitigated. How do you now prioritize your spending (and remember, you still have SUM(CIP+CYBER) available to mitigate both CIP and non-CIP cyber threats)? Can you look at a CIP compliance threat, like say the risk of non-compliance with CIP-010 R1, and rank that ahead of say the threat posed by phishing attacks – a cyber risk that isn’t addressed at all in CIP?

Here’s the correct answer: You certainly can do that. In fact, I believe most NERC entities would almost never prioritize any non-CIP cyber threat over the threat of non-compliance with a CIP requirement. Not only can there be huge fines for ignoring CIP requirements, but most NERC entities will do almost anything to avoid being fined at all because of damage to reputation, impact on rate cases, etc. In fact, I think there are few NERC entities that would passively accept a CIP fine, even if the “penalty” were a trip to Disney World. This means that the threats of non-compliance with CIP requirements will assume a special priority ahead of probably all the other cyber threats. In essence, your organization will need to make sure it has addressed all of its CIP non-compliance threats before it spends very much to address non-CIP cyber threats.[v]

So why is this bad? As I’ve just said, some significant percentage of what your organization spends on CIP compliance doesn’t go to security at all but simply to proving compliance. This alone results in the amount you are spending on CIP being less effective in advancing cyber security than the same amount of spending would be in the absence of CIP. So the fact that you have to prioritize CIP spending means that a lot of your total spending – that is, some percentage of the quantity SUM(CIP+CYBER) – goes to compliance paperwork, rather than cyber security. This might not be 50 percent, but it is certainly significant.

I used to think this was the biggest problem with CIP, but I no longer do. To see the biggest problem, let’s go back to our thought experiment and see what happened to your spending priorities when CIP was re-imposed. Before that happened (during the period of time when CIP went away but you still had the money that had been allocated for CIP, along with the non-CIP cyber budget), you were allocating your entire cyber security budget based strictly on the impact level of each threat: the threats with the largest impact were first in line to receive mitigation money, those with the next-largest impact were next in line, etc.

When CIP was re-imposed, you had to change that. You essentially “prioritized” the threat of non-compliance with each CIP requirement to put these compliance threats ahead of the “pure” cyber threats. For example, your number one cyber threat (say, phishing, which was recently said to be the immediate cause of 91% of cyber attacks) might get pushed from number one down to number 44 (or so) on the list of threats to be addressed – because the threats of non-compliance with each of the CIP requirements all took their place ahead of it. The bottom line: you will be able to address far fewer non-CIP cyber threats than you could have before CIP was re-imposed.

A Rude Interruption
Here you might rudely interrupt me and say “Even though we’ve had to prioritize CIP expenditures over other cyber expenditures, CIP is just cyber anyway. This means we’re still devoting our entire budget (which equals SUM(CIP+CYBER), of course) to cyber security. If we put aside for the moment the fact that a certain percentage of CIP compliance spending is “wasted” on compliance paperwork,[vi] there is really no impact due to the fact that CIP was re-imposed; we’re still spending 100% of the budget on cyber security.”

What’s wrong with this statement? It ignores the fact that what you are spending your money on has drastically changed, in two ways. The first is that the threats you prioritized during the period when there was no CIP have now been pushed way down the list of threats to be addressed, since the threats of non-compliance with the different CIP requirements have all been pushed ahead of them. In fact, probably a lot of the non-CIP cyber security threats you would like to mitigate will now receive no funding. For example, even though phishing was your number one threat to address before CIP was re-imposed, it is now your number 44 threat and you’re going to have to fight hard to get even a modest amount of money allocated to it. In other words, even though the total amount you’re spending hasn’t gone down, it no longer is based on your prioritization of the threats. While this effect can’t be quantified, it is certainly quite significant – it acts something like a tax on your cyber security program.

Let’s move to the second way in which what you’re spending money on has changed with the re-imposition of CIP; the best way to describe this is with an example. I think everyone agrees that the threat posed by security vulnerabilities in software should rank among the most important cyber threats; this means that patch management should form a big part of every entity’s cyber security program. Of course, CIP-007 R2 is the patch management requirement in CIP v5 and v6. In our thought experiment, when CIP is re-imposed the threat of non-compliance with CIP-007 R2 (and all of the other CIP requirements) will be prioritized ahead of all the non-CIP cyber threats like phishing.

But wait! You point out that one of the top cyber threats on your list, during the time when CIP went away, was software vulnerabilities. Patch management is probably the most important way to mitigate this threat; it also happens to be what is required by CIP-007 R2. Thus, it doesn’t matter if this threat gets pushed back by the CIP compliance threats; your organization will still have a patch management program since it has to comply with CIP-007 R2.

This is true, but look at what CIP-007 R2 requires you to do: First, every 35 days you need to check with your patch source for every piece of software installed in your ESP to determine whether there is a new security patch available; you have to do this regardless of whether the vendor is likely to issue a security patch, or even whether they have ever issued such a patch. Then you have to determine whether each patch is applicable. Next, for every applicable patch, you have 35 days to either install it (on every ESP device to which it applies) or develop a mitigation plan to address the vulnerability in a different way, pending installation of the patch. Finally, you have to track each mitigation plan and, if it isn’t completed within the timeframe in the plan, get approval of the CIP Senior Manager or delegate for a revision or extension. And, of course, you have to document whatever you did at every step of the way. This means documenting that all of this was done for all the software installed on every device within the ESP.
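To get a feel for how many clocks this sets running, here is a minimal Python sketch of the two 35-day deadlines for a single patch source and a single applicable patch; the dates are invented. Multiply this by every software package on every device in the ESP, plus the documentation at each step, and the scale of the effort becomes clear.

```python
# The two 35-day clocks of CIP-007 R2, as bare date arithmetic.
# Dates are invented for illustration.
from datetime import date, timedelta

EVAL_WINDOW = timedelta(days=35)    # check sources for new security patches
ACTION_WINDOW = timedelta(days=35)  # install patch or create mitigation plan

last_source_check = date(2017, 1, 15)
next_check_due = last_source_check + EVAL_WINDOW        # 2017-02-19

patch_assessed_applicable = date(2017, 1, 20)
action_due = patch_assessed_applicable + ACTION_WINDOW  # 2017-02-24

print(next_check_due, action_due)
```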

In practice, this one requirement has proved to be a huge source of potential violations of CIP, especially for large organizations with hundreds or thousands of devices in their ESPs. Unless they have a wonderful system in place to automate a lot of this (and not all of this process can be automated, especially when it comes to “one-off” software packages installed on just a few systems within the ESP), there will almost inevitably be a lot of patches that don’t get identified, installed or mitigated on one or more systems. I know that some organizations are devoting a huge amount of resources to trying to reduce instances of non-compliance with CIP-007 R2 to as low a number as possible, even though they realize they will never eliminate them.

Contrast this with the patch management program your organization would put in place if you didn’t have to comply with CIP-007 R2. You will obviously want to check regularly to determine whether new security patches are available. But do you need to do it every 35 days for every piece of software installed in your ESP, regardless of whether or not the vendor is likely to have a new patch available – or indeed, regardless of whether they have ever released a security patch? And if you only install patches quarterly for some lower-importance software, will the world come to an end?

The point of this example is that, even though it is almost indisputable that you will want to have a patch management program for devices in your ESPs, it is not at all indisputable that you will want to invest the same amount of resources in that program as are required to comply with CIP-007 R2. The difference between the cost of the program you would follow if you didn’t have to comply and the cost of the program you are in fact following (or should be following…ahem!) to comply with CIP-007 R2 constitutes another way in which having mandatory, prescriptive CIP standards imposes a “tax” on your cyber security program, making it less effective than it would be in the absence of the current CIP standards. Of course, it would be a huge, very imprecise exercise to try to determine what that difference would be for a particular organization. But I think it would be fairly substantial for most organizations.

However, please note that I’m not saying the additional amount you’re spending due to having to comply with CIP-007 R2 – over what you would spend in the absence of this requirement - is “wasted”. It is clearly better for your organization’s cyber security if you are tracking new patches every 35 days and if you are applying all patches (or otherwise mitigating the vulnerabilities) within another 35 days, than if you are accomplishing each of these steps in say three months.

But here’s the point: You could say the same thing about almost anything that can be done in cyber security. If you take more measures and you do them more frequently and for more devices, there will always be some improvement in security. But where does this stop? Instead of 35 days, maybe you should look for new patches every hour, for each of the software packages installed on each device in your ESP. And maybe this requirement should be extended to the IT network as well since – as the original Ukraine attacks showed – a single compromise in the IT network can ultimately lead to the OT network being “owned” by the attackers. This would undoubtedly lead to even better security than does the current wording of CIP-007 R2. Why don’t we change the requirement to read that way?

Of course, if you had unlimited funds to spend on cyber security, you wouldn’t care how frequently you checked for new security patches, or for how many devices. In fact, you would spend money on anything that could lead to even a modest improvement in your security posture. In this case, it doesn’t matter whether CIP prescribes three months, 35 days or one hour as the interval for checking patch availability. With unlimited funds at your disposal, the patch management program you would have in the absence of the current CIP requirements would be no different (or less resource-intensive) than your program to comply with CIP-007 R2. For you, CIP doesn’t act as a “tax” on your cyber program.[vii]

Unfortunately, I don’t know any NERC entity with an unlimited budget for CIP compliance, cyber security, or anything else. In the real world, the second way in which the current (prescriptive) CIP standards impose a “tax” on your cyber security spending is that you are required to take steps for patch management that you wouldn’t take were CIP-007 R2 not a mandatory requirement; this prevents your cyber expenditures from having the positive impact they would have in the absence of the current CIP standards. To put it differently, if you didn’t have to comply with CIP-007 R2, you could still include patch management in your cyber security program, but you wouldn’t have to do it in as expensive a fashion as CIP requires. This would free up money that could be spent on addressing other areas where the threats are more serious. The same reasoning applies to the other prescriptive CIP requirements, like CIP-010 R1.

Here’s the moral of our story: In the absence of unlimited budgets, the existence of mandatory prescriptive CIP requirements inevitably means that cyber security programs will be less effective than in their absence. This is because a) the entity doesn’t have the funds to address the threats they feel are most important, since they have no choice but to first address CIP compliance, and b) the steps the entity has to take to address threats like software vulnerabilities require a lot of resources that the entity would prefer to spend on higher-priority cyber threats – were it not for the CIP standards.

Can we quantify this “tax” imposed by prescriptive CIP requirements? I don’t think so, either on the level of individual NERC entities or of all of them. But it has to be considerable. And I believe it is much greater than what I used to consider the Number One problem with CIP: the fact that some large percentage of the amount spent goes to “compliance paperwork”, not cyber security. By severely distorting the way an entity would allocate cyber security funds in the absence of prescriptive CIP requirements, this “tax” constitutes the primary problem with the current CIP standards.


The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte Advisory.


[i] Please keep in mind that any particular CIP compliance activity – say, developing a mitigation plan when a patch can’t be applied, per CIP-007-6 R2 – will always be a mixture of what I call “pure compliance paperwork” and activities that actually do contribute to cyber security, and it will be almost impossible just to look at a description of what was done and say how much of that went to security and how much to pure compliance. So any estimate of the amount being spent on pure compliance will always be very subjective, whether on the level of an individual entity or for the aggregate of NERC entities.

[ii] In doing this, I’m in good company. Einstein was a great user of thought experiments to prove the validity of his theories. In fact, his greatest achievement, the theory of General Relativity, was almost instantly accepted as soon as it was published, even though the first experimental proof only came a few years later, when British astronomers measured a slight deviation in a star’s apparent position next to the sun during a total solar eclipse. Einstein supported his theory with some compelling thought experiments; that was all that was required to convince the scientific community of its validity.

[iii] I know some of you are going to jump out of your seats and tell me this is a stupid assumption, since if there were no NERC CIP, you would never be given the same amount of money to spend on cyber security. Where I’m going with this argument is the idea that re-writing the CIP standards in a non-prescriptive format would remove the problems discussed in this post, but it would mandate that NERC entities make a significant investment in cyber security.

[iv] This is a simplification, of course. I think most organizations will want to make sure they have “adequately” addressed each major security domain: patch management, vulnerability assessments, access management, etc. They will do this regardless of the importance of the particular threats affecting that domain. Only after they have done this will they look at the currently identified threats and spend their remaining funds mitigating the most important of those threats. But there should always be some prioritization of expenditures based on threat rankings.

[v] Once again, this is a big simplification. For one thing, it ignores the fact that most organizations address CIP compliance threats from a separate budget than the one for non-CIP cyber threats, so in theory there is never really a conflict between the two. But I contend that at some level – perhaps just the CEO or the Board – someone has to be balancing all of the spending numbers. And they will almost certainly be making trade-offs like “Gee, we’ve had to spend a lot more on CIP this year. Since CIP is for cyber security, this means we shouldn’t have to spend so much in our separate cyber budget.” Conversely, if there were no CIP, this person would almost certainly be willing to spend more on cyber security.

[vi] It may seem odd that I’m now assuming away a point I made at great effort earlier in this post. When you do thought experiments, you can make all kinds of strange assumptions. I’m doing this because I want to show that there is a negative consequence of CIP that is even greater than the fact that a large amount of spending goes to compliance paperwork.

[vii] I want to point out now that this example would be quite different if we substituted the anti-malware requirement, CIP-007 R3, for CIP-007 R2. R3 is one of only a few non-prescriptive requirements in CIP v5/v6. It states that entities must “Deploy method(s) to deter, detect, or prevent malicious code.” It doesn’t say you have to use antivirus vs. another method like application whitelisting. It doesn’t say you have to take particular steps to implement or maintain whatever solution(s) you choose. It is up to the entity to decide what is the best way to address the threat posed by malicious code, given their particular environment and the fact that they have a lot of other cyber threats they need to address, not just this one – while they don’t have unlimited funds at their disposal.