I have been saying for a year that the NERC CIP standards, in their current prescriptive format, are unsustainable. Until my last post my number one reason for saying this was that a large portion – perhaps even half – of the effort that NERC entities have to expend in order to comply with CIP goes toward activities that have no security benefit.[i] In my opinion, instituting a non-prescriptive, threat-based approach to CIP would be one way to increase the portion of CIP spending going to security, without requiring a net increase in spending to achieve this result.
In saying this, I always referred to “compliance paperwork” as by far the largest (but not the only) component of this “non-productive” effort. In other words, my proposed solution to CIP’s unsustainability problem would result in a large reduction in paperwork, although it wouldn’t eliminate it, since some compliance paperwork would still be required.
However, the problem with this argument was that I had to admit there is no good way to tell, simply by looking at a particular paperwork activity, whether it is “good” paperwork – which contributes to security and thus would be retained under my proposal – or “bad” paperwork, which doesn’t contribute to security at all. Given this, an entity would have no objective criterion for determining how much of its CIP compliance effort contributes to security; it would just have to take a guess, based on experience. So I was basing my argument on something that might be called an “inherently unverifiable” fact: a fact that can never be proven true or false.
In my last post, I demoted this reason for CIP’s unsustainability from Number One to Number Two. You can read about my new Number One reason in the post already cited, but in short the reason is that the prescriptive CIP requirements force entities to allocate their cyber security spending (both spending of dollars and “spending” of employee time) to activities that provide less security benefit – and often much less – than activities they would otherwise prioritize. In demoting the previous Number One reason to Number Two (but still saying it was a valid reason), I was in effect saying that, even if an entity’s priorities for cyber security would – if CIP were suddenly made non-mandatory – align exactly with the activities mandated by CIP v5 and v6 (of course the chance of this happening is zero), they would still be wasting a lot of effort on activities that had no effect at all on security.
Last week, I spoke in front of the CIP users’ group for one of the NERC Regional Entities about the problems with CIP and my tentative proposal to fix them.[ii] There were a lot of really good questions, and we had a great discussion, in which I probably learned a lot more than my audience did.[iii]
During this discussion, someone expressed skepticism that any CIP compliance paperwork has zero security value; after all, documenting what you do is a good practice – and often required for internal audit purposes – in any activity related to computer systems and networks. At first I replied with my standard answer, described above: there is no way, simply by looking at a paperwork task, for an outside observer to determine whether it did or didn’t contribute to security; only longtime compliance or cyber security staff members at the entity itself could make this determination – and even then only based on gut feel. So this determination will always be inherently unverifiable.
But as soon as I said this, I felt quite uneasy. This was perhaps because, during the week I made this presentation, there was a raging debate in the national press about whether the idea of “alternative facts” was a valid one, or just another way of saying “lies”. And here I was going one step further by asserting that certain facts were true but could never be verified. If the person who coined the phrase “alternative facts” had instead asserted my concept of “inherently unverifiable” facts, she might not have received all the flak she did – if anything, the members of the press would have started looking through the literature on epistemology to see whether “inherently unverifiable facts” might be a valid concept. Can there be a fact that could never be verified? It’s an interesting question, and it is actually a big debate in physics today, where proponents of string theory, and of the idea that there are an infinite number of universes, readily admit that these ideas can never be definitively proven true or false.
I was really not comfortable continuing to assert that there is no way to identify paperwork that is required for compliance but doesn’t contribute at all to security. But then I realized there is no reason to continue to make this assertion, since the result is virtually the same – whether these activities don’t contribute at all to security or whether they do contribute but only minimally. The result in both cases will be that a lot of the paperwork required by CIP contributes very little to security. So let me stipulate from here on out that every activity required by CIP contributes in some way to security, although often in a very small way.
Once I admitted that, I realized my Number Two reason why CIP is unsustainable had now gone away and been subsumed into Reason Number One, without requiring that I change how I articulate that reason at all. As I said above (and in my last post), the Number One problem with the CIP requirements is that they cause entities to use their limited cyber security budgets to carry out security mitigation activities that would otherwise have a very low priority – if the entity were free to do what it thought was best.[iv] Since no NERC entity – at least none that I know of – has an unlimited cyber security budget, this results in the most important cyber threats (based on the current threat landscape in North America[v]) going either unmitigated or inadequately mitigated.
To summarize this post, I no longer believe that there are activities – which I’ve previously called “pure compliance paperwork” – that are required by the CIP standards but contribute nothing to cyber security. Every activity required by CIP contributes in some way to security, but a lot of these activities make a very small contribution. I am making a proposal that would rewrite CIP to require that NERC entities prioritize the activities that contribute the most to BES[vi] cyber security, without prescriptively saying that certain activities are required, no matter how little they advance the goal of securing the bulk electric system.
The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte Advisory.
[i] I based this statement on informal discussions I’ve had with various NERC entities, not on any sort of formal poll.
[ii] I prefaced my remarks by pointing out that I am working, with two co-authors, on a book that will lay out this proposal, among other things. We expect to have it out later this year.
[iii] I will probably have another post inspired by this discussion soon.
[iv] You may cringe when you hear me say that the CIP standards shouldn’t unnaturally constrain NERC entities from allocating their limited cyber security budgets as they “think best”. You may point out that a) a lot of, or even most, organizations still believe that what is best as far as cyber security goes is to spend as little on it as possible; and b) even if an entity realizes it must spend a substantial amount on cyber, it won’t necessarily spend it in an optimal way, due perhaps to a lack of understanding of cyber security principles and practices.
Both of these objections can be answered by pointing out that my “proposal” for rewriting CIP will require the entity (or a third party) to assess its security posture with respect to various security domains (software vulnerability mitigation, anti-phishing, change control, etc.) and develop a plan for mitigating the most important deficiencies identified. This plan will have to be reviewed by a competent outside party, which might be a consulting firm or the entity’s NERC Region; this process is similar to the one now mandated by CIP-014. I am currently leaning toward the idea that the Regions themselves should do this review. I realize they don’t currently have the manpower to review all of these plans. That will hopefully change, but even then the Regions will probably still have to hire outside resources, at least to address temporary overloads. The alternative would be for the entities to engage their own consultants for this task, but that creates the potential for some consulting firms to go easy on the entity in exchange for being engaged to do the not insubstantial job of implementing the mitigation plan. In fact, this is the biggest problem I see with the PCI standards for payment card security: those standards are audited by assessors paid by the retailer being audited, and the assessors are then allowed to be engaged to mitigate the problems they identify. They have lots of incentive to downplay the problems in the official report, since they know doing so will make the retailer look good. So I still think it’s better for the Region to do the review. While having the Regions do it will probably require an increase in the assessments paid by each entity, the entities will hopefully see that this simply replaces an amount they would otherwise have to spend themselves.
Having the Region review an entity’s assessment and mitigation plan will address both of the objections shown above. If the entity happens to think that its cyber security posture is just great and there’s no need to spend much more money on cyber, or if the entity’s mitigation plan would spend too much on unimportant tasks and too little on important ones, the Region will be able to order the entity to revise all or parts of the plan. And the entity will be regularly audited (perhaps even once a year) on how well it is carrying out that plan.
[v] My proposal for rewriting CIP – and specifically the one I and my co-authors will outline in our upcoming book – will require that the team that drafts the new standards identify the primary cyber threats to the North American bulk electric system. The entity will be required to address each of those threats in some way, either to mitigate deficiencies in their defenses that are identified in an assessment, or to document why a particular threat doesn’t apply to it. However, since the threat landscape changes very rapidly (e.g., phishing came out of nowhere about five years ago to become probably the most serious cyber threat today, and the origin of more than half of successful cyber attacks in recent years), there needs to be some way of continually updating this threat list. I am proposing that there be a NERC committee which meets at least quarterly to a) assess new threats and determine whether or not they should be added to the list; b) determine whether any threats currently on the list should be removed; and c) write and update guidance on best practices for mitigating these threats.
In addition, since some threats only apply to particular types of entities or particular regions of the country, there will always be threats an entity faces that aren’t included in the “NERC-wide” list just described. It will be up to the entity to make sure these particular threats are also addressed, and it will be up to the NERC Region to verify that the entity’s mitigation plan adequately addresses them.
[vi] Note that, in my proposal, the CIP standards will still be focused entirely on BES security. Every NERC entity has other cyber security goals: protecting financial data, protecting customer information, etc. These also need to be addressed, but CIP has no bearing on these. In other words, under my proposal the entity will need two cyber security budgets: the budget to address BES threats and the budget to address all other cyber threats.