Wednesday, March 29, 2017

The News from WECC, Part I: When a Patch isn’t called a Patch

I’m attending the semi-annual WECC CIPUG (CIP Users Group) meeting in Denver this week – just me and 350 of my closest friends (attendance is down from some other CIPUG meetings, at least one of which reached 500 attendees). As has been the case previously at these meetings, I have come away with some interesting observations, so consider yourself warned. This is Part I of what may be several posts.

In today’s meeting, Eric Weston, one of the WECC CIP auditors, did a good presentation about various pitfalls in CIP-007. I agreed with everything he said, except what he said about CIP-007 R2, Patch Management. The point of this part of his discussion (and you can find his slides, along with all of the other presentations, by going to this page and looking for the slides with his name on them) was that there can be some security patches that aren’t actually called that – they’re referred to as firmware upgrades, driver upgrades, etc.

OK, so far so good – I don’t have any problem with that.[i] However, what I was confused about was that he quoted from the CIP-007 Guidelines and Technical Basis (slide 6) where it says “The intent (my italics) of Requirement R2 is to require entities to know, track, and mitigate the known software vulnerabilities associated with their BES Cyber Assets.” Then he followed that up on slide 8 with the statement that

“Entities should include in their patch management program
• Hardware Drivers
• Device Firmware (that can be updated by end user)
• OS updates for devices that provide revisions to the OS to address vulnerabilities (Cisco IOS, Ruggedcom ROS, etc.)”

This raised a red flag for me, for the following reason: CIP-007 R2 itself doesn’t state an objective (or intent, if you will) to be attained, even though the guidance (which isn’t part of the requirement) does. And there is a good reason why R2 doesn’t state an objective: it is a prescriptive requirement. In fact, as the two of you who have been reading my posts closely for the past few months probably realize, CIP-007 R2 is my poster child for a prescriptive requirement (you should really hiss at this point, since the villain has just made his appearance onstage).

Prescriptive requirements don’t state an objective and let you figure out how to attain it, as non-prescriptive (or objective-based) requirements do. Instead, a prescriptive requirement sets out a specified set of steps that must be taken by all NERC entities that are subject to the requirement. While the steps are obviously meant to lead to attaining a particular objective (in this case, mitigating known software vulnerabilities), compliance with the requirement doesn’t entail attaining the objective. Rather, to comply with the requirement, the entity needs to follow the steps, period. So strictly speaking, the objective of a prescriptive requirement is just to follow the steps listed in the requirement. If you haven’t done that, you haven’t complied with the requirement, even though you may have attained the final objective through some other means.

To illustrate what I just said, suppose you had decided that you could mitigate software vulnerabilities in your BCS, PACS, etc. simply by scanning them every month and letting the scanner tell you what vulnerabilities it found; then you would find the patches that addressed those vulnerabilities. I'll stipulate for now that this is as good a way to attain the objective of software vulnerability mitigation as the set of steps prescribed in CIP-007 R2, and perhaps even a better one. But don't try to tell this to your auditor. You haven't complied with R2, because the objective of R2 is simply to follow the steps listed in the different requirement parts, nothing more and nothing less.

Contrast this with CIP-007 R3, Malicious Code Prevention. This is a truly non-prescriptive, objective-based requirement. Part 3.1 simply reads “Deploy method(s) to deter, detect, or prevent malicious code.” This is a true objective, and it clearly leaves the choice of methods up to the entity. Part 3.2 reads “Mitigate the threat of detected malicious code.” Again, this is the objective; the methods are up to you.

However, the main reason this raised a red flag for me is that I was worried Eric was implying that, because the “objective” of R2 is software vulnerability mitigation, and because there are indisputably some software updates released by vendors that mitigate vulnerabilities but aren’t specifically called security patches, the entity is required to search through every update from its vendors to identify those that are really security patches in disguise.

If R2 were an objectives-based requirement like R3 is, you could certainly argue that one way to achieve that objective might be to make a point of applying not only vendor-identified security patches, but upgrades that contained security patches but weren’t called that. But R2 isn’t objectives-based. It specifically says that entities must look for every “cyber security patch” available from a vendor, and they must do this every 35 days. It would be a big stretch to say that this includes every upgrade that mitigates software vulnerabilities, whether or not it is called a security patch.
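To make the prescriptive nature of this concrete, here is a minimal sketch (not anything from the standard itself; the source names and data structure are entirely made up for illustration) of what the 35-day evaluation cycle looks like when reduced to its mechanical steps: for each identified patch source, the next evaluation must happen within 35 days of the last one, regardless of whether any vulnerability was actually mitigated.

```python
from datetime import date, timedelta

# The 35-day evaluation interval prescribed by CIP-007 R2.
EVALUATION_WINDOW = timedelta(days=35)

def next_due(last_evaluated: date) -> date:
    """Latest date by which a patch source must be checked again."""
    return last_evaluated + EVALUATION_WINDOW

def overdue_sources(sources: dict, today: date) -> list:
    """Return patch sources whose 35-day window has already lapsed."""
    return sorted(name for name, last_eval in sources.items()
                  if today > next_due(last_eval))

# Hypothetical patch sources and their last evaluation dates.
sources = {
    "vendor_a_security_bulletins": date(2017, 2, 1),
    "vendor_b_firmware_page": date(2017, 3, 10),
}
print(overdue_sources(sources, date(2017, 3, 29)))
# -> ['vendor_a_security_bulletins']
```

The point of the sketch is that compliance is a pure calendar exercise over the listed steps: a source checked on February 1 is out of compliance by March 29 even if no applicable patch ever existed, while skipping an unlabeled security fix from a source you did check on time is not, by itself, a violation of the prescriptive text.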

I brought this question up to Eric at the end of his presentation. He assured me that he wasn’t saying entities are required to find all security patches, whether or not they’re called that; rather, he said that identifying security fixes that aren’t labeled as security patches (when possible) should be part of the entity’s patch management program. He further said that a Potential Non-Compliance (which replaces "Potential Violation") finding might be assessed if an entity hadn’t done this.

I'll have more to say on this later.

As it turns out, yesterday I had started working on a more comprehensive post on what I find to be an unfortunate (but understandable) suspicion that many NERC entities have of non-prescriptive requirements. The exchange I’ve just described fits in very well with that post, as you will discover – if you’re not already sick of the subject, and of me for constantly harping on it – when I put out that post, hopefully next week.

Note 5/26/17: I just revised this post, as part of writing a new post on security patches. When I wrote this, I didn't understand exactly what Potential Non-Compliance meant, so a few sentences I wrote toward the end didn't make sense.

The views and opinions expressed here are my own and don’t necessarily represent the views or opinions of Deloitte.

[i] You do need to keep in mind the difference between an actual security patch and a functionality upgrade that provides better security capabilities. An example of the latter would be an upgrade that extended the acceptable password length on a device. While this would of course be a great improvement for securing the device, it wouldn’t be a security patch, which is intended to mitigate a software vulnerability. Therefore, functionality upgrades aren't in scope for CIP-007 R2. For a good discussion of this topic, see this post.


  1. A related point is that a strict reading of the Guidelines and Technical Basis ("The intent of Requirement R2 is to require entities to know, track, and mitigate the known software vulnerabilities associated with their BES Cyber Assets.") would also cover cases where there is no patch to apply. There may be vulnerabilities, such as Heartbleed, that affect BCAs for which there is no patch and never will be one. Do entities have to mitigate newly discovered but unpatchable vulnerabilities?

  2. I agree with your rhetorical question, Anonymous. If the objective of R2 were actually to mitigate software vulnerabilities, then they should really be addressing unpatched as well as patched vulnerabilities! Very good point.

  3. It turns out I heard wrong when I reported in this post that there was a new compliance designation called Technical Non-Compliance. The new designation is Potential Non-Compliance, and it is simply a replacement for the previous Potential Violation.

    I think part of the reason for renaming this is to emphasize that there can be uncertainty not only about what an entity did (which is why the Potential Violation was called that), but also about what the requirement itself can mean (so the entity may have acted in good faith but not understood an ambiguous requirement).