Wednesday, January 9, 2019

Lew Folkerth on CIP-013: Part 1

(Note on Feb. 16, 2019: I have substantially rewritten part of this post, since I no longer found it to be a good description of the role of risk in prescriptive vs. risk-based requirements. In the process, I've made this post - already my longest, I believe - even longer! What can I say? As Lew's January article points out - which I'll discuss in a few weeks, but which you can read now by downloading RF's January newsletter and going to page 10 - risk management is the future of all the CIP standards, not just CIP-013. I believe, and Lew seems to as well, that it is inevitable that all of the CIP standards will be risk-based in the not-too-distant future, like maybe three years. Every CIP compliance professional is going to have to become familiar with risk management, not just the ones working on CIP-013 now.)

Two weeks ago, when I downloaded the most recent RF newsletter and went to Lew Folkerth’s column (called The Lighthouse), my heart started to beat faster when I saw that his topic this time is supply chain security – and really CIP-013. I’ve been quite disappointed with literally everything else I’ve read or heard so far from NERC about CIP-013, since none of it has directly addressed the fundamental difference between that standard and literally all other NERC standards (including the CIP ones): namely that CIP-013 is a standard for risk management first and supply chain security second (more specifically, the risks being managed by the standard are supply chain security risks). I was very pleased to see that Lew not only gets the point about CIP-013, but has a deep understanding that allows him to communicate what he knows about the standard to the rest of the NERC community.

I was quite heartened when I read Lew’s first sentence about the standard: “CIP-013-1 is the first CIP Standard that requires you to manage risk.” Yes! And it got better after that, since he not only described the standard very well, but he laid out a good (although too concise) roadmap for complying with it, and made some very good points about how it will be audited (Lew was a longtime CIP auditor until he moved into an Entity Development role at RF, where he still focuses on CIP). On the other hand, I do disagree with one thing he says, which I’ll discuss below. I’m dividing this post into three parts. This first part discusses what Lew says about CIP-013 itself. The second will discuss what Lew says (and what I say) about how to comply with CIP-013. The third post will discuss how CIP-013 will be audited, including a subsequent email discussion I had with Lew on that topic.

Lew makes three main points about CIP-013:

I. Plan-based
His first point is “CIP-013-1 is a plan-based Standard…You are required to develop (R1), implement (R2), and maintain (R3) a plan to manage supply chain cyber security risk. You should already be familiar with the needs of plan-based Standards, as many of the existing CIP Standards are also plan-based.” I don’t agree that any existing CIP standards (other than CIP-013 itself) are plan-based[i], although several requirements are. Specifically, CIP-003 R2, CIP-010 R4 and CIP-011 R1 all require that the entity have a plan or program to achieve a particular objective. I would also argue that CIP-007 R3 is plan-based, even though it doesn’t actually call for a plan. This is because I don’t see that there’s any way to comply with the requirement without having some sort of plan, although perhaps a fairly sketchy one.[ii]

What are the objectives of these other plan-based requirements? They are all stated differently, but in my opinion they are all about risk management, even though CIP-013 R1 is the first requirement to state that outright. And if you think about it (or even if you don’t), when you’re dealing with cyber security, risk management is simply the only way to go. Let me explain by contrasting the CIP standards with the NERC Operations and Planning (O&P) standards, which deal with technical matters required to keep the grid running reliably.

In the O&P standards, the whole objective is to substantially mitigate particular risks - specifically, risks that can lead to a cascading outage of the Bulk Electric System – not manage them. These standards are prescriptive by necessity: For example, if a utility doesn’t properly trim trees under its transmission lines, there is a real risk of a widespread, cascading outage (which is what happened in 2003 with the Northeast blackout, although there were other causes as well). Given the serious impact of a cascading outage, there needs to be a prescriptive requirement (in this case, FAC-003) telling transmission utilities exactly what needs to be done to prevent this from happening, and they need to follow it. I totally agree that there’s no alternative to having a very prescriptive requirement for the utility to regularly trim all of their trees. It has to be all of the trees under its lines, not just every other one or something like that; a single overgrown tree can cause a line to short out (more specifically, cause an overcurrent that will lead to the circuit breaker opening the line).

The prescriptive requirements in CIP take this same approach: They are designed with the belief that, if you take certain steps like those required for CIP-007 R2, you will substantially mitigate a certain area of risk - which, in the case of that requirement, is the risk posed by unpatched software vulnerabilities. You need to follow those steps (which include rigid timelines for various tasks, as any CIP compliance professional knows only too well), and if you don’t, there will almost inevitably be severe consequences. But, if you take those steps religiously, the risk of unpatched software vulnerabilities causing a BES outage will be close to eliminated. And when we’re talking about the risk of a cascading BES outage like happened in 2003, it seems there is no choice but to eliminate risk as much as possible, not simply lower it.

But are the severe consequences really inevitable in this case? If one utility doesn’t patch five systems in their Control Center for two months, will there inevitably be some sort of BES outage (let alone a cascading one)? It’s far from certain. How about if all the utilities in one region of the country don’t patch any of their Control Center servers for one year? Will there inevitably be an outage then? Again, the answer is no, but obviously the risk of a BES outage – and even a cascading one – is much larger in this case than in the first one. The risk (which I calculate as equal to likelihood plus impact) is larger for two reasons: 1) The likelihood of compromise is higher due to the fact that the interval between patches is much longer; and 2) The potential impact on the BES is much more serious due to the fact that a number of utilities in one region of the country are all not patching. An attack that would compromise one control center would be very likely to compromise others as well, since they’re presumably subject to the same unpatched vulnerabilities.

I don’t think it will be a great surprise to anyone that the best way for utilities to lower the serious risk in the second scenario would be for them all to patch much more regularly, say every 35 days as required by CIP-007 R2. This will greatly lower both likelihood and impact, although there is definitely a cost to doing this! But since we have greatly lowered risk by getting all utilities in the area to patch every 35 days, why stop there? Could we lower risk even more by having them patch every ten days? Absolutely. Then why not go further? Why not have them patch every day, or even every hour?

Of course, at this point (if not before), you start thinking about the cost of lowering the patching interval. If a utility is going to patch all servers in their control centers every day, they are probably going to have to employ a fairly large team that does nothing but patch servers day in and day out. This is going to cost a lot of money, especially when you consider that the team members will quickly be so bored with the job that they’ll probably jump ship, forcing the utility to constantly find and train replacements.

So where do you draw the line here? Of course, CIP-007 R2 draws it at 35 days, meaning every 35 days there needs to be a new cycle of determining patch availability (for every piece of software installed in the ESP), determining applicability, and then either applying the patch or implementing a mitigation plan. That might be perfect for, say, the utility’s EMS, whose loss would have a very high impact on the BES – in fact, I know of some utilities that argue that 35 days is way too long for the EMS, and the interval should be 15 days. But there are other systems, say in Medium impact substations, whose loss doesn’t have the same level of impact. For them, could the patching cycle be lengthened to 45 or 60 days without much increase in overall risk? Quite possibly.

Here’s another consideration: How often does the software vendor release new patches? For some devices that are Medium impact BES Cyber Systems, the answer might be “close to never”. Is it really necessary to contact those vendors every 35 days, given that this takes somebody’s time that might be spent doing something else that does more to reduce BES risk? In this case, the interval for checking patch availability might be lengthened to 90 days without increasing the probability of compromise – and therefore risk – very much.
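To make the idea concrete, here is a minimal sketch of what a risk-based check interval might look like in code. It is purely illustrative: the function name, the decision rules and the specific intervals are my own inventions based on the examples above, not anything the current wording of CIP-007 R2 permits.

```python
# Hypothetical sketch: choose a patch-availability check interval based on
# the system's BES impact and how often the vendor actually releases patches.
# The intervals are the examples discussed above, not anything permitted by
# the current prescriptive wording of CIP-007 R2.
def check_interval_days(impact: str, vendor_releases_per_year: int) -> int:
    if impact == "high":                  # e.g. the EMS in a Control Center
        return 15                         # some argue 35 days is too long here
    if vendor_releases_per_year == 0:     # vendor releases patches "close to never"
        return 90                         # free the time for higher-value mitigation
    if impact == "medium":                # e.g. a Medium impact substation device
        return 60
    return 35                             # the current CIP-007 R2 cycle
```

The point of the sketch is simply that a risk-based requirement could let the check interval vary with impact and vendor behavior, instead of mandating one interval for every system.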

However, as we all know, a prescriptive requirement like CIP-007 R2 doesn’t allow for consideration of risk at all. Every NERC entity with Medium or High impact BES Cyber Systems needs to follow exactly the same patch management process for every system, whether it’s the EMS that controls power in a major metropolitan area or a relay on a less impactful 138kV transmission line; and they need to check availability every month for every software package installed in the ESP, regardless of whether the vendor releases patches monthly or hasn’t released one at all in 20 years. The extra funds required to comply with a requirement like this (and CIP-007 R2 is without doubt the most resource-intensive of all the CIP requirements, although I hear that CIP-010 R1 gives it a pretty good run for its money) have to come from somewhere, and since every entity I know has a fairly fixed budget for cyber security and CIP compliance, they will have to come from mitigation of other cyber risks – such as phishing, which isn’t addressed at all in CIP now. (Of course, I’m sure all utilities are devoting at least some resources to anti-phishing training, which is good. However, the most recent Wall Street Journal article on the Russian supply chain attacks makes it pretty clear that some utilities are being compromised by phishing attacks, although whether that compromise has reached their control networks is still unknown.)

A requirement like CIP-013 R1 is different. It requires the entity to develop a plan to mitigate risks in one area – in this case, supply chain security. It is up to the entity to decide how they’ll allocate their funds among different supply chain risks. And the best way to do that is to put the most resources into mitigating the biggest risks and the least resources – or none at all – into mitigating the smallest risks. That’s why I have always believed that the first step in CIP-013 compliance - and Lew confirms this in his article - is to identify the important supply chain threats to BES Cyber Systems that your organization faces, then rank them by their degree of risk to the BES. The amount of resources you allocate toward mitigating each risk should be directly proportional to its degree (and the lower risks won’t receive any mitigation resources). This way, your limited funds can achieve the greatest results, because they will mitigate the greatest possible amount of overall risk.
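As a toy illustration of that allocation principle, a fixed mitigation budget might be spread across threats in proportion to their risk scores, with the lowest-scoring threats receiving nothing. The function, the threat labels, the cutoff and the budget figure below are all invented for the example; nothing in CIP-013 prescribes any of them.

```python
# Toy illustration only: allocate a fixed mitigation budget in proportion to
# each threat's risk score; threats scoring below the cutoff get no resources.
# The scores, cutoff and budget are invented for the example.
def allocate(budget: float, scores: dict, cutoff: int = 3) -> dict:
    eligible = {t: s for t, s in scores.items() if s >= cutoff}
    total = sum(eligible.values())
    return {t: budget * s / total for t, s in eligible.items()}

shares = allocate(100_000, {"threat A": 6, "threat B": 4, "threat C": 2})
# threat C falls below the cutoff, so it receives no mitigation resources
```

The design choice this illustrates is the one in the paragraph above: resources go to the biggest risks in proportion to their size, and the smallest risks get none at all.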

This is why CIP-013 doesn’t say the entity must take certain steps to mitigate risk X and other steps to mitigate risk Y, ignoring all of the other risks. Instead, CIP-013 does exactly what I think all cyber security standards should do: require the entity to follow the same process to mitigate cyber risks that they would follow if they weren’t subject to mandatory cyber security requirements. But entities are mandated to follow this process. It’s not a “framework” that they can follow or not, with no serious consequences if they don’t.

The point about “mandated” is key: We all know that utility management makes much more money available to mitigate cyber risks because NERC CIP is in place and violations carry hefty penalties (and the non-monetary consequences are at least as bad as the actual penalties) than it would if CIP weren’t in the picture. The problem is to rewrite the CIP standards so that they don’t distort the process of risk identification, prioritization and mitigation that an entity would follow in their absence – yet still keep the money spigot open because they’re mandatory. CIP-013 comes close to achieving this goal in the domain of supply chain security risk management. We need similar standards for all the other domains of BES cyber security.

Fortunately, the CIP standards are gradually moving toward eliminating prescriptive requirements and implementing plan-based (i.e. risk-based) ones. In fact, the two major requirements drafted (or revised) and approved since CIP v5 (CIP-003 R2 and CIP-010 R4) are both plan-based, and the three new standards drafted since v5 (CIP-014, CIP-013 and CIP-012) are also all plan-based; moreover, it’s almost impossible to imagine a new prescriptive CIP requirement being drafted. But prescriptive requirements like CIP-007 R2, CIP-010 R1 and CIP-007 R1 remain in place, where they continue to require much more than their “fair share” of mitigation resources.

II. Objective-based
Lew’s second point is “CIP-013-1 is an objective-based Standard.” I agree with this, too, but I think it’s redundant. If a requirement is plan-based, it’s ipso facto objective-based, since the purpose of any plan is to achieve its objective. I pointed this out to Lew in an email, and he replied that the redundancy was intended. He continues “I’m trying to lay a strong foundation for future discussion. A plan without an objective isn’t worth the media it’s recorded on. But some entities have had difficulty grasping this idea and need to have it reinforced.” Consider it reinforced!  

Lew goes on to identify the objectives of CIP-013, and this is where I disagree with him (although FERC deserves partial blame for this, as you’ll see below). To identify the objectives, he goes to the second paragraph of FERC Order 850, which approved CIP-013 in October. Here, FERC states that the four objectives of CIP-013 are:

  1. Software integrity and authenticity;
  2. Vendor remote access protections;
  3. Information system planning; and
  4. Vendor risk management and procurement controls.
And where did FERC determine that these are the objectives of CIP-013? If you pore over the standard, I can promise you’ll never find these stated together anywhere, although you’ll find them individually in different places – along with other objectives that don’t seem to have made it into the Final Four, for some reason.

But FERC didn’t make these objectives up; they came from an authoritative source – FERC itself! Specifically, they came from FERC’s Order 829 of June 2016, which FERC issued when they ordered NERC to develop a supply chain security standard in the first place. So it seems FERC, when looking for the purpose of CIP-013, decided that the people who drafted the standard weren’t to be trusted to understand its real purpose, and the best source of information on this topic is…FERC (although, since only one of the five Commissioners who approved Order 829 is still on the Commission, it’s very hard to say that FERC 2018 is the same as FERC 2016).

This would all just be an amusing piece of trivia, if it weren’t for two things. First, FERC’s four objectives are very specific, and are far from being the only objectives found in CIP-013. For example, the first two objectives are found in R1.2, but there are four more items in R1.2 that FERC didn’t include in their list, for some reason. I see no reason why all six of the items in R1.2 shouldn’t be included in a list of objectives of CIP-013, although even that would hardly constitute a complete inventory of CIP-013’s objectives.

Since FERC didn’t do a good job of it, how can we summarize the objectives of CIP-013? It’s not hard at all. We just need to go to the statement of purpose in Section 3 at the beginning of the standard: “To mitigate cyber security risks to the reliable operation of the Bulk Electric System (BES) by implementing security controls for supply chain risk management of BES Cyber Systems.” In my opinion, this is a close-to-perfect summary of what CIP-013 is intended to do.

But Lew isn’t just quoting FERC for people’s edification; he’s saying that the objectives FERC lists should be the objectives that your supply chain cyber security risk management plan aims to achieve. Specifically, he says “Your actions in developing and implementing your plan should be directed toward achieving these four objectives. You should be prepared to demonstrate to an audit team that you meet each of these objectives. These objectives are not explicitly referenced in the Standard language. However, as outlined in the FERC Order, the achievement of these objectives is the reason the Standard was written.”

You’ll notice that Lew states that a NERC entity will need to demonstrate to the auditors that their plan achieves FERC’s four objectives. Now, even though Lew isn’t an auditor any more, I know that his words are taken very seriously by the auditors in all of the NERC Regions. This means most, if not all, auditors will pay attention to this sentence, and therefore you can expect many or even most auditors to ask you to show them that your plan meets these four objectives.

Since I obviously don’t think that FERC’s four objectives are a completely accurate summary of the purpose of CIP-013, am I now saying that Lew has provided misleading advice to NERC entities, so that they’ll end up addressing meaningless or even harmful objectives in their plans? No – there’s no harm in telling NERC entities that their auditors will want to determine whether their CIP-013 plan meets each of FERC’s four objectives, since, as I’ve said, those objectives are all found somewhere in the standard anyway. The problem is that the real objective of CIP-013 is what’s found in the Purpose statement in Section 3; that statement encompasses FERC’s four objectives, and a lot more. This needs to be brought to people’s attention, since neither FERC nor NERC has done so yet.

Why doesn’t Lew instead say that auditors should make sure the entity’s CIP-013 plan meets the stated objective (purpose) of the standard? This could still be followed by FERC’s four things – in order to provide more detail. I think that would work, as long as it’s made clear that FERC’s four things are in no way a summary of everything that needs to be addressed in the plan. The Purpose statement provides that summary. But is that enough detail to make the requirement auditable? That’s a question I’ll discuss below and in the third post in this series.

III. Lew’s (implicit) methodology for CIP-013 compliance
Lew’s third point is “CIP-013-1 is a risk-based Standard”. He explains what that means, and in the process specifies (very concisely) a complete compliance methodology, when he writes:

You are not expected to address all areas of supply chain cyber security. You have the freedom, and the responsibility, to address those areas that pose the greatest risk to your organization and to your high and medium impact BES Cyber Systems.

You will need to be able to show an audit team that you have identified possible supply chain risks to your high and medium impact BES Cyber Systems, assessed those risks, and put processes and controls in place to address those risks that pose the highest risk to the BES.

This passage actually describes the whole process of developing your supply chain cyber security risk management plan to comply with CIP-013 R1.1, although the description is very densely packed. Since I’m a Certified Lew Unpacker (CLU), I will now unpack[iii] it for you (although my unpacked version is still very high-level):

Lew’s CIP-013 compliance methodology, unpacked for the first time!
A. The first step in developing your plan (in Lew’s implicit methodology) is that you need to consider “all areas” of supply chain cyber security. I interpret that to mean you should in principle consider every supply chain cyber threat as you develop your plan. Of course, it would be impossible to do this – there are probably an almost infinite number of threats, especially if you want to get down to a lot of detail on threat actors, means used, etc. Could you simplify that by just listing the most important high-level supply chain cyber threats likely to impact the electric power industry? Sure you could, but do you have that list?

And here’s the rub: It would be great if there were a list like that, and it probably wouldn’t be too hard for a group of industry experts to get together and compile it (I’m thinking it probably wouldn’t have many more than ten items). Even better: The CIP-013 SDT was a group of industry experts. Why didn’t they put together a list like that and include it in CIP-013 R1? As it is, there is no list of threats (or “risks”, the word the requirement uses; I have my reasons for preferring “threats”, which I’ll describe in a moment) in the requirement, and every NERC entity is on its own to decide which supply chain cyber security threats it faces are the most important. This inevitably means they’ll all start with different lists, some big and some small.

There’s a good reason why the SDT didn’t include a list in the requirement (and, even though I attended a few of the SDT meetings, I’ll admit this omission never even occurred to me): FERC only gave them one year to a) draft the standard, b) get it approved by the NERC ballot body (it took four ballots to do that, each with a comment period), c) have the NERC Board approve it, and d) submit it to FERC[iv] for their approval (and FERC’s approval took 13 months, longer than they gave NERC to develop and approve the standard in the first place). If the SDT had taken the time to have a debate over what should be on the list of risks, or even whether there should be a list at all in the standard, they would never have made their deadline.

This is a shame, though. To understand why, consider one plan-based requirement that does include a list of the risks that need to be considered: CIP-010 R4. Looking at this requirement can give you a good idea of the benefits of having the list of risks in the requirement.

CIP-010-2 R4 requires the entity to implement (and, implicitly, to develop in the first place) “one or more documented plan(s) for Transient Cyber Assets and Removable Media that include the sections in Attachment 1”.  When you go to Attachment 1, you find that it starts with the words “Responsible Entities shall include each of the sections provided below in their plan(s) for Transient Cyber Assets and Removable Media as required under Requirement R4.” Each of the sections describes an area of risk to include in the plan. These are stated as mitigations that need to be considered, but you can work back from each mitigation to identify the risk it mitigates very easily.

For example, Section 1.3 reads “Use one or a combination of the following methods to achieve the objective of mitigating the risk of vulnerabilities posed by unpatched software on the Transient Cyber Asset: Security patching, including manual or managed updates;  Live operating system and software executable only from read-only media; System hardening; or Other method(s) to mitigate software vulnerabilities.” (my emphasis)

The risk targeted by all of these mitigations can be loosely described as the risk of malware spreading in your ESP due to unpatched software on a TCA. What are you supposed to do about it? Note the words I’ve italicized. They don’t say you need to “consider” doing something, they say you need to do it. And since Attachment 1 is part of CIP-010 R4, this means you need to insert the word “must” before this whole passage (in fact, that word needs to be inserted at the beginning of each of the other sections in Attachment 1 as well). You must achieve the objective of mitigating this particular type of risk.

But if I’m saying CIP-010 R4 is a plan-based (as well as objective-based and risk-based) requirement, how is that compatible with the (implicit) use of the word “must”, at the beginning of this section as well as all the other sections? Does this turn R4 into a prescriptive requirement?

I’m glad you asked that question. Even though you have to address the risk in your plan, you have complete flexibility in how you mitigate that risk. The requirement still isn’t prescriptive, because it doesn’t prescribe any particular actions. The same approach applies to each of the other sections of Attachment 1: The risk underlying each one needs to be addressed in the plan, while the entity can mitigate the risk in whatever way it thinks is best (although it must be an effective mitigation. Lew has previously addressed what that means).

Obviously, somebody who is complying with CIP-010 R4 will know exactly what risks to include in their plan for Transient Cyber Assets and Removable Media, due to Attachment 1 (and remember, since Attachment 1 is called out by R4 itself, it is actually part of the requirement – not just guidance). They can add some risks if they want (my guess is very few will do that), but at least they have a comprehensive list to start with.

And more importantly, the auditors have something to hang their hat on when they come by to audit. They can ask the entity to show them that they’ve addressed each of the items listed in Attachment 1, then they can judge them by how well they’ve addressed each one (i.e. how effective the mitigations described in their plan are likely to be, and how effective they’ve actually turned out to be – since most audits will happen after implementation of the plan, when there’s a year or two of data to consider). This is the main reason why I now realize it’s much better for a plan-based requirement to have a list of risks to address in the requirement itself, although that’s not currently in the cards for CIP-013 (it would be nice if NERC added this to the new SAR that will have to be developed to address the two or three changes in CIP-013 that FERC mandated in Order 850, but for some reason they don’t take orders from me at NERC).

You can see the difference this makes – i.e. the difference it makes to have the list of risks that must be addressed in the plan in the requirement itself – by comparing the RSAW[v] sections for CIP-010 R4 and CIP-013 R1. The former reproduces all of the detail in Attachment 1 – making the RSAW a great guide both for auditors and for the entity itself, as it prepares its supply chain cyber security risk management plan for CIP-013. The R4 Compliance Assessment Approach section goes on for more than a page.

And how about the CIP-013 R1 RSAW? Here’s the entirety of what it says for the R1 Compliance Approach: “Verify the Responsible Entity has developed one or more documented supply chain cyber security risk management plans that collectively address the controls specified in Part 1.1 and Part 1.2.” In other words, make sure the entity has complied with the requirement, period. Not too helpful if you’re drawing up your plan, but what more can be said? The RSAW can only point to what is required by the wording of R1.1, and since there is no Attachment 1 to give the entity a comprehensive idea of what they need to address in their plan, all the RSAW can do is point the reader to the wording of the requirement, which only says “The plan shall include one or more process(es) used in planning for the procurement of BES Cyber Systems to identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services resulting from: (i) procuring and installing vendor equipment and software; and (ii) transitions from one vendor(s) to another vendor(s).”

Not a lot to go on here, although I guess the RSAW could have just turned this into a question: “Does the plan include one or more processes used in planning for the procurement of BES Cyber Systems to identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services resulting from: (i) procuring and installing vendor equipment and software; and (ii) transitions from one vendor(s) to another vendor(s)?” That would have at least pointed to the need to address these two items - although I break them into three:

  1. Procuring vendor equipment and software;
  2. Installing vendor equipment and software; and
  3. Transitions between vendors.
I think it would be good if the RSAW specifically listed these items (whether it’s two or three doesn’t matter to me) as being required in the plan, since they’re definitely in the requirement.

Even though it’s too late to have a list of risks to address in CIP-013-1 R1 itself, it’s not too late for some group or groups – like the CIPC or NATF, or perhaps the trade associations, for each of their memberships – to develop a comprehensive high-level supply chain security risk list for NERC entities complying with CIP-013 (as well as any utilities or IPPs who don’t have to comply, but still want to).

While the auditors couldn’t give a Potential Non-Compliance finding to an entity that didn’t include all of the risks on the list in their plan, they would be able to point out an Area of Concern – and frankly, that’s probably better anyway. I don’t think there should be a lot of violations identified for CIP-013. Given that R1.1 lacks any specific criteria for what should be in the plan, I see no basis for an auditor to assess any violation of R1.1, unless the entity submits a plan that doesn’t make a serious attempt to identify and mitigate supply chain security risks at all. More on auditing in Part 3 of this series, coming soon to a computer or smartphone near you!

B. The second step in developing your supply chain cyber security risk management plan, in compliance with CIP-013 R1.1, is to decide the degree of risk posed by each threat (and this is why I said earlier that I prefer to also use the word threats, even though CIP-013 just talks about risks. It’s very awkward to talk about assigning a degree of risk to a risk. A risk is inherently a numerical concept; a threat is just a statement like “A software vendor’s development environment will be compromised and malware will be embedded in the software”. You can legitimately ask “What is the risk posed by this threat, and how does it compare to - say - the risk posed by the threat that malware will be inserted into a software patch in transit between the vendor and our organization, and we will install the patch?” It’s much more difficult (although not impossible, I admit) to ask “What is the degree of risk posed by this risk, and which of these two risks is riskier?” It begins to sound like the old Abbott and Costello routine, “Who’s on First?”).

How do you quantify the degree of risk posed by a particular threat? You need to consider a) the potential impact on the BES[vi] (remember, all risks in CIP-013, like all NERC standards, are risks to the BES, not to your organization) if the threat is realized in your environment, as well as b) the likelihood that will happen. You need to combine these two measures in some way, to come up with a risk score. Assuming that you’re assigning a high/medium/low value to both impact and likelihood (rather than trying to pretend you know enough to say the likelihood is 38% vs. 30%, or the potential impact on the BES is 500MW vs. 250MW, which you don’t), I recommend adding them. So if you assign values of 1, 2 and 3 to low, medium and high, and the likelihood is low but impact is high (or vice versa), this means the risk score for this threat is 4 out of a possible 6 (with 2 being the lowest possible score).
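In code, the scoring scheme just described is trivial. Here is a minimal sketch of my own (the names and the example threat values are illustrative, not anything from the standard):

```python
# Minimal sketch of the scoring scheme described above: rate likelihood and
# impact as low/medium/high (1/2/3) and add them, giving a risk score that
# ranges from 2 (low/low) to 6 (high/high).
RATING = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    return RATING[likelihood] + RATING[impact]

# Low likelihood but high impact (or vice versa) yields 4 out of a possible 6:
score = risk_score("low", "high")   # -> 4
```
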

C. The third step is to rank all of the threats by their risk scores. Once you have your ranked threat list, you instantly know which are the most serious supply chain cyber security threats you face: they’re the ones at the top of the list.

D. The fourth step is to develop a risk mitigation plan for each of the top threats. As mentioned earlier, you will never be able to completely mitigate any cyber threat. The most you should aim for is to bring the level of risk for each threat on your “most serious” list down to a common lower level (say, you’ll aim to bring all threats with a risk score of 5 or 6 down to a risk score of 3 or 4), at which point some other unmitigated threats will then pose higher levels of risk; if you still have resources available, you should consider mitigating those “second-tier” threats as well. But whatever your available budget, you should invest it in mitigating the highest risks – that way, you’re getting the most bang for each hard-earned buck.
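To make steps B through D concrete, here is a minimal sketch of the scoring-and-ranking arithmetic in Python. The threat names, ratings, and the score-5 cutoff are hypothetical placeholders for illustration only; they aren’t drawn from CIP-013 or any NERC guidance, and your own list will of course look different:

```python
# Map the high/medium/low ratings to the 1/2/3 values discussed above.
LEVELS = {"low": 1, "medium": 2, "high": 3}

# Step A output: a (hypothetical) threat list, each with an
# impact-on-the-BES rating and a likelihood rating.
threats = [
    ("Vendor development environment compromised", "high", "low"),
    ("Malware inserted into a patch in transit", "medium", "medium"),
    ("Counterfeit hardware component installed", "high", "medium"),
    ("Vendor remote-access credentials stolen", "medium", "high"),
]

def risk_score(impact, likelihood):
    """Step B: combine impact and likelihood by addition (range 2..6)."""
    return LEVELS[impact] + LEVELS[likelihood]

# Step C: rank the threats from highest to lowest risk score.
ranked = sorted(
    ((risk_score(imp, lik), name) for name, imp, lik in threats),
    reverse=True,
)

# Step D: the "most serious" tier gets mitigation plans first --
# here, any threat scoring 5 or 6 (an assumed cutoff).
top_tier = [(score, name) for score, name in ranked if score >= 5]

for score, name in ranked:
    print(f"{score}  {name}")
```

Nothing here is sophisticated, and that’s the point: a spreadsheet would do just as well. What matters is that every threat gets a score derived the same way, so the ranking that drives your mitigation spending is defensible to an auditor.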


While you’re anxiously awaiting Part 2 of this post, you might re-read (or read for the first time) this post describing my free CIP-013 workshop offer, which is still on the table. If you would like to discuss this, I'd love to hear from you!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.


[i] You could certainly argue that CIP-014 is a plan-based standard, although I think it falls short in a few ways. So let’s leave it out for now, and say we’re just talking about cyber security standards.

[ii] On the other hand, I don’t call CIP-008 and CIP-009 plan-based, even though they both explicitly call for plans. They are definitely objectives-based, with the objective being an incident response plan or backup/recovery plan(s) respectively. But in my view of a plan-based requirement, the objective is always managing a certain area of risk. In CIP-010-2 R4, it’s risk from use of Transient Cyber Assets and Removable Media. In CIP-003-7 R2, it’s risk of cyber or physical compromise of BCS at Low impact assets. In CIP-011 R1 it’s risk of compromise of BES Cyber System Information. And in CIP-007 R3 it’s risk of infection by malware. But CIP-008 and CIP-009 aren’t directly risk-based (not that they’re bad standards, of course). They both call for development and testing of a plan, but risk only enters tangentially into the plan, if at all – even though, say, a CSIRP for an entity in a very risky environment should probably be more rigorous than one for an entity in a “normal” environment.

[iii] I readily admit that what I write in the rest of this post isn’t all just an expansion of what Lew says in these two short paragraphs! However, each of the compliance steps that I discuss below is implicit in those two paragraphs. If you don’t believe me, I can prove it to you using advanced mathematics.

[iv] I’ll admit this is just my speculation. It’s not so much that the SDT wanted to draw up the list and didn’t have time to do it, as that they never had the leisure to even consider more philosophical questions like this; they were racing against the clock during their whole existence.

[v] RSAW stands for Reliability Standard Audit Worksheet. It’s the guide that NERC auditors use for audits. And for that reason, it’s also the guide that NERC entities use as they set up their compliance program for a particular standard.

[vi] Lew made the following comment at this point in the post: “This is a concept that needs more attention. I think entities should give consideration to those threats that could result in damage to difficult-to-repair equipment, such as large transformers, generators, turbines, boiler feed pumps, etc. If you can take over the control system for such equipment and run it to destruction, that is a risk with higher consequence that merely opening a breaker and causing a short outage. And I think this is the type of compromise that a nation-state would be interested in.” A word to the wise!
