Wednesday, June 26, 2019

Kevin Perry goes further



In yesterday’s post, I discussed comments that Brandon Workentin of Forescout Technologies had made about an earlier post of mine, which drew a distinction between vendors and suppliers – a distinction that had been suggested to me by John Work of the Western Area Power Administration. Do you get the feeling that in this blog I’m gathering fame and vast fortune just by exploiting the ideas of others?

If you’re not sure about the answer to that question, let me point out that this post simply repeats verbatim an email that Kevin Perry, former chief auditor of the SPP Regional Entity (now “retired”), sent me today. Brandon had pointed in a different direction than I had in the first post, and now Kevin goes further in that direction. I won’t spell the direction out for you, since, as you know, I’m incapable of an original thought.

Kevin said “Actually, I think the vendor (integrator) poses substantially greater risk than the manufacturer in many instances.  If the vendor’s support team has remote electronic access to the entity’s systems, you have opened a path that you cannot completely control and protect from.  Bear in mind that recent attacks have exploited a vendor and its connection to the real target.

“Yes, the manufacturer can make a product that has security risks (poor coding, poor testing, huge complexity, poor hardware quality controls, etc.), but you can more readily mitigate that risk than you can when you rely on a trusted communication path with a third party.”

Good points, Kevin and Brandon! Keep them coming… (And while I’m at it, I’d like to apologize to the four or five people who have written in with good ideas related to posts I’ve put up in recent months. I’d promised to write a post on each of those ideas, but because of the press of work – and the fact that the Russians and Lew Folkerth have been continually demanding my attention – I haven’t done that.)

Tom

PS: (a few hours later) It occurred to me that by juxtaposing Lew Folkerth with the Russians in a single sentence, I might have inadvertently lent credence to the wild rumors about Lew colluding with the Russians in their various nefarious activities in the US. Let me be perfectly clear: I have investigated these rumors, and I have found no indictable evidence that Lew Folkerth colluded with the Russians in any way!


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Tuesday, June 25, 2019

More thoughts on suppliers vs. vendors



Last week, I put up this post discussing the difference between suppliers and vendors, and how distinguishing between the two can be helpful for CIP-013 compliance. Of course, in many cases the same entity both manufactures the hardware or develops the software and sells it to you; in those cases, it doesn’t matter whether you call them a supplier or a vendor.

But if the manufacturer or developer doesn’t sell directly and instead goes through some sort of dealer channel, then the vendor is the dealer and the supplier is the manufacturer/developer. Big organizations like Microsoft, Cisco and SEL do this. The upshot is that you won’t have a contract – a good way to mitigate supply chain risk, although not the be-all and end-all that some describe it to be – with the supplier, just with the vendor. Yet the big risk is usually with the supplier, not the vendor.

In an endnote, I did say “Even a pure dealer will be the subject of some supply chain risk – for example, they may not take proper measures to secure the product before shipment, and it could be tampered with en route; that risk needs to be mitigated, using contract language or another means. And if the dealer also installs the product, there’s a lot of risk to mitigate…”

The day after this post, my longtime friend Brandon Workentin of Forescout Technologies wrote in to point out that in some cases the vendor can be a substantial source of risk. He noted that a systems integrator, who can be responsible for installing and supporting the product (and both installation and ongoing support – especially patching – are specifically in scope for CIP-013), can introduce a substantial amount of risk[i].

So it might have been better for me to distinguish between product risk – which will be almost entirely the domain of the supplier – and installation/support risk – which would be the domain of the systems integrator. I can certainly see that in some cases, the total risk introduced by the integrator/dealer might be almost as great as that introduced by the supplier.

There are lots of subtle points like this hidden in CIP-013 – or more generally, in supply chain cyber security risk management planning. Understanding them can mean the difference between developing a plan that efficiently mitigates supply chain security risk and one that puts a big burden on your organization yet yields much less risk mitigation.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

[i] And by “introduce risk”, I don’t mean the integrator is a mope who knows nothing about security and is bound to leave the system in a very insecure state. I simply mean that the impact on the BES of their doing something wrong could be high (think of what might happen if your EMS or SCADA system had been installed with inadequate security controls). And since risk is a combination of likelihood and impact, even if the integrator has greatly reduced the likelihood of a problem by training their people very well and implementing very safe procedures (meaning the likelihood component of the risk is low), the risk will still be medium or high, because the impact would be high.
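To make that likelihood/impact arithmetic concrete, here’s a minimal sketch. The three-level scales, scores and thresholds are invented for illustration only – they aren’t taken from CIP-013 or any other standard:

```python
# A minimal sketch of the likelihood-times-impact arithmetic in the note
# above. The scales and thresholds here are invented for illustration.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact into a coarse risk rating."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# A well-trained integrator (low likelihood) installing an EMS whose
# compromise would be severe (high impact) still rates medium risk:
print(risk_score("low", "high"))     # -> medium
print(risk_score("medium", "high"))  # -> high
```

The point of the sketch is exactly the one in the note: a high impact keeps the combined rating at medium or above, no matter how far the integrator drives down the likelihood.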

Thursday, June 20, 2019

Lew nails another one!



I was quite excited when I got RF’s latest newsletter yesterday, since I knew Lew Folkerth’s (alleged) last article on CIP-013 would be in it. I read it thoroughly this morning, and I have to say it’s the best of his articles on CIP-013 so far. What’s also good news is that, in writing the article, he realized he would have to split it into two (this has happened to me a number of times as well).

In this article, Lew planned to write about CIP-005-6 R2.4 and R2.5 and CIP-010-3 R1.6 – the three new requirement parts that become enforceable on July 1, 2020, the same day CIP-013-1 becomes enforceable. But he ended up devoting the entire article to just the CIP-005 parts; the CIP-010 part will be the subject of the next one. He implies this will be the last of his CIP-013/supply chain risk management articles, but I certainly hope it won’t be – there’s a lot more to be said on the subject.

You can find the article by going to RF’s home page, finding the May-June 2019 newsletter, and clicking on “The Lighthouse” in the table of contents. However, Lew gave me the article by itself in PDF format, and I’ll be glad to send it to you if you drop me an email at the address below. In fact, for a limited time, Lew has made available a single PDF containing all four of his CIP-013 articles so far, and he is offering it for the same price as this article by itself – $0.00 (US)! You should act now to take advantage of this incredible offer! No salesman will call…

As I started reading the article, I was struck by one point after another that I hadn’t realized, and I started writing them all down. After writing down four or five, I realized I might as well just let you read the article to get the rest, but here are the most important takeaways:

  1. I admit that I had a fairly low opinion of the new requirement parts that the CIP-013 drafting team added to CIP-005-6 R2 and CIP-010-3 R1. This was because I thought of these as new prescriptive requirements that would conflict with the risk-based, objectives-based nature of CIP-013. I explained that in this post (and the earlier one linked in it). But reading Lew’s article made it very clear to me that CIP-005-6 R2.4 and R2.5 are objectives-based, since they give you an objective to achieve, and don’t specify how you need to achieve it.
  2. However, are these two requirement parts also risk-based? My answer to that is yes, even though the word “risk” isn’t to be found anywhere in them. In determining how you will comply with the two requirement parts (that is, what technologies or procedures you should deploy), you certainly should take risk into account. You should never put a $50 lock on a $10 bike, and you shouldn’t, for example, implement and tune a layer three firewall just for compliance with R2.5, unless the likelihood and/or impact of a compromised remote access session being allowed to continue are significant.
  3. A good indication that these parts are risk-based is found on page 15, where Lew discusses how you can categorize, classify and prioritize different types of communications traffic – and, of course, apply different controls to different traffic depending on its category, classification and priority. You simply can’t do this in a prescriptive requirement like CIP-007 R2 or CIP-010 R1, where you have to do the same things to every component of a Medium or High BES Cyber System, or risk getting your head cut off.
  4. Lew’s item 2 on page 15 points out that these requirement parts include undefined terms, which I take to include “determine”, “system-to-system”, “disable”, and – the most glaring omission of all in CIP-013 – “vendor”. These are hardly the first undefined terms in the CIP standards: CIP v5 included many, such as “generation”, “station” and “substation” in CIP-002 R1 Attachment 1, “programmable” (in the Cyber Asset definition), “affect the reliable operation of the BES” and “adverse impact” (in the BES Cyber Asset definition), “associated with” (in CIP-002 R1 Attachment 1), “security patch” (in CIP-007 R2), and “custom software” (in CIP-010 R1). I certainly understand there are very good reasons why all of these terms were left undefined, but on the other hand, some people at NERC and elsewhere talk about the CIP standards as if they were clear as day – so that if entities are confused and get violations, it must be because they just aren’t trying hard enough. This is far from being true.
  5. Lew’s point about these undefined terms in CIP-005-6 R2.4 and R2.5 is exactly the point he made in 2014, as it became clear that there would never be definitive answers to most of the ambiguities, implicit requirements and undefined terms in CIP v5: it’s up to the entity to a) look at all the available guidelines, guidance, etc., and then b) determine and document for itself how it interprets these questions and definitions. (In his article, Lew suggests the entity should “use commonly accepted definitions”. It would be nice if there were commonly accepted definitions for these terms, but in most cases there simply aren’t any.)
  6. In item 3 on page 14, Lew notes that all data communications, including dial-up and serial, need to be included in the scope of CIP-005-6 R2.4 and R2.5 – not just routable communications. To some people this might seem like a huge increase in scope in itself, but keep in mind that the fact that these are risk-based requirements is your friend. Given that there has never been a publicly reported successful cyberattack over serial communications, the likelihood of such attacks is extremely low, meaning the risk is extremely low – so you just don’t have to make the same effort to mitigate risks in serial communications as you do for routable communications. (I’ll admit that the risks posed by dial-up communications are much higher, which is why so many entities have already mitigated that risk by not allowing dial-up at all.)
  7. In item 4 on page 14, Lew says “In my opinion, identification of malicious remote access sessions and disabling of such access should be achieved in seconds or minutes, not hours or days.” This might seem like a requirement to write blank checks to the current preferred suppliers of whiz-bang devices that purport to do this, but again you need to keep in mind that this all depends on the risk posed by the communications. For example, if we’re talking about a high-voltage substation that feeds all of downtown Manhattan, this might be the right approach. But if we’re talking about a relatively low-voltage substation serving mostly rural areas outside of Manhattan, Kansas, it might be overkill.
  8. Lew points out in item 5 on page 14 that, since the word “vendor” is undefined in R2.5 (which says “Have one or more method(s) to disable active vendor remote access…”), you are better off not trying to split hairs over the definition, but simply applying the same controls to all communications into and out of an ESP at a Medium or High impact asset.
  9. Lew’s item 6 on page 14 contains some very good observations on incident response plans, for the case where you do detect some suspected malicious communications into an in-scope ESP.
  10. My final observation (besides saying that you need to read the whole article, carefully!) concerns the paragraph titled “Response Planning” at the bottom of page 17. There, Lew makes the excellent point that your response to detected improper access to Medium or High impact BCS should involve “manually-initiated automated processes”. I’ll let you read the details, but this is really good advice, and might be a good principle for securing ICS in general (a rough sketch of what such a process might look like follows this list).
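Since Lew doesn’t prescribe a technique, here’s a minimal sketch of what a manually-initiated automated process for Part 2.5 might look like: a human makes the judgment call and runs one command, and pre-built scripted steps do the rest. Everything in it – the host names, the kill-vendor-sessions helper script, and the fw-admin CLI – is a hypothetical stand-in for whatever your environment actually uses:

```python
#!/usr/bin/env python3
"""A rough, illustrative sketch of a 'manually-initiated automated process'
for disabling active vendor remote access (the objective of CIP-005-6 R2.5).
All host names, the kill-vendor-sessions helper, and the fw-admin CLI are
hypothetical; a real version would target your own jump-host and firewall
inventory."""

import subprocess
import sys
from datetime import datetime, timezone

# Hypothetical inventory: where vendor remote access terminates for each ESP.
VENDOR_ACCESS_POINTS = {
    "esp-substation-12": {
        "jump_hosts": ["jump01.example.internal"],
        "fw_rule_group": "vendor-remote-access-sub12",
    },
}

def disable_vendor_access(esp: str) -> None:
    """Terminate active vendor sessions and block new ones for one ESP."""
    ap = VENDOR_ACCESS_POINTS[esp]
    for host in ap["jump_hosts"]:
        # Hypothetical helper on the jump host that kills all interactive
        # vendor sessions (e.g., by terminating their SSH/RDP processes).
        subprocess.run(
            ["ssh", host, "/usr/local/sbin/kill-vendor-sessions"], check=True
        )
    # Hypothetical firewall CLI that flips a pre-built rule group to 'deny' --
    # one command, prepared in advance, no hand editing during the incident.
    subprocess.run(
        ["fw-admin", "disable-group", ap["fw_rule_group"]], check=True
    )
    print(f"{datetime.now(timezone.utc).isoformat()} vendor access disabled for {esp}")

if __name__ == "__main__":
    # The 'manual initiation': an operator consciously runs this script,
    # naming the affected ESP; everything after that decision is automated.
    disable_vendor_access(sys.argv[1])
```

The design point is Lew’s: the human makes the decision, but the disabling itself is fast and pre-tested, so it happens in seconds or minutes rather than hours or days.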
  
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Sunday, June 16, 2019

Stumbling toward Armageddon



“Those whom the gods wish to destroy, they first make mad.”
- ancient Greek saying, wrongly attributed to Aeschylus

The New York Times ran a story on Saturday that starts with the sentence “The United States is stepping up digital incursions into Russia’s electric power grid in a warning to President Vladimir V. Putin and a demonstration of how the Trump administration is using new authorities to deploy cybertools more aggressively, current and former government officials said.”

The story goes on to discuss how government cyber warriors have implanted malware in the Russian grid – and for that malware to be able to do any damage, it has to be on control networks. Of course, there have been multiple reports that the Russians have planted malware in US control networks, including:

  1. The 2019 Worldwide Threat Assessment, prepared by the Director of National Intelligence, the FBI and the CIA. While the WTA doesn’t directly say the Russians are in electric utility control networks, it does say they’re in a position to cause multiple outages at will, which means the same thing;
  2. Vikram Thakur of Symantec, quoted in a January article in the Wall Street Journal, where he said that at least eight utilities had been penetrated at the control network level; and
  3. Chris Inglis, formerly deputy director of the NSA, who said recently that over 200,000 “implants” (the same word used in the NYT article, meaning malware) had been planted in water, natural gas and electric power infrastructure (presumably at least some of those in control networks of electric utilities).

To be honest, I would have been surprised to hear that the US wasn’t doing this to the Russians, and I’m glad we are. But the question is: what is the purpose of doing this?

Of course, there’s a very obvious reason for planting malware in the Russian grid (discussed at length in the article): Since the Russians have malware in our grid and could cause outages whenever they want (if the CIA and FBI are to be believed), the knowledge that we’ve done the same thing to them will make them much more hesitant to pull the cyber trigger on us. So we’re protecting ourselves, just as our huge nuclear missile and bomber fleets have so far protected us from nuclear attack from Russia (and previously the Soviet Union), under the Mutually Assured Destruction principle, also known as MAD.

But there’s a big difference between nukes and cyberattacks. A nuclear attack anywhere in the US, even in the middle of some desert, is totally unthinkable. There’s literally no risk we will accept that would open up even the smallest possibility of this happening. This means we will absolutely never launch a first strike against Russia: even though we could wipe out the whole country, their nuclear submarines would still survive to destroy most of the US. And the Russians will never launch a first strike against the US, for the same reason (of course, a cornered dictator who doesn’t care about his countrymen might; Kim Jong-un comes to mind here).

But a cyberattack isn’t at all unthinkable. A lot of them have been launched by state actors (including us, of course); the NotPetya attack caused $10 billion in damage worldwide. (Question 1: Who was responsible for that? Answer: Without a doubt, Russia. Question 2: Has Russia been in any way held responsible for it, or is there any likelihood that it will be? Answer: You gotta be kidding! The same goes for the Malaysian airliner downed over Ukraine in 2014, the cyberattacks on Ukraine, etc. Do you notice a pattern here?) Yet we’re still able to go to the grocery store, write blog posts, etc. Of course, some people will die in a cyberattack on either our grid or Russia’s, but tragedies happen every day. Neither the US nor Russia considers a grid cyberattack to be an unthinkable event.

Of course, the Russians certainly wouldn’t launch a cyberattack on the US grid willy-nilly. But they might do it in response to some provocation, such as our killing Russian troops in Syria. US forces, when attacked, killed about 100 Russian mercenaries in Syria last year, but the courageous Mr. Putin pretended they weren’t really Russians and didn’t take any measures against us (way to stand behind your guys, Vlad!). If it had been regular Russian troops or, say, airmen, Putin would have felt compelled to respond in some way. Or perhaps if the US directly aided a new offensive by the Ukrainians to drive the Russians and their allies out of the Donbass region of Ukraine – again, Putin would probably feel compelled to respond in some serious way, like causing blackouts in the US.

So let’s say the Russians black out a few major cities, although probably just for a few hours. What will we do then? The article makes it pretty obvious that we’ll probably launch a similar attack against the Russians. And given that their grid is less redundant and resilient than ours (and that we’ll naturally want to cause more damage than they caused us), our attack will probably be more destructive and kill more people. So what’s Russia going to do then? I’d say there’s a pretty good chance they’ll strike back. They might launch a broader cyberattack, perhaps hitting water systems and/or natural gas pipelines (although I find it hard to believe that a cyberattack alone could cause a serious natural gas disruption; water supplies are a bigger concern). And since real Russian civilians would presumably have been killed by our retaliatory cyber strike, they might even launch some sort of very limited military attack, which would kill still more US civilians and military personnel.

I think you see where this is going: Once the conflict moves into the military phase, it becomes very possible that a “limited” nuclear strike will be launched, perhaps on a US military base overseas, so it doesn’t kill a lot of US civilians. But then we launch a bigger nuke strike, and sooner or later we have a full nuclear exchange and that’s the end of civilization.

Of course, hopefully cooler heads will prevail and someone will step in and talk some sense into both participants before the confrontation goes that far. But that’s not enough. Sometimes, during a period of high tension, the word to stand down might not get through to every officer with nuclear weapons under his command. One guy thinks he’s doing the right thing, presses the button, and…

Which brings me to a good true story – the events before and during the Cuban Missile Crisis in 1962. What set the crisis off was probably the US-sponsored invasion of Cuba at the Bay of Pigs in 1961, along with the US installation around that time of nuclear-armed Jupiter missiles in Italy and Turkey, aimed at the Soviet Union. The Soviets decided to retaliate by installing similar missiles in Cuba, where they were detected by a U-2 spy plane. President Kennedy then escalated the conflict by declaring a naval blockade of Cuba. But even though the Soviets moved a number of vessels into the waters around Cuba, there was no conflict. The Soviets backed down and removed the missiles, followed by the US removing the Jupiter missiles from Turkey and Italy. Nobody dead, not a shot fired in anger. Seems like a textbook case of how a well-controlled (on both sides) military confrontation can produce a satisfactory result for both parties, right?

No, that’s not right. The full story came out after the fall of the Soviet Union, showing how close the world came to Armageddon. It seems one of the submarines the Soviet Union deployed to Cuba during the crisis was the target of depth charges foolishly dropped by a US Navy vessel trying to force the sub to surface so it could be identified. The problem was that the sub had lost all communications with the outside world because of its depth, and the two commanding officers (one was actually the political officer who traveled on all Soviet naval vessels) reasonably believed that open war had broken out and the depth charges were meant to destroy them.

This sub had nuclear torpedoes on board (only the US had submarine-launched nuclear ballistic missiles at the time), and the two commanding officers decided to use one of them to sink the US ship dropping the depth charges. The Soviet navy’s protocol for using these weapons required the consent of both commanding officers, which would normally have meant the sub would have fired the torpedo. However, the commander of the flotilla happened to be on this particular submarine (this wasn’t normal), and because of this, his approval was also required. This man, Vasily Arkhipov (later a vice admiral), refused to approve the launch; the sub surfaced and was recalled to the Soviet Union.

As it turned out, it was very fortunate for the human race that Arkhipov was on that sub at the right time. There was a cabal of hotheaded generals at the Pentagon (including Gen. Curtis LeMay, who would later run for Vice President with George Wallace, and who was reported to have advocated nuking Vietnam). For them, a nuclear strike on a US ship would have been manna from heaven, because it was an excuse to do what they had been advocating anyway: launch a first strike on the USSR before the Soviets were able to deploy the overwhelming number of ICBMs that the US already had in place (although the Soviets still had lots of nuclear bombers, and the US would probably never have been able to stop them all). They would have blamed the Soviets for the first nuclear strike and used that as the excuse to launch their attack – which of course would have been followed by retaliation from the Soviet bombers that were in the air at all times during the crisis, as well as from any land-based missiles that weren’t destroyed by the US strike. So even though the US might technically have survived, even a few thermonuclear strikes on key cities would have made it a hollow “victory” indeed (remember, these bombs were vastly more powerful than the ones that destroyed Hiroshima and Nagasaki). And of course the fallout would have killed many more people, in the US and the Soviet Union as well as in adjacent countries.

The moral of the story? I suppose it’s good clean fun to deploy a bunch of malware on the Russian grid, to match (and maybe more than match) the malware the Russians have planted on ours. But actually retaliating against a grid attack with a grid attack of our own could very well lead into the military realm, which could then easily lead into the nuclear realm. And even though the controls on nuclear weapons are supposed to prevent their use until the president has decided to use them, there can never be 100% certainty that those controls will hold.

If the Russians actually do bring down part of our grid, instead of retaliating in kind we should turn to tools like sanctions, which seem to have caused a lot of real pain for Mr. P and his cronies. The only problem with the sanctions on Russia so far is that they haven’t been deployed at anywhere near the level they should be. For example, once it became clear that the Russians were responsible for shooting down Malaysia Airlines flight 17 (and a Russian parliament member admitted as much about two weeks after the incident), I think Russian planes should have been banned from all international airspace until the Russians had admitted their involvement and paid full reparations to the families of the nearly 300 victims, as well as to the Netherlands and the other countries that lost their nationals or were otherwise affected. Of course, five years later the Russians have paid exactly $0.00, and I know of no current effort to change that situation.

If, instead of sanctions, we retaliate against a Russian cyberattack on the US grid with a similar or greater attack on the Russian grid, we can be sure the Russians will retaliate for that, then we’ll retaliate for that strike, etc. This will likely escalate to military retaliation, and then – even though the US and Russian leaders will hopefully behave responsibly – we’ll just have to pray that no US or Russian general or admiral anywhere in the world, on land, sea or air, becomes confused in the heat of the crisis and does something they shouldn’t. But what could possibly go wrong?


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Thursday, June 13, 2019

Vendors vs. Suppliers



In the NERC CIPC Supply Chain Working Group, we’ve had an ongoing discussion about vendors vs. suppliers. It started when someone from a prominent power industry supplier pointed out that his CEO hated it when his company was called a vendor. “Vendors sell hot dogs” was his CEO’s memorable statement. The company now officially refers to itself as a supplier. This person requested that the SCWG stop using the term vendor and refer to any provider of BES Cyber Systems or services as a supplier.

However, he didn’t get very far in his argument. The first time we had the discussion, I pointed out that, while “vendor” isn’t a defined term in the NERC Glossary, it is used everywhere in CIP-013, and definitely in the majority of articles and papers written about CIP-013, to designate entities that provide BCS components to NERC entities. It would be very hard for the SCWG to single-handedly try to change this.[i]

At the SCWG’s meeting last week, held the day before the CIPC meeting started in Orlando, the subject came up again. Once again, there was very little support for trying to change the usage (and in response to the same statement that vendors sell hot dogs, I pointed out – a little meanly, I now admit – that I thought of crack dealers as suppliers, not vendors).

However, during that discussion, my good friend John Work of the Western Area Power Administration emailed me (he was listening to the webcast of the discussion, being unable to get to Orlando for it) to say that there was a good case for using both terms: a supplier is the manufacturer of the hardware or the developer of the software, while the vendor is the organization that sells it.

I admit that at first I didn’t see that this distinction was going to make a difference, but this week, in the course of working with a client on their CIP-013 methodology, I began to realize it can be very helpful to use both terms, in the way John suggested.

The reason for this becomes evident when you think about having a contract with a vendor (in the original all-inclusive sense), and inserting cybersecurity provisions into it. If the organization both develops and sells the product, in my opinion it doesn’t matter what you call them – your contract is with the organization.

But what about the case where one organization manufactures or develops the product and another sells it? Big companies like Cisco, Microsoft and SEL develop or manufacture their products, but they generally don’t sell them directly; a dealer does that. So while your organization certainly has a contract with the dealer, is that really useful as a tool for implementing supply chain cyber risk mitigations? Of course not; you would really need a contract with Cisco, Microsoft or SEL[ii]. But good luck getting one!

In such cases, using contract language to mitigate risks by requiring certain vendor security practices is simply impossible. You will need to admit that, while all of these companies have very good security practices now (and in fact Edna Conway of Cisco almost single-handedly invented the field of supply chain cyber security risk management), and they will all certainly try to address the concerns of the power industry with specific position papers describing their controls, in the end there is no utility that’s big enough to force one of these companies to do something it really doesn’t want to do. Ultimately, if a utility thinks that, for example, Cisco’s cyber risk controls are woefully insufficient for mitigating the risks that Cisco faces, it will need to either find another vendor or simply accept these risks (which BTW is allowed in CIP-013, while it’s still strictly verboten in the other CIP standards). There’s no other choice.

And now you may see what I just saw in the last day or two: It can really help if you distinguish suppliers from vendors. If your contract is with the vendor (e.g. the Cisco dealer), it simply isn’t going to be a vehicle for mitigating much supply chain cyber risk, although it’s certainly necessary for other reasons. It’s only if your contract is with the supplier (or if the supplier is also the vendor) that you will be able to use it to mitigate a lot of supply chain cyber risk.

Of course, whether you distinguish vendors from suppliers, call them all vendors (as CIP-013 does now), or call them all suppliers (as the CEO of the industry supplier wanted – and BTW, that company sells direct to utilities, so it is both a supplier and a vendor in John’s nomenclature), there is no compliance implication. You can call them anything you want in your compliance documentation, since “vendor” isn’t a NERC Glossary term. But I think it’s very helpful to distinguish vendors from suppliers as you develop your supply chain cyber risk management plan, because you will probably need to take different actions in the two cases.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.


[i] The CIP-013 drafting team originally drafted a definition, but withdrew it after it – I assume – received a lot of negative comments on the first ballot. After that, the term was defined in one of the notorious blue boxes in the later CIP-013 drafts. A lot of people had the impression that these boxes were actually part of the standard (which may have influenced how they voted), since they appeared in the middle of the requirements. However, NERC – which was at the time going through one of its periodic identity crises, this time regarding what kinds of “guidance” it could provide – ended up declaring that the items in the blue boxes weren’t officially part of the standard. They were moved to another area (not officially part of the standard) when CIP-013 was approved by NERC and submitted to FERC for approval.

[ii] Even a pure dealer will be the subject of some supply chain risk – for example, they may not take proper measures to secure the product before shipment, and it could be tampered with en route; that risk needs to be mitigated, using contract language or another means. And if the dealer also installs the product, there’s a lot of risk to mitigate (which of course is also covered by CIP-013, since risks of installation need to be mitigated just as much as pure procurement risks).


Friday, June 7, 2019

Good ideas on CIP-013 compliance from the SCWG head



On Tuesday morning in Orlando, the Supply Chain Working Group of the NERC Critical Infrastructure Protection Committee (CIPC) presented a “training” on supply chain cyber security risk management and CIP-013 compliance for people attending the CIPC meeting. The session included presentations of five draft white papers (all short!) on different aspects of supply chain security risk management.

Each paper was developed by a subgroup of the SCWG, and each was presented on Tuesday by the leader of its subgroup. The papers were all the product of a lot of work by SCWG members, and they were all very good – including, as I say in all modesty (or at least as much modesty as I’m capable of; modesty has never been my strong suit), the two papers whose development I led: Supply Chain Cyber Security Risk Management Lifecycle and Vendor Risk Management Lifecycle. The topics of the other three papers are Provenance, Open Source Software, and Secure Equipment Delivery.

The papers will be reviewed and (hopefully) approved by the CIPC in the near future, at which point they’ll be posted on NERC’s SCRM/CIP-013 portal, which I recommend you bookmark (it happens to be one very useful and very easy-to-access feature of the NERC website. In general, I’ve always thought of the NERC website as a great place to hide critical infrastructure information – al Qaeda and ISIS will never find it there! – but maybe I’ll need to revise that opinion).

The SCWG is working on three other white papers, on Cloud Service Provider risks, Threat-Informed Procurement Language, and Vendor Identified Incident Response Measures. These aren’t finished yet, but they will also be posted on the portal. And it’s possible the SCWG will develop still more, since in the process of writing these we identified lots of other topics that should be addressed (for example, in developing the Vendor Risk Management Lifecycle paper, my group identified at least five to ten other white papers that would be helpful to NERC entities as they develop their CIP-013 plans). The papers were limited in length (three pages was the target, although both of mine spilled onto a fourth page), so each needed to focus on one idea, and one idea only. But it was felt (and I agree) that if the papers went beyond what could be digested in one sitting, people wouldn’t read them. Better to have lots of short papers that people read than a few long ones that they don’t.

However, the SCWG will post some items to the NERC portal before the papers are approved (hopefully within a couple of weeks), including the slide decks used by the five presenters on Tuesday; a recording of the entire set of presentations (along with the slides); and the slides used by Tony Eddleman, the head of the SCWG (I think his title is Chairman. I heard that he would have preferred Grand Exalted Leader, but I believe NERC shot that down 🙂), as he discussed CIP-013 compliance.

I didn’t take notes on Tony’s entire presentation, so I don’t want to try to summarize it, except to say that this was by far the best presentation on CIP-013 that I’ve seen yet from anybody associated with the NERC ERO (and Tony of course isn’t a NERC staff member. His day job is NERC Compliance Manager for the Nebraska Public Power District). Since I didn’t take enough notes to produce a summary of what he said, I’ll just list the points that struck me as very interesting (and I’ll admit that I filled out a few of Tony’s ideas with related ideas of my own. If you want to hear exactly what Tony said, listen to the recording!).

  • He emphasized that CIP-013 R1.1 is a real requirement. Many people have focused only on the six items in R1.2 and consider those the totality of what CIP-013 “requires” an entity to do. But as I’ve said many times before, while those six items all constitute very important controls that address serious supply chain security risks to the BES, they are only listed because FERC indicated, at different places in Order 829 in 2016, that it wanted them included in the new standard. So the drafting team dutifully included them.
  • R1.1 requires the NERC entity to develop a supply chain cyber security risk management plan. To develop that plan, the entity needs to identify the supply chain security risks it considers most important, as well as mitigations for those risks (although the drafting team left the word “mitigate” out of R1.1, there’s no doubt that entities need to mitigate the risks, not just identify them and call it a day). A minimal sketch of what such a risk-and-mitigation register might look like appears after this list.
  • Because each entity will identify different risks as important to it, the plans will definitely differ from entity to entity (although they will undoubtedly have many common elements).
  • If your entity is compliant with another standard that addresses supply chain security (e.g. NIST 800-53), you still need to develop an R1.1 plan. Similarly, if a vendor tells you they have a certification like ISO 27001, you still need to make sure the certification addresses the particular risks that you think are important. Different certifications address different risks, and of course there is currently no certification that specifically addresses supply chain cybersecurity risks to the BES (although Howard Gugel of NERC said at the CIPC meeting that afternoon that NERC will work with EPRI to develop one).
  • NERC entities don’t need to have an RFP on the table in order to ask security questions of vendors. They should ask when needed, although of course they shouldn’t ask so often that they impose a significant additional cost on the vendor – one the vendor presumably never anticipated when it quoted its prices, in an RFP response or otherwise. There is more discussion of this topic in the Vendor Risk Management Lifecycle paper that I presented. Anyone wishing to see the draft I discussed on Tuesday should email me at the address below.
  • While contract language is usually a good way to mitigate risks posed by vendors, it isn’t an option for purchases made by credit card, purchases through dealers (as is often the case with large suppliers like Cisco and Microsoft), or purchases made using a standard PO. Tony’s point was that the vendor risks the entity might otherwise mitigate through contract language need to be mitigated in any case. The Vendor Risk Management Lifecycle white paper discusses other ways, besides contract language, to document that a vendor has committed to do something. It also briefly mentions steps the NERC entity can take on its own if the vendor simply refuses to cooperate but is too important to terminate. The risks identified in the entity’s supply chain cyber security risk management plan need to be mitigated whether or not the vendor cooperates – although they can usually be mitigated more effectively if it does.
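To tie these points together, here’s a minimal sketch of the kind of risk-and-mitigation register an R1.1 plan implies. The structure, field names and example entries are my own invention for illustration – nothing here is prescribed by CIP-013:

```python
# A minimal sketch (invented structure) of an R1.1-style risk register:
# each risk the entity deems important, paired with a mitigation and a
# status -- including "accepted", which CIP-013 permits.
from dataclasses import dataclass

@dataclass
class SupplyChainRisk:
    risk: str        # the risk, stated in the entity's own words
    source: str      # "supplier", "vendor/integrator", etc.
    mitigation: str  # contract language, questionnaire, internal control...
    status: str      # "open", "mitigated", or "accepted"

plan = [
    SupplyChainRisk(
        risk="Integrator installs EMS with insecure default settings",
        source="vendor/integrator",
        mitigation="Post-installation configuration review by entity staff",
        status="mitigated",
    ),
    SupplyChainRisk(
        risk="Supplier refuses contract security language",
        source="supplier",
        mitigation="Documented vendor commitment letter plus internal controls",
        status="open",
    ),
]

for r in plan:
    print(f"[{r.status}] ({r.source}) {r.risk} -> {r.mitigation}")
```

Note the "source" field: as discussed in the posts above, a supplier risk and an integrator risk will usually call for different mitigations, even when the contract is with neither party.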

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.