Thursday, June 20, 2019

Lew nails another one!

I was quite excited when I got RF’s latest newsletter yesterday, since I knew that Lew Folkerth was going to have his (alleged) last article on CIP-013 in it. I read it thoroughly this morning, and I have to say this is the best of his articles on CIP-013 so far. What’s also good news is that, in writing the article, he realized he was going to have to separate it into two (this has happened to me a number of times also).

In this article, Lew planned to write about CIP-005-6 R2.4 and R2.5, and CIP-010-3 R1.6 – the three new requirement parts that will become enforceable on July 1, 2020, the same day CIP-013-1 becomes enforceable. But he ended up devoting the entire article to just the CIP-005 parts; the CIP-010 part will be the subject of the next one. He implies this will be the last of his CIP-013/supply chain risk management articles, but I certainly hope it won’t be – there’s a lot more that can be said about this.

You can find the article by going to RF’s home page and finding the May-June 2019 newsletter, then clicking on “The Lighthouse” in the table of contents. However, Lew gave the article by itself to me in PDF format, and I’ll be glad to send that to you if you want to drop me an email at the address below. In fact, for a limited time, Lew has made available a single PDF including all four of his CIP-013 articles so far, and he is offering that for the same price as this article by itself - $0.00 (US)! You should act now to take advantage of this incredible offer! No salesman will call…

As I started reading the article, I was struck by one point after another that I hadn’t realized, and I started writing them all down. After writing down about 4 or 5, I realized I might as well just let you read the paper to get the rest, but here are the most important takeaways I got out of Lew’s article:

  1. I admit that I had a fairly low opinion of the new requirement parts that the CIP-013 drafting team added to CIP-005-6 R2 and CIP-010-3 R1. This was because I thought of these as new prescriptive requirements that would conflict with the risk-based, objectives-based nature of CIP-013. I explained that in this post (and the earlier one linked in it). But reading Lew’s article made it very clear to me that CIP-005-6 R2.4 and R2.5 are objectives-based, since they give you an objective to achieve, and don’t specify how you need to achieve it.
  2. However, are these two requirement parts also risk-based? My answer to that is yes, even though the word “risk” isn’t to be found anywhere in them. In determining how you will comply with the two requirement parts (that is, what technologies or procedures you should deploy), you certainly should take risk into account. You should never put a $50 lock on a $10 bike, and you shouldn’t, for example, implement and tune a layer three firewall just for compliance with R2.5, unless the likelihood and/or impact of a compromised remote access session being allowed to continue are significant.
  3. A good indication that these parts are risk-based is found on page 15, where Lew discusses how you can categorize, classify and prioritize different types of communications traffic – and, of course, apply different controls to different traffic depending on its category, classification and priority. You simply can’t do this in a prescriptive requirement like CIP-007 R2 or CIP-010 R1, where you have to do the same things to every component of a Medium or High BES Cyber System, or risk getting your head cut off.
  4. Lew’s item 2 on page 15 points out that these requirement parts include undefined terms, which I take to include “determine”, “system-to-system”, “disable”, and – the most glaring omission of all in CIP-013 – “vendor”. These are hardly the first undefined terms in the CIP standards: CIP v5 included many, such as “generation”, “station” and “substation” in CIP-002 R1 Attachment 1, as well as “programmable” (in the Cyber Asset definition), “affect the reliable operation of the BES” and “adverse impact” (both in the BES Cyber Asset definition), “associated with” (in CIP-002 R1 Attachment 1), “security patch” (in CIP-007 R2), and “custom software” (in CIP-010 R1). I certainly understand there are very good reasons why all of these terms were left undefined, but on the other hand some people at NERC and elsewhere talk about the CIP standards as if they were clear as day, so that if entities are confused and get violations, it must be because they just aren’t trying hard enough. This is far from being true.
  5. Lew’s point about these undefined terms in CIP-005-6 R2.4 and R2.5 is exactly the point he made in 2014, as it began to become clear that there would never be clear answers to most of the ambiguities, implicit requirements and undefined terms in CIP v5: it’s up to the entity to a) look at all the available guidelines, guidance, etc., and then b) determine and document for themselves how they are interpreting these questions and definitions (although in his article he suggests that the entity should “use commonly accepted definitions”. It would be nice if there were commonly accepted definitions for these terms, but in most cases there simply are none).
  6. In item 3 on page 14, Lew notes that all data communications, including dial-up and serial, need to be included in the scope of CIP-005-6 R2.4 and R2.5 – not just routable communications. This might seem to some people to constitute a huge increase in scope in itself, but keep in mind that the fact that these are risk-based requirements is your friend. Given that there has never been a publicly reported successful cyberattack over serial communications, the likelihood of such attacks is extremely low, meaning the risk is extremely low – so you just don’t have to make the same effort to remediate risks in serial communications as you do for routable communications. (I’ll admit that the risks posed by dial-up communications are much higher, which is why so many entities have already mitigated that risk by not allowing dial-up at all.)
  7. In item 4 on page 14, Lew says “In my opinion, identification of malicious remote access sessions and disabling of such access should be achieved in seconds or minutes, not hours or days.” This might seem like a requirement to write blank checks to the current preferred suppliers of whiz-bang devices that purport to do this, but again you need to keep in mind that this all depends on the risk posed by the communications. For example, if we’re talking about a high-voltage substation that feeds all of downtown Manhattan, this might be the right approach. But if we’re talking about a relatively low-voltage substation serving mostly rural areas outside of Manhattan, Kansas, it might be overkill.
  8. Lew points out in item 5 on page 14 that, since the word “vendor” is undefined in R2.5, which says “Have one or more method(s) to disable active vendor remote access…”, you are better off not trying to split hairs over the definition, but simply applying the same controls to all communications into and out of an ESP at a Medium or High impact asset.
  9. Lew’s item 6 on page 14 contains some very good observations on incident response plans, for the case where you do detect some suspected malicious communications into an in-scope ESP.
  10. My final observation (beyond saying that you need to read the whole article, and carefully!) is on the paragraph titled “Response Planning” at the bottom of page 17. There, Lew makes the excellent point that your response to detected improper access to Medium or High impact BCS should involve “manually-initiated automated processes”. I’ll let you read the details, but this is really good advice, and might be a good principle for securing ICS in general.
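To make the risk-based idea in items 2, 3 and 7 concrete, here is a minimal sketch – my own illustration, not anything from Lew’s article or from the standard itself – of scoring communications paths by likelihood and impact and letting the score drive how aggressive the detection and disabling controls should be. All names, scales and thresholds are hypothetical.

```python
# Hypothetical sketch: prioritize remote-access controls by risk.
# Risk = likelihood x impact on simple 1-5 scales. Nothing here comes
# from CIP-005-6 itself; the tiers and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class CommPath:
    name: str
    likelihood: int  # 1 (e.g., serial link) .. 5 (e.g., Internet-facing routable)
    impact: int      # 1 (small rural substation) .. 5 (feeds downtown Manhattan)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

def recommended_control(path: CommPath) -> str:
    # Illustrative tiers: higher-risk paths get faster detection/disabling,
    # echoing Lew's "seconds or minutes, not hours or days" for high risk.
    if path.risk >= 15:
        return "automated detection; disable within seconds/minutes"
    if path.risk >= 6:
        return "monitored sessions; manually-initiated automated disable"
    return "periodic review; document accepted residual risk"

paths = [
    CommPath("vendor VPN to 345 kV substation ESP", likelihood=4, impact=5),
    CommPath("serial SCADA link to rural substation", likelihood=1, impact=2),
]
for p in paths:
    print(f"{p.name}: risk={p.risk} -> {recommended_control(p)}")
```

The point of the sketch is only that the same objective (detect and disable vendor remote access) can justify very different investments depending on the risk of each path – the $50 lock on the $10 bike test, in code form.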
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you – please email me. And keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013.

Sunday, June 16, 2019

Stumbling toward Armageddon

“Those whom the gods wish to destroy, they first make mad.”
- ancient Greek saying, wrongly attributed to Aeschylus

The New York Times ran a story on Saturday that starts with the sentence “The United States is stepping up digital incursions into Russia’s electric power grid in a warning to President Vladimir V. Putin and a demonstration of how the Trump administration is using new authorities to deploy cybertools more aggressively, current and former government officials said.”

The story goes on to discuss how government cyber warriors have implanted malware in the Russian grid – and for that to be able to do any damage, it has to be on control networks. Of course, there have been multiple reports that the Russians have planted malware in US control networks, including:

  1. The 2019 Worldwide Threat Assessment, prepared by the Director of National Intelligence, the FBI and the CIA. While the WTA doesn’t directly say the Russians are in electric utility control networks, it does say they’re in a position to cause multiple outages at will, which means the same thing;
  2. Vikram Thakur of Symantec, quoted in a January article in the Wall Street Journal, where he said that at least eight utilities had been penetrated at the control network level; and
  3. Chris Inglis, formerly deputy director of the NSA, who said recently that over 200,000 “implants” (the same word used in the NYT article, meaning malware) had been planted in water, natural gas and electric power infrastructure (presumably at least some of those in control networks of electric utilities).

To be honest, I would have been surprised to hear that the US wasn’t doing this to the Russians, and I’m glad they are. But the question is: What is the purpose of doing this?

Of course, there’s a very obvious reason for planting malware in the Russian grid (discussed at length in the article): Since the Russians have malware in our grid and could cause outages whenever they want (if the CIA and FBI are to be believed), the knowledge that we’ve done the same thing to them will make them much more hesitant to pull the cyber trigger on us. So we’re protecting ourselves, just as our huge nuclear missile and bomber fleets have so far protected us from nuclear attack from Russia (and previously the Soviet Union), under the Mutually Assured Destruction principle, also known as MAD.

But there’s a big difference between nukes and cyberattacks. A nuclear attack on anywhere in the US, even the middle of some desert, is totally unthinkable. There’s literally no risk we will accept that would open up the possibility – even if very very very small - of this happening. This means we will absolutely never launch a first strike against Russia, since we could wipe out the whole country, but their nuclear submarines would still survive to destroy most of the US. And the Russians will never launch a first strike against the US, for the same reason (of course, a dictator who doesn’t care about his countrymen might do that, if cornered. Kim Jong-un comes to mind here).

But a cyberattack isn’t at all unthinkable. A lot of these have been launched by state actors (including us, of course); the NotPetya attack caused $10 billion in damage worldwide (question 1: Who was responsible for that? Answer: Without a doubt, Russia. Question 2: Has Russia been in any way held responsible for this, or is there even any likelihood that they will be? Answer: You gotta be kidding! The same goes for the Malaysian airliner that was downed over Ukraine in 2014, the cyberattacks on Ukraine, etc. Do you notice a pattern here?) – yet we’re still able to go to the grocery store, write blog posts, etc. Of course, some people will die in a cyberattack on either our or Russia’s grid, but tragedies happen every day. Neither the US nor Russia considers a grid cyberattack to be an unthinkable event.

Of course, the Russians certainly wouldn’t launch a cyberattack on the US grid willy-nilly. But they might do it in response to some provocation, such as our killing Russian troops in Syria. US forces, when attacked, killed about 100 Russian mercenaries in Syria last year, but the courageous Mr. Putin pretended that they weren’t really Russians and didn’t take any measures against us (way to stand behind your guys, Vlad!). If it had been regular Russian troops or, say, airmen, Putin would have felt compelled to respond in some way. Or perhaps if the US directly aided a new offensive by the Ukrainians to drive the Russians and their allies out of the Donbass region of Ukraine – again, Putin would probably feel compelled to respond in some serious way, like causing blackouts in the US.

So let’s say the Russians black out a few major cities, although probably just for a few hours. What will we do then? The article makes it pretty obvious that we’ll probably launch a similar attack against the Russians. And given that their grid is less redundant and resilient than ours (and we’ll naturally want to cause more damage than they caused to us), it will probably be more destructive and kill more people. So what’s Russia going to do then? I’d say there’s a pretty good chance they’ll strike back. They might launch a broader cyberattack, perhaps hitting water and/or natural gas pipelines (although I find it hard to believe that a cyberattack alone could cause a serious natural gas disruption. However, water supplies are a bigger concern). And since real Russian civilians would presumably have been killed by our retaliatory cyber strike, they might even launch some sort of very limited military attack, which would kill even more US civilians and military personnel.

I think you see where this is going: Once the conflict moves into the military phase, it becomes very possible that a “limited” nuclear strike will be launched, perhaps on a US military base overseas, so it doesn’t kill a lot of US civilians. But then we launch a bigger nuke strike, and sooner or later we have a full nuclear exchange and that’s the end of civilization.

Of course, hopefully cooler heads will prevail and someone will step in and talk some sense into both participants before the confrontation goes that far. But that’s not enough. Sometimes, during a period of high tension, the word to stand down might not get through to every officer with nuclear weapons under his command. One guy thinks he’s doing the right thing, presses the button, and…

Which brings me to a good true story – events before and during the Cuban Missile Crisis in 1962. What set the crisis off was probably the US invasion of Cuba at the Bay of Pigs in 1961, along with the US installation at around that time of nuclear-armed Jupiter missiles in Italy and Turkey, aimed at the Soviet Union. The Soviets decided to retaliate by installing similar missiles in Cuba, where they were detected by a U-2 spy plane. President Kennedy then escalated the conflict by declaring a naval blockade of Cuba. But even though the Soviets moved a number of vessels into the waters around Cuba, there was no conflict. The Soviets backed down and removed the missiles, followed by the US removing the Jupiter missiles from Turkey and Italy. Nobody dead, not a shot fired in anger. It seems like a textbook case of how a well-controlled (on both sides) military confrontation can produce a satisfactory result for both parties, right?

No, that’s not right. The full story came out after the fall of the Soviet Union, showing how close the world came to Armageddon. It seems one of the submarines that the Soviet Union deployed to Cuba during the crisis was the target of depth charges foolishly dropped by a US Navy vessel, which was trying to get the sub to surface so it could be identified. The problem is that this sub had lost all communications with the outside world because of its depth, and the two commanding officers (one was actually the political officer that traveled on all Soviet naval vessels) reasonably believed that open war had broken out, and the depth charges were meant to destroy them.

This sub had nuclear torpedoes on board (only the US had submarine-based nuclear ballistic missiles at the time), and the two commanding officers decided to use one of them to sink the US ship that was dropping the depth charges. The Soviet navy’s protocol for using these weapons required the consent of both commanding officers, which would normally have meant the sub would have fired the torpedo. However, the commander of the flotilla happened to be on this submarine (which wasn’t normal), and because of this his approval was also required. This man, Vasily Arkhipov – who later rose to the rank of Vice Admiral – refused to approve the launch; the sub surfaced and was recalled to the Soviet Union.

It turns out it was very fortunate for the human race that Arkhipov was on the sub at the right time. As it happened, there was a cabal of hotheaded generals at the Pentagon (including Gen. Curtis LeMay, who later would run for Vice President with George Wallace, and was reported to have advocated nuking Vietnam). For them, a nuclear strike on the US ship would have been like manna from heaven, because it would have been an excuse to do what they had been advocating anyway: launch a first strike on the USSR before the Soviets were able to deploy the overwhelming number of ICBMs that the US already had in place (although the Soviets still had lots of nuclear bombers, and the US would probably never have been able to stop them all). They would have blamed the Soviets for the first nuclear strike and used that excuse to launch their attack – which of course would have been followed by retaliation from Soviet bombers that were in the air at all times during the crisis, as well as any land-based missiles that weren’t destroyed by the US strike. So even though the US might have technically survived, even just a few nuclear strikes on key cities would have made it a hollow “victory” indeed (remember, the bombs would have been thermonuclear, vastly more powerful than the bombs that destroyed Hiroshima and Nagasaki). And of course the fallout would have killed many more people, in the US and the Soviet Union as well as in adjacent countries.

The moral of the story? I suppose it’s good clean fun to deploy a bunch of malware on the Russian grid, to match (and maybe more than match) the malware the Russians have planted on ours. But actually retaliating against a grid attack with a grid attack of our own could very well lead into the military realm, which could then easily lead into the nuclear realm. And even though the controls on nuclear weapons are supposed to contain their use until the president has made the decision to use them, there can never be 100% certainty that those controls will hold.

If the Russians actually do bring down part of our grid, instead of retaliating in kind, we should turn to tools like sanctions, which seem to have caused a lot of real pain for Mr. P and his cronies. The only problem with the sanctions on Russia so far is that they haven’t been deployed at anywhere near the level they should be. For example, once it became clear that the Russians were responsible for shooting down Malaysia Airlines flight 17 (and a Russian parliament member admitted that about two weeks after the incident), I think Russian planes should have been banned from all international airspace until the Russians had admitted their involvement and paid full reparations to all of the 298 victims’ families, as well as to the Netherlands and other countries who lost their nationals or were otherwise affected. Of course, five years later the Russians have paid exactly $0.00, and I know of no current action to change that situation.

If instead of sanctions we retaliate against a Russian grid cyberattack on the US with a similar or greater attack on the Russian grid, we can be sure the Russians will retaliate for that, then we’ll retaliate for that strike, and so on. This will likely escalate to military retaliation, and then, even though the US and Russian leaders will hopefully behave responsibly, we’ll just have to pray that no US or Russian general or admiral anywhere in the world – on land, at sea or in the air – will become confused in the heat of the crisis and do something they shouldn’t. But what could possibly go wrong?

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you – please email me.

Thursday, June 13, 2019

Vendors vs. Suppliers

In the NERC CIPC Supply Chain Working Group, we’ve had an ongoing discussion about vendors vs. suppliers. It started when someone from a prominent power industry supplier pointed out that his CEO hated it when his company was called a vendor. “Vendors sell hot dogs” was his CEO’s memorable statement. The company now officially refers to itself as a supplier. This person requested that the SCWG stop using the term vendor and refer to any provider of BES Cyber Systems or services as a supplier.

However, he didn’t get very far in his argument. The first time we had the discussion, I pointed out that, while “vendor” isn’t a defined term in the NERC Glossary, it is used everywhere in CIP-013, and definitely in the majority of articles and papers written about CIP-013, to designate entities that provide BCS components to NERC entities. It would be very hard for the SCWG to single-handedly try to change this.[i]

At the SCWG’s meeting last week, held the day before the CIPC meeting started in Orlando, the subject came up again. Once again, there was very little support for trying to change the usage (and in response to the same statement that vendors sell hot dogs, I pointed out – a little meanly, I now admit – that I thought of crack dealers as suppliers, not vendors).

However, during that discussion, my good friend John Work of the Western Area Power Administration emailed me (he was listening to the webcast of the discussion, being unable to get to Orlando for it) that there was a good case to be made for using both terms: a supplier is the manufacturer of the hardware or the developer of the software, while the vendor is the organization that sells it.

I admit that at first I didn’t see that this distinction was going to make a difference, but this week, in the course of working with a client on their CIP-013 methodology, I began to realize it can be very helpful to use both terms, in the way John suggested.

The reason for this becomes evident when you think about having a contract with a vendor (in the original all-inclusive sense), and inserting cybersecurity provisions into it. If the organization both develops and sells the product, in my opinion it doesn’t matter what you call them – your contract is with the organization.

But what about the case where one organization manufactures or develops the product and another sells it? Big companies like Cisco, Microsoft and SEL develop or manufacture their products, but they don’t sell them; a dealer does that. So while your organization certainly has a contract with the dealer, is that really useful as a tool for implementing supply chain cyber risk management mitigations? Of course not; you really would need a contract with Cisco, Microsoft or SEL[ii]. But good luck getting one!

In such cases, using contract language to mitigate risks by requiring certain vendor security practices is simply impossible. You will need to admit that, while all of these companies have very good security practices now (and in fact Edna Conway of Cisco almost single-handedly invented the field of supply chain cyber security risk management), and they will all certainly try to address the concerns of the power industry with specific position papers describing their controls, in the end there is no utility that’s big enough to force one of these companies to do something it really doesn’t want to do. Ultimately, if a utility thinks that, for example, Cisco’s cyber risk controls are woefully insufficient for mitigating the risks that Cisco faces, it will need to either find another vendor or simply accept these risks (which BTW is allowed in CIP-013, while it’s still strictly verboten in the other CIP standards). There’s no other choice.

And now you may see what I just saw in the last day or two: It can really help if you distinguish suppliers from vendors. If your contract is with the vendor (e.g. the Cisco dealer), it simply isn’t going to be a vehicle for mitigating much supply chain cyber risk, although it’s certainly necessary for other reasons. It’s only if your contract is with the supplier (or if the supplier is also the vendor) that you will be able to use it to mitigate a lot of supply chain cyber risk.

Of course, whether you distinguish vendors from suppliers, whether you call them all vendors (as CIP-013 does now), or whether you call them all suppliers (as the CEO of the industry supplier wanted. And BTW, that company sells direct to utilities, so they are both a supplier and a vendor in John’s nomenclature), there is no compliance implication. You can call them anything you want in your compliance documentation, since “vendor” isn’t a NERC Glossary term. But I think it’s very helpful to distinguish vendors from suppliers as you develop your supply chain cyber risk management plan, because you will probably need to take different actions in the two cases.
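One way to operationalize John’s distinction – this is my own sketch, not anything the SCWG has published – is to record, for each procurement, both the supplier and the vendor, and flag which of the two your contract is actually with, since only a contract with the supplier is likely to be a useful vehicle for security terms. All field names here are hypothetical.

```python
# Hypothetical sketch: a procurement record that separates the supplier
# (who makes the product) from the vendor (who sells it), and notes
# whether contract language can realistically carry security terms.

from dataclasses import dataclass

@dataclass
class Procurement:
    product: str
    supplier: str   # manufacturer/developer of the hardware or software
    vendor: str     # organization you actually buy from
    contract_with_supplier: bool

    def contract_can_carry_security_terms(self) -> bool:
        # If your only contract is with a dealer, contract language won't
        # bind the supplier's security practices; plan to mitigate those
        # risks another way (questionnaires, accepted risk, etc.).
        return self.contract_with_supplier

# The two cases discussed above: buying through a dealer vs. buying direct.
router = Procurement("router", supplier="Cisco", vendor="local dealer",
                     contract_with_supplier=False)
relay = Procurement("relay", supplier="RelayCo", vendor="RelayCo",
                    contract_with_supplier=True)
```

The flag is the whole point: it forces you to notice, early in plan development, which risks can be pushed into contract language and which will need a different mitigation.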

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you – please email me. And keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013.

[i] The CIP-013 drafting team originally drafted a definition, but they withdrew this after it – I assume – received a lot of negative comments on the first ballot. After that, the term was defined in one of the notorious blue boxes in the later CIP-013 versions. A lot of people had the impression that these boxes were actually part of the standard (and it may have influenced how they voted), since they were found in the middle of the requirements. However, NERC – which was at the time going through one of its periodic identity crises, this time regarding what kinds of “guidance” they could provide – ended up declaring that the items in the blue boxes weren’t officially part of the standard. They were moved to another area (not officially a part of the standard) when CIP-013 was approved by NERC and submitted to FERC for their approval.

[ii] Even a pure dealer will be the subject of some supply chain risk – for example, they may not take proper measures to secure the product before shipment, and it could be tampered with en route; that risk needs to be mitigated, using contract language or another means. And if the dealer also installs the product, there’s a lot of risk to mitigate (which of course is also covered by CIP-013, since risks of installation need to be mitigated just as much as pure procurement risks).

Friday, June 7, 2019

Good ideas on CIP-013 compliance from the SCWG head

On Tuesday morning in Orlando, the Supply Chain Working Group (SCWG) of the NERC Critical Infrastructure Protection Committee (CIPC) presented a “training” on supply chain cyber security risk management and CIP-013 compliance for people attending the CIPC meeting. The meeting included presentations of five draft white papers (all short!) on different aspects of supply chain security risk management.

Each paper was developed by a subgroup of the SCWG, and they were presented on Tuesday by the leaders of those subgroups. The papers were all the product of a lot of work by SCWG members, and they were all very good – including, as I say in all modesty (or at least as much as I’m capable of. Modesty has never been my strong suit), two papers for which I led the development, Supply Chain Cyber Security Risk Management Lifecycle and Vendor Risk Management Lifecycle. The topics of the other three papers are Provenance, Open Source Software, and Secure Equipment Delivery.

The papers will be reviewed and (hopefully) approved by the CIPC in the near future, at which point they’ll be posted on NERC’s SCRM/CIP-013 portal, which I recommend you bookmark (it happens to be one very useful and very easy-to-access feature of the NERC website. In general, I’ve always thought of the NERC website as a great place to hide critical infrastructure information – al Qaeda and ISIS will never find it there! – but maybe I’ll need to revise that opinion).

The SCWG is working on three other white papers, on Cloud Service Provider risks, Threat-Informed Procurement Language, and Vendor Identified Incident Response Measures. These aren’t finished yet, but they will also be posted on the portal. And it’s possible that the SCWG will continue to develop other white papers, since in the process of developing these we identified lots of other topics that should be addressed (for example, in developing the Vendor Risk Management Lifecycle paper, my group identified at least 5-10 other white papers that would be helpful for NERC entities as they develop their CIP-013 plans). The papers were limited in length (three pages was the target, although both of mine spilled onto a fourth page), so they all needed to focus on one idea, and one idea only. But it was felt (and I agree with this) that if the papers went beyond what could be digested in one sitting, people wouldn’t read them. Better to have lots of short papers that people read than a few long ones that they don’t.

However, the SCWG will post some items to the NERC portal before the papers are approved (hopefully within a couple of weeks), including the slide decks used by the five presenters on Tuesday; a recording of the entire set of presentations (along with the slides); and the slides used by Tony Eddleman, the head of the SCWG (I think his title is Chairman; I heard that he would have preferred Grand Exalted Leader, but I believe NERC shot that down), as he discussed CIP-013 compliance.

I didn’t take notes on Tony’s entire presentation, so I don’t want to try to summarize it, except to say that this was by far the best presentation on CIP-013 that I’ve seen yet from anybody associated with the NERC ERO (and Tony of course isn’t a NERC staff member. His day job is NERC Compliance Manager for the Nebraska Public Power District). Since I didn’t take enough notes to produce a summary of what he said, I’ll just list the points that struck me as very interesting (and I’ll admit that I filled out a few of Tony’s ideas with related ideas of my own. If you want to hear exactly what Tony said, listen to the recording!).

  • He emphasized that CIP-013 R1.1 is a real requirement. Many people have just focused on the six items in R1.2, and consider those the totality of what CIP-013 “requires” an entity to do. But as I’ve said many times before, while those six items all constitute very important controls that address serious supply chain security risks to the BES, they are only listed because FERC mentioned, at different places in Order 829 in 2016, that they wanted them included in the new standard. So the drafting team dutifully included them.
  • R1.1 requires the NERC entity to develop a supply chain cyber security risk management plan. To develop that plan, the entity needs to identify the supply chain security risks that it considers most important, as well as identify mitigations for those risks (although the drafting team left the word “mitigate” out of R1.1, there’s no doubt that entities need to mitigate risks, not just identify them and call it a day).
  • Because each entity will identify different risks as important to it, the plans will definitely differ from entity to entity (although they will undoubtedly have many common elements).
  • If your entity is compliant with another standard that addresses supply chain security (e.g. NIST 800-53), you still need to develop an R1.1 plan. Similarly, if a vendor tells you they have a certification like ISO 27001, you still need to make sure the certification addresses the particular risks that you think are important. Different certifications address different risks, and of course there is currently no certification that specifically addresses supply chain cybersecurity risks to the BES (although Howard Gugel of NERC said at the CIPC meeting that afternoon that NERC will work with EPRI to develop one).
  • NERC entities don’t need to have an RFP on the table in order to ask security questions of vendors. They should do so when needed, although of course they shouldn’t ask the questions so often that they impose a significant additional cost on the vendor – one the vendor presumably never anticipated when it quoted its prices, in an RFP response or otherwise. There is more discussion of this topic in the Vendor Risk Management Lifecycle paper that I presented; anyone wishing to see the draft I discussed on Tuesday can email me.
  • While contract language is usually a good way to mitigate risks posed by vendors, this isn’t an option for purchases made by credit card, purchases through dealers (as is often or usually the case with large suppliers like Cisco and Microsoft), or purchases made using a standard PO. Tony’s point was that the vendor risks, which the entity might otherwise mitigate through contract language, need to be mitigated in any case. The Vendor Risk Management Lifecycle white paper discusses other ways to document that a vendor has committed to do something, besides contract language. That paper also briefly mentions steps the NERC entity can take on its own, if the vendor simply refuses to cooperate but is too important to terminate. The risks identified in the entity’s supply chain cyber security risk management plan need to be mitigated whether or not the vendor cooperates – although they can usually be mitigated more effectively if the vendor does.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you – please email me. And keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post, especially compliance with CIP-013.

Friday, May 31, 2019

An ex-auditor makes a great comment on vendor risk

Kevin Perry, former Chief CIP Auditor of SPP RE, retired last year, but still reads my posts (after all, what better way to spend your retirement?) and has often corresponded with me on them – as he often did while he was an auditor. He sent me the comment below regarding my most recent post on CIP-013. He said:

I look at it this way... the contract language or other documented agreements simply show what you agreed to, and don’t guarantee performance. The RFI and other procurement solicitation documentation shows you tried, even if the vendor will not agree to your requests. But what you really need to focus on is managing your own risk and not assigning it to the vendor. What can you do to mitigate vendor risk, as opposed to what will you presume the vendor is doing to mitigate your risk? If you approach the issue with an assumption that the vendor will fail, then your mitigation will be better than if you assume the vendor has your back. It is not much different than network security between two companies: you assume something bad will get onto your partner’s network, and thus you build your own defenses at your perimeter.

I couldn’t have said it any better! And I would certainly have taken a lot longer to say it…

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you – please email me. And keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post, especially compliance with CIP-013.

Tuesday, May 28, 2019

Lew Folkerth’s latest article on CIP-013, part II

Three weeks ago, I wrote a post on Lew Folkerth’s latest article (i.e. his third) on supply chain cybersecurity risk management/CIP-013. When I set out to write it, I thought I could do this in a single article. This was silly, since for each of the first two articles in Lew’s series, I’ve written two posts. This one turned out to be the same, so here is my eagerly-awaited second post on Lew’s third article.

I want to point out that, in the first post, I gave directions for downloading Lew’s third article from the RF site, and said it would be a day or two before you’d be able to do that. Instead, it turned out to be a few weeks (Lew had been overly optimistic when he told me it would be up that soon; the process for putting it up includes legal review and…well, you know how that can be sometimes). In any case, you can download the article now. It is item 31 on the list you’ll see (i.e. the last one).

Lew did point out in this article that it won’t be his last on the subject, since there will be a fourth (presumably in the next issue of RF’s newsletter, which will be out in June). That one will be on the new versions of CIP-005 and CIP-010, which will come into effect when CIP-013 does (7/1/20, in case you don’t have that tattooed on your arm yet).

I prefaced the first post by saying that I think what Lew said in this article is all very good, but I do have some disagreements. I went on to list four of these, although I admitted that the last was really a criticism of the drafting team for being unclear. On that one, my interpretation of R1 differs from Lew’s, but neither of us can be said to be right or wrong, given the ambiguity in the requirement itself (although if you really want to go to the root of the problem, you need to blame FERC. They very unrealistically gave NERC only one year to a) develop the new standard; b) submit it to a vote by the NERC ballot body, multiple times; c) get it approved by the NERC Board of Trustees; and finally d) submit it to FERC for approval. The drafting team accomplished this, but they naturally didn’t have much time to ask themselves whether the wording was completely clear. Of course, once NERC submitted CIP-013 to FERC, the Commission then took 13 months just to approve it! So much for the big rush…).

My next disagreement with Lew is also at heart a complaint about the drafting team, since this is about another question for which there’s no right or wrong answer: What are the primary elements that need to be in your supply chain cyber security risk management plan, which is mandated by R1.1? Lew’s article lists three elements (which he calls “required processes”):

  1. “Planning for procuring and installing” (it would be clearer if this were followed by something like “systems in scope” – which are currently BES Cyber Systems, but will include EACMS and PACS in a couple of years. There also seems to be a big movement by FERC at the moment to include Low impact BCS in some way; it seems they’re being driven by Congress in this matter. However, the NERC ballot body will first need to approve a SAR, and before one can even be drawn up, NERC wants to submit a Data Request to the membership on Low BCS and analyze the results. So this isn’t a near-term likelihood).
  2. “Planning for transitions” (i.e. transitions between vendors. This is specifically referred to in R1.1).
  3. “Procuring BES Cyber Systems”

Lew says in his article that he thinks the first item (i.e. “Planning”) is the goal of R1.1 and the third (“Procurement”) is the goal of R1.2. In my first post, I explained one reason why I think he’s wrong on this: R1.2 doesn’t have any special purpose. It’s there simply because FERC, at random places in their Order 829, said that these six items should all be included in the new standard; the drafting team decided to group them all together in R1.2. My interpretation of the purpose of R1.1 is that it requires identification and assessment of five types of supply chain cybersecurity risks, which I’ll list in a moment. My interpretation of the purpose of R1.2 is that it simply lists six mitigations that must be included in the plan, but they are far from being the only mitigations in your plan! As Lew makes clear in his article, which concludes with a list of 13 important risks that he thinks NERC entities should consider in their plans, there are a lot of other important risks and mitigations to consider – not just the six (actually eight, since two of these have two parts) in R1.2.

But there’s another reason why I don’t think R1.1 is about planning and R1.2 is about procuring. Lew forgot that both R1.1 and R1.2 are simply callouts from R1 itself, which reads “Each Responsible Entity shall develop one or more documented supply chain cyber security risk management plan(s) for high and medium impact BES Cyber Systems. The plan(s) shall include…”.

In other words, your R1 plan must include two things. The first is a process for identifying and assessing supply chain risks to the BES (R1.1), while the second is the six specific items (mitigations) that FERC said must be included in your plan (R1.2). I think it’s a mistake to read anything more than this into R1.1 and R1.2.

As I said earlier, I believe there are five areas of supply chain security risk that need to be included in your R1.1 plan. They are all mentioned in R1.1, but I’ll admit that a couple of them are very well hidden:

  1. Procurement of BCS hardware and software. This is of course the one that everyone talks about; in fact, I’m sure most people now think that CIP-013 is all about this one area of risk. I agree it’s by far the most important of the five areas, but the entity needs to address the other four areas as well – although, since this is a risk-based standard, there’s no obligation for the entity to devote the same amount of effort to each of these five areas, if they don’t think the other four pose the same degree of risk as this one.
  2. Procurement of BCS services. R1.1 says your plan must identify and assess risks to the BES arising from “vendor products or services”. I’m surprised Lew doesn’t specifically mention services in his three required processes, but he certainly does so elsewhere in his article.
  3. Installation of BCS hardware and software. FERC made it very clear in Order 829 that they were almost as worried about insecure installation of BCS as they were about insecure procurement of them in the first place (they had the first Ukraine attacks in mind, which had happened seven months earlier. In those, a big contributing factor was that the HMI’s were installed directly on the IT network, which shouldn’t have happened had there been a proper assessment of installation risks). So R1.1 specifically states the entity should assess risks of “procuring and installing vendor equipment and software”.
  4. Use of BCS services. This isn’t specifically stated in R1.1, but I think it’s directly implied by the words “procuring and installing”. You don’t “install” services, but you do use them. And I think it’s clear that FERC wanted this, since they mandated three items in R1.2 that involve risks from vendor services after they have been procured: R1.2.3 deals with vendor service employees who leave the company; R1.2.5 deals with patches provided by vendors, which are a service (although not one that most software vendors charge for); and R1.2.6 deals with vendor remote access, which is of course also a post-procurement vendor service. So these are three examples of risks arising from the use of vendor services.
  5. Transition between vendors. This is explicitly called out in R1.1, and it’s also on Lew’s list.

This next item isn’t a disagreement with Lew at all: I was very interested that he made a point of saying that, even though you have lots of freedom in drawing up your risk management plan in R1.1, when you get to R2, that freedom goes away. You must implement your plan as written, and if you don’t, you’ll potentially be in violation of R2. So, while you should certainly do your best to identify, assess and mitigate risks in R1.1 and R1.2, you do need to be careful not to promise to mitigate more risks than you will be able to handle. For this reason, I’ve advised my clients to be conservative in committing to mitigate risks in their R1.1 plans - e.g. they might just commit to mitigate those they rank as high, on a low/medium/high scale. If it turns out later that they decide there are other risks they should mitigate as well, they can always add them to the plan and not risk being in violation.
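To make that advice concrete, here is a minimal sketch – entirely my own illustration, with hypothetical risk names, scoring scale and threshold; nothing in CIP-013 prescribes any of this – of a risk register where only the risks ranked “high” on a low/medium/high scale are committed to mitigation in the R1.1 plan:

```python
# Hypothetical risk register entries: (description, likelihood 1-3, impact 1-3).
RISKS = [
    ("Vendor patch tampering in transit", 2, 3),
    ("Unverified remote vendor access", 3, 3),
    ("Counterfeit hardware from secondary market", 1, 2),
    ("Vendor fails to report terminated employees", 2, 2),
]

def rank(likelihood: int, impact: int) -> str:
    """Map a likelihood x impact score onto a low/medium/high scale."""
    score = likelihood * impact
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Commit to mitigate only the risks ranked "high"; everything else stays in
# the register and can be promoted into the plan later.
committed = [desc for desc, l, i in RISKS if rank(l, i) == "high"]
```

The point of the filter is exactly the compliance logic described above: a risk left out of the committed list can always be added to the plan later, but a committed mitigation that slips becomes a potential R2 problem.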

Side note: Someone who hasn’t already been put to sleep by this post (I’m sure that applies to one or two people at least, maybe three or four) will jump up and yell (loud enough for me to hear in Chicago) “You inserted the word ‘mitigate’! But that isn’t in R1.1!” And that person would be absolutely right. R1.1 doesn’t say anything at all about having to mitigate the risks you “identify and assess”. It’s as if the drafting team were saying “All we care about is that you know what your risks are. But no matter how big and hairy they are, rest assured that we don’t expect you to do anything about them. Once you’ve identified and assessed your supply chain cyber risks, you can forget all about supply chain security, throw that list in the trash can, and go back to your normal activities (perhaps CIP-002 through CIP-011 compliance, or perhaps just lying on the beach).”

However, this would make absolutely no sense. FERC didn’t order NERC to develop a supply chain standard just because they thought it would be a really interesting intellectual exercise for NERC entities to identify their supply chain risks; they did it because they thought (and still think) that supply chain is one of the most critical sources of cybersecurity risk worldwide and across all industries (although especially for the power industry, with the Russian attacks being exhibits A, B and C), and those risks need to be mitigated. They even ordered six specific mitigations be included in the supply chain security plans, which the SDT collected into R1.2. And literally every document that’s been written about CIP-013 (e.g. the SDT’s own Implementation Guidance) focuses on mitigating risks, not just on identifying risks in the first place.

So even though the word “mitigate” isn’t in R1.1, I definitely think you should read it in after the words “identify and assess” – i.e. the entity’s plan needs to identify, assess and then mitigate supply chain cyber risks to the BES, not just identify and assess them. I don’t think your CIP-013 plan will fare very well at audit if it just lists risks but says nothing about mitigating them!

Lew also points out that the advice to implement your plan exactly as written doesn’t apply in the case of prescriptive requirements, like most of those in CIP-002 through CIP-011. He gives this illustration: “…if your personnel risk assessment process created by CIP-004-6 Requirement R3 says that you will perform personnel risk assessments every five years, but you miss that target by a year for some personnel, then that should not be a violation as you are still within the timeframe prescribed by the Standard.” (Of course, that timeframe is seven years.)

However, since your CIP-013 R1 plan is itself what you are required to implement in R2, this means that any significant shortfall in implementing it might be considered a possible violation of R2. My guess is a moderate shortfall won’t be considered a possible violation, but the auditors could issue an Area of Concern, asking you to fix this problem by the next audit. Either way, it’s better to simply do for R2 exactly what you said you’d do in R1. And the converse of this is that you shouldn’t commit to doing more in your R1 plan than you are sure you can accomplish in R2. I think this is the biggest source of potential compliance risk in CIP-013-1.

Near the end of his discussion of R2, Lew says “Both contract language and vendor performance to a contract are explicitly taken out of scope for these Requirements by the Note to Requirement R2. I recommend that you do not rely on contract language to demonstrate your implementation of this Requirement.” I both agree and disagree with these statements, but that takes a bit of explaining:

  1. I disagree that contract language and vendor performance (although that should really be “non-performance”. There’s definitely no risk that you’ll be held in violation if your vendor performs what they said they’d do!) are “taken out of scope” by the note to R2. You should still try your best to get a vendor to agree to contract language that you request, and you should still try to get them to do what they said they’d do.
  2. The note in R2 is really saying that neither the actual contract terms and conditions, nor the fact that a vendor didn’t do what they promised, can be the subject of required evidence for compliance; and if your auditor asks you to provide this evidence, you are within your rights to refuse.
  3. I doubt that Lew would disagree with the above two statements, but I think he’d still be missing the real point: It doesn’t matter how you document the fact that your vendor has agreed to do something. They might do this in contract language, they might give you a letter or an email, or they might just state it verbally. The big question is, did you verify whether they kept their promise or not?
  4. You do need to have evidence about what you did to verify that the vendor did what they promised. Maybe it will be a letter or emails you’ve saved. Maybe it will be notes regarding phone calls. Maybe it will be documentary evidence, like screen shots showing that the vendor digitally signed their software patches, or a vendor’s documentation of their procedures for system-to-system access to your Medium BCS. The best evidence will vary according to the particular promise the vendor made, but there will always be some evidence you can gather.
  5. Of course, if the vendor simply refuses to let you know whether they’ve kept their promises, you’re certainly not in violation because of that. But you very well may be in violation if you haven’t even tried to verify whether or not they kept their promises in the first place.
  6. Another point that goes beyond what Lew said: What happens if the vendor refuses to promise to do anything? Do you just throw up your hands and say “Oh well, we tried”, and move on to the next challenge? No. If you’ve identified a risk as being important enough to mitigate, you have to take some steps to mitigate it. If the vendor refuses to notify you when they’ve terminated someone who had access to your BCS, you can deny their employees any unescorted physical access to your BCS – which will raise costs for the vendor (and also for you, of course). If the vendor refuses to promise to improve their security practices in their software development environment, you can put their systems on the bench and scan and test them for a month or two before you install them – or better yet, you can stop buying that vendor’s software! There’s always something you can do to mitigate the risk in question, even absent any cooperation from the vendor.
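As one hedged illustration of the verification evidence discussed in points 4 and 5 above: if a vendor publishes SHA-256 checksums for its patches (an assumption – not every vendor does), a short script like this sketch can confirm that a downloaded patch matches the published digest before installation, and the logged result can be retained as audit evidence. The file paths and digests here are hypothetical.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_patch(path: str, vendor_published_digest: str) -> bool:
    """True only if the downloaded patch matches the digest the vendor published."""
    return sha256_of(path) == vendor_published_digest.strip().lower()

# Hypothetical usage: compare the downloaded file against the digest taken
# from the vendor's advisory, then log pass/fail, file name and date.
# ok = verify_patch("patches/fw_update.bin", "<digest from vendor advisory>")
```

Note that a hash match demonstrates integrity; authenticity – confirming the digest itself really came from the vendor, e.g. via a signed advisory – is a separate check that this sketch doesn’t cover.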

But I do agree with Lew that there is far too much emphasis on contract language as the best way to mitigate supply chain risks. For one thing, contract language just isn’t a good option for some entities (especially Federal government ones), which have little to no control over contract terms. For another, how many of you have a contract directly with Cisco? I would guess the answer is few to none, since almost nobody buys their Cisco gear directly from Cisco. You buy it through a dealer or integrator, and you probably have some sort of contract directly with them. But the dealer isn’t going to make any commitments on behalf of Cisco – and it’s Cisco that needs to give you a means of, for example, verifying patch integrity and authenticity, not the dealer.

A third example: If you ever buy something from Best Buy or eBay, I can promise that you’re not signing a contract guaranteeing the manufacturer has certain cybersecurity controls in place (there is a good discussion of contract language in the white paper on “Vendor Risk Management Lifecycle”, currently being drafted by the Supply Chain Working Group. These papers will be presented to the CIPC at their meeting in Orlando next week, and sometime after that, if approved by the CIPC, will be available on NERC’s website. If you want a preview of the argument in that document, you can send me an email).

So I was quite happy to see that Lew doesn’t subscribe to this mistaken view of contract language as the be-all and end-all of CIP-013 compliance. But I’m sorry to say that his reasons are wrong. If an entity wants to rely on contract language as its preferred means of documenting that its vendors have agreed to implement certain security controls, that’s fine. Of course, the auditor won’t be able to compel the entity to show the contract itself, but what’s really needed is evidence that the entity made some effort to verify whether the vendor was keeping its promises – no matter whether those promises were inscribed on golden tablets and stored in Fort Knox, or written in disappearing ink on a scrap of parchment in a bottle buried on the beach of a desert island. That is the evidence that’s important.

This concludes my analysis of Lew’s third article on supply chain cyber risk management and CIP-013 compliance. As soon as the fourth article on this topic comes out in June, you can be sure I’ll have something to say about that as well!

And by the way, Lew, I hope this won’t be the end of your articles on CIP-013, since there’s much more to be said. I know what, let’s make a deal…You can stop writing your articles when I stop writing blog posts about CIP-013. I think that will be around 2030. By that time, I figure everything will have been said that needs to be said.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you – please email me. And keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post, especially compliance with CIP-013.

Thursday, May 23, 2019

Just in case you thought the Russians were our friends…

Blake Sobczak of E&E News struck again yesterday, with another great article that points to the heart of the biggest cybersecurity threat faced by the power industry today: namely, the ongoing Russian campaign to penetrate the grid and plant malware in it. And as usual, he didn’t have to jump up and down to make his point – he merely quoted from an important government official.

This article unfortunately isn’t available outside of the paywall, but I’m of course free to excerpt from it. Here is essentially the first half of the article (which is quite short – something that’s unusual for Blake’s articles, as well as my posts. Both of us understand the importance of not letting a foolish concern with conciseness get in the way of saying what needs to be said!):

Russian hackers pose a greater threat to U.S. critical infrastructure than their Chinese counterparts, a former intelligence official warned water utility executives in Washington yesterday.

"When I think about the Chinese and the Russians, they're both dangerous: Both of those are in conflict with us," said Chris Inglis, former deputy director of the National Security Agency. "But the Russians are far more dangerous because they mean to do us harm. Only by doing us harm can they achieve their end purposes."

Beijing poses a major cyberespionage threat to U.S. companies but, in contrast to Russia's government, can be more effectively deterred based on its close ties to the American economy, Inglis said at a cybersecurity symposium hosted by the National Association of Water Companies.

"Why are the Russians, as we speak, managing 200,000 implants in U.S. critical infrastructure — malware, which has no purpose to be there for any legitimate intelligence reason?" asked Inglis, now managing director at Paladin Capital Group and a visiting professor at the U.S. Naval Academy. "Probably as a signal to us to say: We can affect you as much as your sanctions can affect us."

I was actually surprised to see this, since everything else I’ve seen or heard from the Federal government recently seems to downplay a) the threat posed by Russia’s ongoing attacks on the US grid and especially b) the success the Russians have had so far (of course, it’s probably significant that Mr. Inglis isn’t currently part of the government. The article mentions that he may lead the NSA in the near future, and if he does, I hope he doesn’t catch the strange bug that seems to have infested a lot of his former colleagues on the cyber ramparts of the US economy). He says two important things:

  1. The Russians’ purpose is clearly malign – to have the capability to cause significant disruption to our society (to say nothing of disabling US military bases - as described in a January article in the Wall Street Journal), and perhaps even to cause a cascading power outage that could immobilize a lot of the country; and
  2. They have already had a significant amount of success, evidenced by the fact that they are currently managing (i.e. the devices are already in place and connected to C&C servers) 200,000 “implants in U.S. critical infrastructure”, which presumably includes other CI industries like oil and natural gas pipelines, water treatment plants, oil refineries, and petrochemical plants, besides power facilities. 

I’m also very impressed that Mr. Inglis gives short shrift to the popular (again, in current Federal government circles) idea that the Chinese and Russian attacks on US critical infrastructure are essentially two peas in a pod. Here’s the quote again: "But the Russians are far more dangerous because they mean to do us harm. Only by doing us harm can they achieve their end purposes." Amen, brother. And he’s not the only person saying this: the Russians themselves are!

A paragraph after the above section, the article says “Energy and water utilities' interest in Chinese and Russian cyberwarfare capabilities has spiked since January, when U.S. intelligence director Dan Coats assessed that either country could disrupt U.S. critical infrastructure by cutting off a gas pipeline or temporarily disabling part of the power grid.”

You know, I’d almost forgotten about that! The Director of National Intelligence, as well as the heads of the FBI and CIA, went before the Senate Intelligence Committee in January to discuss their Worldwide Threat Assessment for 2019, which said “Moscow is now staging cyberattack assets to allow it to disrupt or damage U.S. civilian and military infrastructure during a crisis.”

In normal times, one would have expected this story to set off a frenzy of activity in the Federal government and the power industry to investigate what actually happened, so that the malware could be identified and rooted out, and so that defenses could be beefed up to prevent further penetration. But these are evidently not normal times, since despite my complaints (or perhaps because of them), there is no visible movement on the part of anybody with responsibility for grid security to investigate what the report says. This is in stark contrast to the Ukrainian attacks in 2015, which set off a firestorm of investigations, reports, classified and unclassified briefings, etc. Why am I concerned about this, you ask? After all, why would I expect the US government to treat the US and Ukrainian grids equally? I didn’t expect that, of course. But I kinda figured they would be more concerned with the US grid than the Ukrainian one. Silly me.

So now we have Mr. Inglis putting a number on the problem, saying there are 200,000 implants already in place. This is about a thousand times more than I would have suspected. Surely this will set off a real investigation, right?...Ya gotta be kidding.

To quote the ancient Greeks, “Those whom the gods wish to destroy, they first make mad.”

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you – please email me. And keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post, especially compliance with CIP-013.

Tuesday, May 21, 2019

Our RSA panel – recording available!

I must confess that, at least two months after it became available, I only today listened to the recording of the panel I was on at the RSA Conference in early March this year. Our title was “Supply chain security for critical energy infrastructure.” We had all agreed afterwards that it went very well, and guess what…I can now confirm it went very well!

The panel members were the same as in 2018: Dr. Art Conklin of the University of Houston (an oil and gas security expert); Marc Sachs, former NERC CISO and head of the E-ISAC; and me. What was different was that our 2018 moderator, Mark Weatherford, had to bow out and was replaced this year by Sharla Artz, VP of Government Affairs, Policy & Cybersecurity at UTC.

Last year, the conference told us that some reviews pointed out we agreed with each other too much, which made the session kind of boring. This year, even though we didn’t consciously try to pick fights with each other, there was lots of (friendly) disagreement. But more importantly, I think the content is very good, both in the panelists’ discussions and in the Q&A afterwards (where we got some really good questions). You may want to listen to the recording: there are a lot of good points about supply chain security, CIP-013 and cyber regulation in general.

Plus a lot of humor. Marc had had neck surgery recently because – as he said – he’d jumped out of too many airplanes when he was in the service, so he was wearing a neck brace. There were various jokes that the real story was that we’d gotten into a fight at a bar the night before, when we met to discuss the panel (that’s patently not true. We didn’t meet in a bar) - although I helpfully pointed out to Marc afterwards that the next time he jumps out of an airplane, he should wear a parachute! He thanked me for this good advice, but said his doctor says no more jumping out of airplanes.

My favorite part (which I didn't remember until I listened to the recording) was around 15:30 in the recording, when Art told a great story about risk management. He said that the security people at DoD had wanted to spend a lot of money (and since we're talking about DoD, I assume this is a whole lot of money) on some sort of widget that would solve some security problems for one part of the organization.

When they went to the higher levels of DoD to get the funding, they were asked whether there was some way in which DoD could spend the same amount of money - or even less - and mitigate a greater amount of risk. The security people answered "Sure, we could upgrade the whole Department to Windows 10 and finally get rid of all the old versions that are hanging around, causing security nightmares." But they were told "Oh no, we can't do that. It would be too hard."

So DoD went with the widget solution and spent more money mitigating less overall risk, because it was easier. This is a great example of why any security program should start with a risk assessment and focus resources on the threats that pose the most risk; only by doing this can the entity be assured of getting the most bang for its buck, in terms of total risk mitigated. And guess what! Not only is this the best way to comply with CIP-013, you're actually required to do it by R1.1!
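The logic behind Art's story can be sketched in a few lines of Python: rank each mitigation option by risk mitigated per dollar spent, and fund the best-value option first. The option names and all the numbers below are made up purely for illustration; they are not figures from the DoD story.

```python
# Hypothetical mitigation options: (name, cost in dollars, estimated risk
# mitigated in arbitrary "risk units"). These figures are invented.
options = [
    ("security widget", 5_000_000, 200),
    ("department-wide OS upgrade", 5_000_000, 900),
]

# Rank options by risk mitigated per dollar, best value first.
ranked = sorted(options, key=lambda o: o[2] / o[1], reverse=True)

for name, cost, risk in ranked:
    print(f"{name}: {risk / cost:.2e} risk units per dollar")

best = ranked[0][0]
print("Best value:", best)
```

With these (invented) numbers, the upgrade mitigates more risk for the same spend, so a risk-based program would fund it first; the story's point is that DoD inverted that ordering because the widget was easier.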

All four of us are hoping we’ll be chosen to do the panel again next year, and that we’ll all be able to participate again. I think the group has developed a great collaboration style, so that the discussion is both very entertaining and very informative.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you – please email me. And please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like those discussed in this post – especially compliance with CIP-013.

Wednesday, May 15, 2019

A great paper on software security

The NERC CIPC Supply Chain Working Group is now preparing six white papers on supply chain cyber security risk management, which will be presented to the CIPC members (and anyone else who wants to attend) on the morning of June 4 in Orlando, before the next CIPC meeting. As I anticipated when I joined the group, the white papers will be nice to read in their final versions, but the greatest benefit of participating in developing them is the meetings themselves, at which some very knowledgeable people (along with me!) have very good discussions that usually go far beyond the subject matter of the white papers.

The papers are constrained to around three pages each, so their purpose is mostly to point out issues in general terms rather than to solve them. However, I can promise they’ll all be excellent. The SCWG will meet on the afternoon of June 3 to discuss the way forward, including whether it should continue developing more papers, since – surprisingly enough – there are far more than six topics that could be discussed in the area of supply chain security for the BES!

One of the groups drafting these papers – the one covering risks associated with open source software – is led by George Masters of SEL. Last week he said he’d just read a very good paper on software security and recommended we all take a look at it. I did, and I completely agree with him that it’s excellent.

Of course, you can decide for yourself what you think of the paper, but I highly recommend you at least look at it. You’ll notice that the authors are quite critical of the software industry in general. In my opinion, they overstate their case in implying that software consumers are being duped by developers who don’t make much effort to find vulnerabilities in their software before shipping it, since they can always find and patch them later.

Even if it’s really that bad, I – speaking as a software consumer who has approved many EULAs I haven’t even looked at – don’t feel misled at all. I know quite well that there will be various vulnerabilities in the software, and that new ones will be found all the time. I also know that a greater diligence effort by the developers could probably make a big dent in the problem – but that it would also have a sizable impact on the price of the software. Am I willing to make that trade-off? In practice, I don’t have the choice: the market as a whole has already opted for cheaper software with more potential vulnerabilities. Consumers can either buy it or not, but they’re not being totally misled in making that choice.

I also feel the authors exaggerate the duplicity of the developers. Take Microsoft. I can remember in the ’90s and early ’00s when their security level was somewhere around zero, if not less – Windows 95 comes to mind. But they made a very concerted effort to change that, and I don’t feel at all insecure using Windows 10.

In any case, this is definitely a paper you should look at, and ideally read in full.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you – please email me. And please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like those discussed in this post – especially compliance with CIP-013.