Monday, December 30, 2019

Second (actually, third) thoughts about the cloud



Until last summer, I frequently said something to the effect that there’s no good reason – other than the current wording of CIP-004 – why NERC entities shouldn’t be able to store information about their OT systems in the cloud. After all, I reasoned, there isn’t a single electric utility organization, no matter how large, whose level of security isn’t far surpassed by that of any major cloud provider. How could it possibly be otherwise, since cloud providers have to protect thousands of customers and a utility only has to protect itself? The fact that no major cloud breaches had been reported at the time was all the evidence I needed for my position.

Another consideration that reinforced my thinking: FedRAMP is much more stringent than NERC CIP or any other cyber regulation that most industries have to deal with (probably including the U.S. military and the U.S. nuclear industry). If a cloud service provider has that certification (and the big ones all do), what could possibly go wrong?

The answer to that question came last summer, when the Capital One breach was revealed to be the work of a former AWS employee who had been able to penetrate at least 30 companies’ environments on Amazon’s cloud – and had bragged online that no Amazon customer had a particular service configured correctly. This pointed to a serious vulnerability at Amazon, one that would require a major effort – by Amazon and, to a lesser extent, its customers – to fix. And it’s a vulnerability that I’m sure FedRAMP doesn’t address today.

However, I have to admit that I continued to believe the cloud was far safer than any single utility, once AWS and its competitors took care of the problems that led to the Capital One breach. But two more recent news stories have definitely left me wondering about this, because both show that there can in fact be cloud-based attacks that impact multiple customers at once – the dreaded common-mode vulnerability.

One of those stories was by Rob Barry and Dustin Volz in today’s Wall Street Journal (note: The link goes to a non-paywall version of the article, but Rob is worried that may not always be available. Here is a link to a PDF of the article on Rob's personal web site, in case that happens). In the article, the reporters describe in great detail how Chinese attackers had conducted a long-term campaign to infiltrate cloud providers and hop from one customer’s cloud environment to many others’ – all the while stealing terabytes of valuable data. This was something I always believed was impossible. Even Paige Thompson didn’t do this – she attacked each of her victims individually, going through the firewalls (which were their responsibility, not AWS’s, of course) in front of their AWS environments. Ever efficient and resourceful, the Chinese seem to have leapfrogged over what she could do.

The other story was forwarded to me about a month ago by Kevin Perry, retired former Chief CIP Auditor of SPP Regional Entity. This one describes how some cloud-based service providers[i], and managed service providers utilizing the cloud as their computing environment (which I would imagine just about all MSPs do now, given how much more efficient that makes them), have become infected with ransomware and have spread it to their customers.

Kevin pointed out in an email, “Purportedly, the group of schools hit earlier this year in Texas were attacked through their MSP.” This is quite interesting: a number of members of one particular “industry” (education, in this instance) were all attacked through their cloud-based MSP. What would happen if you just substituted “utilities” for “schools” in Kevin’s statement (and maybe “the US” for “Texas”)?

Of course, the solution to the problem illustrated by Paige Thompson and these two latest stories is encrypting the data stored in the cloud – and the BCSI Access Management standards drafting team is making that the centerpiece[ii] of its revisions to CIP-011 and CIP-004. When enacted, these revisions will hopefully allow NERC entities to feel safe storing BES Cyber System Information in the cloud (which a number of entities are already doing today).
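To make this concrete, here is a minimal sketch (my own illustration, not anything the SDT has required or endorsed) of what “masking the data so it’s only readable by users equipped with a software key” might look like in practice: the entity encrypts the BCSI with a key it keeps to itself, so only ciphertext ever reaches the cloud provider. It uses Python’s widely available cryptography package; in a real deployment the key would of course live in a key management system, not in a script, and the file name below is invented.

# Minimal illustration, not an endorsed approach: encrypt a BCSI file locally
# before uploading it, so the cloud provider only ever sees ciphertext.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice this key would be generated and held in a key management system
# or HSM, never stored alongside the data or in the cloud environment itself.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("bcsi_network_diagram.pdf", "rb") as f:   # hypothetical file name
    ciphertext = fernet.encrypt(f.read())

with open("bcsi_network_diagram.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Only the .enc file is uploaded. Anyone without the key (the cloud provider,
# a rogue cloud employee, or an attacker who hops into your tenant) sees noise.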

However, the recent NERC CIPC meeting also made clear that not everybody in the CIP community agrees with the SDT’s approach – some would like more consideration given to using FedRAMP (or possibly other certifications) as the standard of evidence for a cloud provider’s security. If one of those people wishes to state their position to me, I’ll be glad to publish it, even without mentioning their name.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. My offer of a free webinar on CIP-013, specifically for your organization, remains open to NERC entities and vendors of hardware or software components for BES Cyber Systems. To discuss this, you can email me at the same address.


[i] Which isn’t the same thing as a cloud service provider, of course. We’re not talking about MS Azure and AWS here.

[ii] Actually, when John Hansen of Exelon, the chairman of this SDT, spoke to the recent NERC CIPC meeting in Atlanta a few weeks ago, he made clear that they’re not requiring encryption per se, but rather any means of masking the data so that it’s only readable by users equipped with a software key. The new CIP-011 includes a requirement for protection of those keys, which of course is “key” (OK, bad joke) to the success of this effort. You can read their current first draft here.

Sunday, December 29, 2019

Lew Folkerth on remote access compliance and security

Lew Folkerth's most recent article addresses remote access, including compliance with CIP-005 R2.1 - R2.5 and with CIP-005 R1.3. Lew provides his usual mix of good compliance advice and good security advice, and he doesn't particularly try to separate the two: he points out a number of practices that aren't strictly required but that will reduce your compliance risk as well as your security risk - e.g. by documenting that there is no Interactive Remote Access coming into your ESP that doesn't pass through an Intermediate System.
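As one small illustration of the kind of evidence Lew is talking about, here is a hypothetical sketch (mine, not from Lew's article) of how you might sweep an exported list of inbound rules and flag any interactive-access rule whose destination isn't the Intermediate System. The rule format, field names and addresses are all invented for the example; a real firewall export will look different.

# Hypothetical sketch: flag inbound interactive-access rules that bypass the
# Intermediate System (jump host). Rule format and addresses are invented.
INTERMEDIATE_SYSTEM = "10.10.50.5"        # assumed jump host address
INTERACTIVE_PORTS = {22, 23, 3389}        # SSH, Telnet, RDP

inbound_rules = [
    {"name": "vendor-rdp", "dest_ip": "10.10.50.5",  "dest_port": 3389},
    {"name": "legacy-ssh", "dest_ip": "10.10.60.12", "dest_port": 22},
]

violations = [r for r in inbound_rules
              if r["dest_port"] in INTERACTIVE_PORTS
              and r["dest_ip"] != INTERMEDIATE_SYSTEM]

for rule in violations:
    print(f"Rule '{rule['name']}' allows interactive access that bypasses the Intermediate System")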

As always, Lew's articles are worth reading, even if you don't have to comply with NERC CIP for Medium or High impact assets.



Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. My offer of a free webinar on CIP-013, specifically for your organization, remains open to NERC entities and vendors of hardware or software components for BES Cyber Systems. To discuss this, you can email me at the same address.

Thursday, December 26, 2019

Is CIP-013 prescriptive or non-prescriptive? Yes, definitely



Since the beginning of 2019, I’ve been working almost exclusively on helping clients prepare for compliance with NERC CIP-013. I’ve had a number of “Aha!” moments, when I realized something important that in retrospect should have been obvious from the beginning. For example, I’ve known for a long time that three things are true:

  1. CIP-013 R1 is a non-prescriptive requirement. It tells the entity to develop a supply chain cyber security risk management plan (which I once christened SCCSRMP, taking the idea from Kevin Perry. Since this is the only logical acronym, I’ve been surprised not to hear anyone use it besides me. This may be because the most logical pronunciation of it – “Scuzzy Rump” – sounds vaguely obscene). However, beyond saying that the six items in R1.2 need to be included in the plan, it provides no direct guidance other than that the plan must address five areas: 1) procurement of hardware and software components of BES Cyber Systems; 2) installation of those components; 3) procurement of services for BCS; 4) use of those services (I’ll admit this one is a little hard to discern in the wording, but I promise you it’s there, and R1.2 confirms the SDT had this in mind); and 5) transitions between vendors of BCS components.
  2. CIP-013 R2 requires the entity to implement the SCCSRMP. In fact, that is literally all the requirement says.
  3. Many NERC entities have had unfortunate audit experiences with other plan-based CIP requirements, such as CIP-008 R1, CIP-009 R1, CIP-011 R1, CIP-007 R3 and CIP-010 R4. These entities have found out the hard way that it’s important to make sure you implement everything in your plan, regardless of whether something in the plan is directly stated in the requirement or not. If something is in the plan, you need to carry it out, or at least document why you can’t carry it out in a particular instance.
If you had asked me three months ago whether R2 was also a non-prescriptive requirement, I would probably have said yes, since it seems logical that a requirement to implement a non-prescriptive plan is itself non-prescriptive. However, when I combined the above three pieces of information, it dawned on me that this actually makes R2 a prescriptive requirement.

Of course, at first this seems strange (and it would have seemed strange to me even three months ago). After all, when you compare CIP-013 R2 to my poster child for a prescriptive requirement, CIP-007 R2 (with CIP-010 R1 a close number two for that coveted title), there’s a huge difference. CIP-007 R2 tells the entity exactly what they need to do, exactly when they need to do it, and exactly what systems they need to do it to, for a set of perhaps ten different actions (some of which aren’t directly stated, but are implicit when you start looking at the language closely). But CIP-013 R2 just tells you to implement the plan from R1, while R1 tells you nothing about what actions should be included in that plan, and nothing about the timing of those actions.

But as I thought about this, I realized there’s no way R2 could be anything but prescriptive. Since it tells you to implement the plan, the plan itself provides the set of prescriptive requirements that you have to follow in R2. In other words, rather than just following the current wording of R2 (which rivals the great haiku masters in conciseness), you are well advised to effectively “replace” that sparse wording with the “requirements” included in your SCCSRMP. Then you should comply with that wording in the same way you comply with other plan-based CIP requirements, like the five I cited above (BTW, a sixth plan-based requirement is CIP-003-7, which comes into effect 1/1/20).

But note that I said you need to comply with R2 as you would with other plan-based CIP requirements. “Plan-based” is a term I used to use a lot, but now I mostly use the term ‘risk-based’. At first I thought these referred to two different things, but in the last couple of years I came to realize they’re actually the same thing, for the following reason:

A plan always has an objective (which is why I also consider ‘objective-based’ to be synonymous with ‘plan-based’). Unless your organization has unlimited resources to throw at BES security and NERC CIP, you’ll always need to choose between measures you will take and measures you won’t take - because you feel the results achieved by the latter won’t justify the resources required to achieve them.

And how will you decide which measures you’ll take and which you won’t? You will either explicitly or implicitly look at risk – there’s no other way to do it. For each measure, you’ll determine (perhaps without explicitly considering it) the degree of risk it mitigates. You’ll choose to implement the measures that mitigate the most risk and not implement those that mitigate the least risk (in fact, this pretty neatly describes the first half of the CIP-013 compliance methodology that I’ve developed with my clients this year). Because why would you want to spend your resources (money and time) conducting activities that are less effective, when you could spend the same amount of resources and mitigate much more risk? Unless you don’t care whether or not you waste money, of course.
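For what it’s worth, here is a bare-bones sketch of that prioritization logic, with invented mitigation names and scores, just to show the mechanics: score each candidate mitigation by the risk it reduces relative to its cost, then fund items from the top of the list until the budget runs out. This is my own illustration of the general idea, not the actual methodology I’ve developed with clients.

# Bare-bones sketch of risk-based prioritization; names and numbers are invented.
mitigations = [
    {"name": "Vendor security questionnaire", "risk_reduced": 8.0, "cost": 1.0},
    {"name": "Contract security language",     "risk_reduced": 6.0, "cost": 1.0},
    {"name": "Hardware integrity inspection",  "risk_reduced": 3.0, "cost": 4.0},
    {"name": "On-site vendor audits",          "risk_reduced": 5.0, "cost": 6.0},
]

budget = 5.0

# Rank by risk reduced per unit of cost, highest first.
ranked = sorted(mitigations, key=lambda m: m["risk_reduced"] / m["cost"], reverse=True)

selected, spent = [], 0.0
for m in ranked:
    if spent + m["cost"] <= budget:
        selected.append(m["name"])
        spent += m["cost"]

print("Fund these first:", selected)   # everything else is documented as accepted risk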

Since risk-based, plan-based and objective-based requirements are one and the same, where does the prescriptive/non-prescriptive dichotomy fit in? I think a non-prescriptive requirement is the same thing as a risk-based, plan-based or objective-based one. I simply can’t think of a case where it might be otherwise (although if anyone can, I’d like to hear about it). And it’s without a doubt true that a prescriptive requirement could never also be one of these four categories.

What’s the difference between complying with plan-based requirements like CIP-011 R1 and with true prescriptive requirements like CIP-007 R2? In case you didn’t know this, NERC has a very prescriptive auditing regime. The reason CIP-007 R2 is so burdensome (it’s by far the most-complained-about CIP requirement) is that the NERC entity needs documentation of a) every step that was taken, b) for every piece of software in the ESP(s), c) every time it was taken, over the entire audit period. So if an entity has 200 separate software packages in their ESP, and they have to comply with CIP-007 R2 eleven times a year over the three-year audit period, they need to have 200 x 11 x 3 = 6600 pieces of documentation.

And every NERC compliance professional can state, while hanging upside down with their hands tied behind their back in their sleep, the iron rule of NERC compliance: If you didn’t document it, you didn’t do it. Woe betide the unfortunate person who misses one of those 6600 pieces of documentation and then is asked to produce exactly that at audit! Yea verily, all the furies of Hell couldn’t match the anger of that person’s boss, when told the entity has received a PNC finding because of this![i]

And how do you comply with a plan-based requirement, especially CIP-013 R2 (since that’s what this post is about)? In general, you need to decide what the important elements of your plan are and document that you have a program in place to comply with each element. While you will certainly have to retain some documentation of individual instances of compliance, you definitely don’t have to document every instance in which you complied and every system you complied for.

And if you’ll wake up our NERC compliance professional, cut them down from the ceiling and untie their hands, they’ll agree with you that there’s a big difference between complying with a prescriptive requirement and a non-prescriptive one, even if they both require you to take the same set of actions on the same set of systems. So this is one big advantage of the way CIP-013 R1 and R2 are written.

The second advantage is even bigger: After all, you are the one that writes your plan. If you think there’s something in your plan that might be hard for you to comply with (for example, you stated that a particular action needs to be done every month for every system, regardless of the risk posed by that system), what you should do is very simple: Take it out. This is perfectly “legal”, although if you do it after the compliance date, you will need to document why you did that.

Just try that with CIP-007 R2!


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. My offer of a free webinar on CIP-013, specifically for your organization, remains open to NERC entities and vendors of hardware or software components for BES Cyber Systems. To discuss this, you can email me at the same address.

[i] In case you’re going to point out that the solution for having to produce so much audit documentation is to sign on to the Risk-based CMEP program, you can save your breath. I find it hard to believe this cure isn’t almost as damaging – in terms of gallons of stomach acid produced while worrying about compliance (the key measure, IMHO) - as the disease itself.

Sunday, December 22, 2019

Do the NERC CIP standards drive grid investment?


Blake Sobczak of E&E News on Friday published the retrospective below (part of a series of articles on major energy events of the last decade), which put the two Russian cyberattacks on Ukraine’s power grid in their larger context. It’s a very good article, although I don’t think there’s anything in it that will be terribly surprising to anybody reading this post (and Blake said as much when he sent me the text). But I do want to point out one thing:

Midway through the article, Blake says “NERC and the Federal Energy Regulatory Commission took lessons learned from the 2015 and 2016 Ukraine attacks and incorporated them into new cybersecurity rules for the bulk power sector. Changes to the so-called Critical Infrastructure Protection standards brought about hundreds of millions of dollars in new cybersecurity investments across the U.S. grid.”

Actually, the Ukraine attacks haven’t led to any changes in the NERC CIP standards that are currently in effect. One new standard that did result from them – though it isn’t in effect yet – is CIP-013, since in Order 829 FERC pointed to the first Ukraine attack (which had occurred about seven months previously) as one of their reasons for ordering NERC to develop a supply chain security standard.

CIP-013 will go into effect next July, but even then I doubt it will lead to “hundreds of millions” in new cybersecurity investments. As I wrote earlier, any entity that is spending large amounts of money on CIP-013 compliance is probably doing something very wrong. I’ve been working on almost nothing but CIP-013 compliance for a year, and I fail to see any reason for even large utilities to spend huge amounts of money on compliance (that is, anything close to the scale of what they spent coming into compliance with CIP version 5).

Literally all of the risk mitigation activities that I and my clients have identified for CIP-013 compliance are policies and procedures – either on the part of the utility or the vendor. Once you put in place the different parts of your mitigation program – RFPs, contract language, vendor questionnaires, procurement risk assessments, etc. – there is just about zero additional cost to add more mitigations. For example, if you’re already requiring vendors to answer a questionnaire with 10 security questions as part of their response to an RFP, asking them to answer 50 questions doesn’t add much more cost.

Of course, this is a good thing, since the CIP v5 rollout was just the opposite – it was hugely expensive, especially for the biggest NERC entities. However, I wouldn’t call that an investment in grid cybersecurity. A lot of people think that CIP compliance is mostly about buying and implementing software and hardware to enhance grid security. While there is certainly a portion of that, much more than 50% of CIP compliance spending goes to implementing processes and procedures.

The difference between spending on CIP v5 and CIP-013 is that v5 required huge investments in implementing some very prescriptive requirements like CIP-007 R2 (patch management) and CIP-010 R1 (configuration management), while CIP-013 – since it’s entirely risk-based – allows NERC entities to target whatever funds they have available toward mitigating the maximum possible amount of supply chain risk. In other words, the utility doesn’t have to go to the poor house in order to make a significant dent in the supply chain cyber risks it faces.[i]

However, I won’t deny that the power industry does need to make significant investments in grid security, mostly because of all the things that aren’t now required by the CIP standards (and probably never will be, absent a complete rewrite of the standards as risk-based). These include the need for much better network monitoring, the need to make much greater investments in preventing ransomware, the need to address new cloud security risks so NERC entities can start making much more use of the cloud for OT systems, and more. But probably the most significant is the need to start paying much more attention to securing the distribution grid, since that now seems to be the focus of the Russian attacks.

But here’s the rub: This spending would be on top of what utilities are now spending for grid security and CIP compliance. How deep is this well, anyway?

I think we’ve reached the point where we need to acknowledge that grid security is a national responsibility, and should be funded on a national basis. Of course, NERC entities will still have to spend lots of money out of their own pockets (which in most cases are ultimately the ratepayers’ pockets, but in many cases – e.g. the IPPs – every dollar spent comes straight from their bottom line). But these additional investments – and especially the investment in distribution security – need to be funded nationally. After all, the military bases and dams that the Russians (and Chinese) are probing have national importance.

However, at the same time we need to reform the CIP standards and compliance regime, so they are much more efficient and effective than they are now; if you’d like an overview of how I would do this (which doesn’t mention the national funding, but does include the other elements), you can listen to my recent webinar on this topic, or email me to see the slides from that webinar. Here’s the article:

The cursor slid across the Ukrainian grid operator's screen and clicked circuit breakers open, knocking out the lights to thousands of people outside Kyiv.
Someone outside the country was controlling part of its power grid.
Before that night, Dec. 23, 2015, hackers had never managed to douse the lights anywhere in the world. The first-of-its-kind cyberattack redefined the threats facing electric utilities and contributed to billions of dollars in spending on improving U.S. defenses.
The unprecedented cyberattack — later traced to suspected Russian hackers — blacked out about 250,000 people in western Ukraine. Grid operators at the three victim utilities were able to flip breakers by hand and restore power within a few hours, but it would be many months before they could trust computers in any of their control rooms.
The event was a shot across the bow for power utilities globally amid a rapid shift to so-called smart grid technology and internet connectivity. While these digital tools offer power providers the means to improve efficiency and gather reams of valuable data, they have also opened new pathways for hackers to break into critical infrastructure networks.
Impact
A month after the attack, a group of U.S. experts from the departments of Energy and Homeland Security, the FBI, and other agencies traveled to Ukraine to gather more information about what happened. They were joined by representatives from the private sector and the North American Electric Reliability Corp., which sets and enforces mandatory cybersecurity standards for the bulk U.S. power system.
Details of their visit trickled out several months later, when cybersecurity firms started to share a few public takeaways from the investigation. The findings set off alarm bells in U.S. homeland security circles: a Russia-linked hacking group had deployed a variant of the "BlackEnergy" malware to take control of Ukrainian computers and stage a systematic attack on the distribution grid around Kyiv.
A year later, the same hacking crew struck again, this time using highly specialized attack code dubbed "CrashOverride" to temporarily bring down a bigger target: a transmission-level substation north of Kyiv.
Ukraine's grid operator was again able to restore electricity within a matter of hours, but the episode drove home the potential real-world consequences of new dangers posed by connected technology.
NERC and the Federal Energy Regulatory Commission took lessons learned from the 2015 and 2016 Ukraine attacks and incorporated them into new cybersecurity rules for the bulk power sector. Changes to the so-called Critical Infrastructure Protection standards brought about hundreds of millions of dollars in new cybersecurity investments across the U.S. grid.
The attacks also provided a stark backdrop for the establishment of several new cybersecurity agencies in the intervening years, including DOE's Office of Cybersecurity, Energy Security and Emergency Response; DHS's Cybersecurity and Infrastructure Security Agency; and most recently a reorganization of cybersecurity functions at FERC this year.
Crystal ball
A cyberattack is not known to have cut out power to any part of the North American grid.
The only documented U.S. grid cyber disruption to have occurred was in March, but the attack was relatively unsophisticated and limited in scope. That "denial of service" incident caused a series of five-minute communications outages at several wind and solar farms in Utah, Wyoming and California, but stopped well short of causing any blackouts.
Despite the lack of any grid hacking disasters, top American intelligence officials continue to warn of dire consequences from cybersecurity complacency.
Then-U.S. Director of National Intelligence Dan Coats said Russia can cause temporary damage to critical infrastructure networks, "such as disrupting an electrical distribution network for at least a few hours — similar to [abilities] demonstrated in Ukraine in 2015 and 2016."
"Moscow is mapping our critical infrastructure with the long-term goal of being able to cause substantial damage," Coats said in his office's Worldwide Threat Assessment.
A successful takedown of even a small part of the U.S. grid would have far-reaching impacts for security policy, utilities' cybersecurity practices and — depending on who launched the attack — global statecraft.
But the combined efforts of U.S. power companies, intelligence and homeland security professionals are likely to offer enough of a bulwark against the most catastrophic lights-out scenarios.

[i] The CIP-013 methodology I’ve worked out with my clients this year is designed to achieve close to the maximum amount of supply chain cyber risk reduction, given whatever resources the utility has available for the effort. If you’d like to learn more about this, drop me an email.


Opinions expressed in this post are not necessarily those of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. My offer of a free webinar on CIP-013, specifically for your organization, remains open to NERC entities and vendors of hardware or software components for BES Cyber Systems. To discuss this, you can email me at the same address.

Friday, December 20, 2019

Protect our Power 2020


In case you haven’t heard of Protect our Power yet, you should have – and now I’m going to rectify that problem. PoP is an organization dedicated to securing the US grid. A key focus is best practices for cyber security, since they believe the NERC CIP standards are inflexible and take far too long to change – and I won’t argue with that position!

Their annual Best Practices in Utility Cybersecurity conference is always held the day before Distributech begins, and in the same location. In 2020, the conference will be held January 27 in San Antonio, where Distributech opens the next day. I can think of four good reasons why you should attend:

  1. The conference has a great lineup of interesting speakers, which this year includes yours truly (whether I’m interesting is left as an exercise for the reader - my kids are divided on the issue). My topic will be “Supply Chain CIP-13 - Best Practices to pursue while accomplishing Compliance as a byproduct”. Of course, the dirty little secret of my presentation will be that, since CIP-013 leaves the content of the supply chain security risk management plan almost entirely up to the entity, there is no conflict at all between adopting best practices and compliance – indeed, they’re one and the same.
  2. Speaking right before me is the inimitable Monta Elkins, whose topic is “Vulnerability Disclosure”. Knowing Monta, and knowing how important this question is for supply chain security, I’m very much looking forward to hearing him speak.
  3. Distributech is an amazing show. Your $175 (!) fee for the Best Practices conference includes admission to the Distributech exhibition. And if you want to attend the Distributech conference, which runs at the same time in the same convention center, you will receive a 15% discount (I’ve attended the conference several times, and always found it to be very good, including a good cybersecurity track).
  4. But to be honest, my biggest reason for wanting to be at the show this year is…that it will be in San Antonio. The show moves around among several cities, but San Antonio is really special. If you’ve never been there, I can promise you’ll be very impressed. The history is just great. The downtown has great architecture, very well preserved from about 100 years ago (although the city itself was founded in 1719, when the Mission San Antonio de Valero – now known as the Alamo, and just about three blocks from the convention center – was established). And the River Walk…well, I had always thought it must be some sort of hokey tourist attraction. When I finally saw it, I confirmed it’s definitely a tourist attraction, but it’s hardly hokey. It’s about 90 years old and simply beautiful (my favorite activity is running along it in the morning before heading to my day’s activities) – and it’s wonderfully integrated with all of the hotels, restaurants, office buildings, etc. that open onto it.
Hope to see you there!


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. My offer of a free webinar on CIP-013, specifically for your organization, remains open to NERC entities and vendors of hardware or software components for BES Cyber Systems. To discuss this, you can email me at the same address.

Friday, December 13, 2019

CIP-003-7 and CIP-003-8



There have been many confusing developments in the history of the NERC CIP standards, but I have to say that the situation regarding CIP-003 in early 2020 really takes the cake. I’ll try to summarize it, going back to the beginning:

  • When FERC approved the CIP v5 standards in Order 791 in November 2013, they indicated that they didn’t think CIP-003-5 R2 was adequate, because it just listed four policies that needed to be developed for assets containing Low impact BES Cyber Systems (which literally everyone, including me, calls “Low impact assets”, even though that’s not technically correct). FERC wanted more concrete measures to be taken for Lows – i.e. they required NERC to flesh out the four policies with concrete requirements.
  • This became one of the main tasks of the CIP Version 5 Modifications standards drafting team, and they delivered it in CIP version 6. In v6, the four policies for Lows were moved from R2 to R1.2. R2 now became a requirement to have a plan for Low impact assets, which included the four sections in a new Attachment 1. But Attachment 1 is R2, for all practical purposes.
  • When FERC approved v6 in Order 822 in January 2016, they approved parts 1 (Cyber Security Awareness), 2 (Physical Security Controls) and 4 (Cyber Security Incident Response), but they said they wanted part 3 to be changed (while at the same time approving it). In practice, this meant that NERC would have to develop a new version of CIP-003 that included the change. NERC could still have implemented CIP-003-6, followed by CIP-003-7 when it was developed, but the Implementation Plan for CIP-003-7 stated that v6 wouldn’t come into effect (it still wasn’t in effect when v7 was approved). This means that when CIP-003-7 is implemented on 1/1/20, this will be the first time compliance with section 3 has been required.
  • CIP-003-6 introduced two new concepts. Low impact External Routable Connectivity (LERC) adapted the concept of External Routable Connectivity (which was developed with CIP v5, but only applied to High and Medium BCS) to apply to Low assets. Similarly, Low impact Electronic Access Point (LEAP) adapted the Electronic Access Point concept from CIP v5 to Lows. Section 3 in v6 essentially stated that, for every Low asset that had LERC, the entity needed to implement a LEAP – which in practice meant installing a firewall.
  • FERC’s objection was to the word “direct” in the LERC definition. I won’t discuss the objection now, but you can read the discussion in the post I just linked a couple bullet points ago.
  • In the same Order 822, FERC required NERC to develop a requirement for controls of Transient Cyber Assets (TCAs) used at Lows. This became Section 5 of Attachment 1 in CIP-003-7.
  • A new drafting team, the Modifications to CIP Standards team, was constituted to address FERC’s objections in Order 822, as well as a number of other items added to their plate by NERC (including virtualization, which the team is working on at this time). I attended their second or third meeting, which was held in early July 2016 in Chicago.
  • In that meeting, they – unexpectedly to me – decided that the only way properly to address FERC’s request was to radically revise Section 3. It now became essentially a risk-based requirement, requiring that the entity mitigate the risk to BCS posed by the presence of any external routable connectivity at the Low asset (even if only IT assets are routably connected externally). The highlight (for me, anyway) of the requirement was the set of “reference models” for different cases of external connectivity, found in the Guidance and Technical Basis. Since both the LERC and LEAP definitions were retired (without ever having been born in the first place!) when the v7 requirement superseded the v6 one, the reference models essentially became the new “definition” of ERC for low impact assets. The implementation date for CIP-003-7 was set for 1/1/20.
  • Section 3 will be implemented as drafted for CIP-003-7 on 1/1/20. However, when FERC approved CIP-003-7, they said they wanted a change in Section 5, regarding TCAs at Lows. Specifically, they were concerned about Section 5.2, regarding TCAs managed by a third party (e.g. a vendor). This requires “…one or a combination of the following prior to connecting the Transient Cyber Asset to a low impact BES Cyber System…:” It is followed by five bullet points joined by “or”, each starting with the words “Review of…” – for example, “Review of antivirus update level”.
  • FERC’s objection was that just requiring the entity to “review” what the vendor has done to mitigate risks posed by TCAs isn’t enough – the entity needs to take some concrete steps to mitigate any residual risk, if they decide the vendor hasn’t done enough in this regard. Meaning, for example, that if the entity decides the vendor has done a bad job of controlling malware on a TCA, they should take steps like barring its use at Low assets.
  • Of course, this meant NERC had to develop a new version of CIP-003 to include this change. Fortunately, since the CIP Modifications SDT (which developed CIP-003-7) was still in operation, NERC assigned it the task of developing CIP-003-8 (and seeing it through the ballots and comments, of course). In due course, the team developed CIP-003-8, which is identical to CIP-003-7 except for a single sentence added to Section 5 of Attachment 1. This requires the entity to “determine whether any additional mitigation actions are necessary and implement such actions prior to connecting the Transient Cyber Asset.”
  • Because this was such a small change from CIP-003-7, the implementation date for CIP-003-8 was set for 4/1/20.

This is how we came to the situation in which NERC entities with Low BCS will implement CIP-003-7 on 1/1/20 and CIP-003-8 on 4/1/20, even though the two versions are close to identical. In fact, they really are identical, if you just apply a little common sense. In ordering the change that led to CIP-003-8, FERC seemed to be assuming that NERC entities, upon learning that a vendor doesn’t have good anti-malware controls for their TCAs, will completely ignore this information and won’t make the slightest attempt to restrict that vendor from using the TCA in a Low substation or generating plant.

Anybody with the slightest familiarity with how electric utilities (or almost any other company with a competent IT department) operate when it comes to preventing malware infections on their networks will know that discovering a vendor isn’t doing a good job of preventing malware on its TCAs will set off alarms all over the place. The vendor will probably not be allowed to bring any laptop at all onsite until the utility is absolutely sure the problem has been fixed; in fact, the vendor might be terminated for this reason. The impact of a malware infection is so great that no competent organization would even consider not taking drastic measures in a situation like this.

In other words, the wording in CIP-003-7 was fine as it was. It would be nice if it had included more than just requirements for “review”, but it’s 100% certain that it would have been interpreted – by entities and auditors alike – as requiring mitigation of any problems found in the review. In fact, I think the problem was actually addressed in CIP-003-7. The five bullet points in Section 5.2 that begin with “Review” are followed by a bullet point that reads “(or) Other method(s) to mitigate the introduction of malicious code.” This could have been worded better, but I think it is essentially saying that, if the review turns up a problem, the entity must deploy other method(s) to fix it – such as banning the vendor from the substation or plant altogether.

As it is, because FERC was so worried that NERC entities are, well, stupid enough not to do anything when they know a TCA may be full of malware, NERC was forced to do the following:

  1. The SDT had to debate the change and come up with a first draft.
  2. The draft had to be submitted to the NERC ballot body for a ballot and comments. It didn’t pass.
  3. A second draft had to be developed, which was then approved.
  4. It then was cleaned up by the legal staff, and submitted to the Board of Trustees for their approval.
  5. The Board approved it, and it was submitted to FERC for their approval.
  6. FERC finally approved it – this time without asking for any further changes! – earlier this year.
  7. Finally, the entities will need to implement CIP-003-8 on 4/1/20, even though it’s 100% certain that their procedures will need no change at all (although some documentation will have to be changed, and everyone will need to be reminded on 4/1/20 that from now on they’re complying with CIP-003-8, not CIP-003-7). And I can assure you that NERC entities with Low assets have spent a lot of time worrying about this change and whether it’s really as simple as it looks, or whether there’s actually some monster hidden in the added sentence, that’s going to jump out and bite them in an audit three years from now.
Tell me, do you think the Bulk Electric System will be one bit more secure when CIP-003-8 is implemented than when CIP-003-7 is implemented? I didn’t think so; neither do I. This was an entirely unproductive exercise. FERC shouldn’t have required the change in the first place. However, if NERC’s Rules of Procedure had allowed a minor change like this to be developed and inserted in a four-hour meeting (which is probably all that would have been required), rather than requiring an effort by a number of people spread out over about a year's time, FERC’s directive wouldn’t have required much work and time to address.

And here, Dear Children, lies the moral of our story: nobody is to blame! Everybody was doing their jobs. FERC’s rules don’t allow a badly worded requirement to stand, even if its real meaning is crystal clear to everybody; it has to be changed, period. And NERC’s Rules of Procedure don’t allow a standard to be modified once it’s been approved by the Board of Trustees. Given that, it was inevitable we would go through this whole process, even though probably every single person involved realized it was all a waste of their time.

And since I’m a Big Picture sort of guy, here’s the Big Picture: This is the problem across the board with NERC CIP. A huge amount of time is spent (by NERC, FERC, the Regions, and NERC entities) developing, approving, complying with and enforcing requirements that are often wildly inefficient and – in cases like this – completely ineffective. But thank God, everybody is doing their job!


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. My offer of a free webinar on CIP-013 specifically for your organization remains open to NERC entities and vendors of hardware or software components for BES Cyber Systems. To discuss this, you can email me at the same address.

Wednesday, December 11, 2019

Open email to Karen Evans


After my last post, I received an email from two retired former high-level cybersecurity officials in DHS and DoE (both of whom I know, although not well), suggesting we talk. We did so today, and they impressed upon me the need to do something more than complain (which I must admit is kind of my default mode) about the lack of an investigation of the reported Russian cyber penetration of the US power grid.

They first suggested that I reach out to the Electricity Subsector Coordinating Council, so they could discuss the need for an investigation at their next meeting; however, I pointed out that I’m not on a first-name basis with any CEOs of major utility organizations. They then suggested that I send an email to Karen Evans to suggest that DoE itself investigate. This made a lot of sense to me – it’s certainly worth a try.

Below is the email I just sent to Ms. Evans (I will send similar emails to both Illinois Senators, Dick Durbin and Tammy Duckworth – since I reside in Illinois). I’ll let you know if I receive a substantive response. I hope I do.


Dear Assistant Secretary Evans:

I am a longtime cybersecurity and compliance consultant to the electric power industry. I was very impressed with your speeches to NERC GridSecCon 2018 and 2019. This year, I was especially impressed that you pointed out to the audience the passage in the 2019 Worldwide Threat Assessment, which states that the Russians currently have the ability to “…generate localized, temporary disruptive effects on critical infrastructure—such as disrupting an electrical distribution network for at least a few hours similar to those demonstrated in Ukraine in 2015 and 2016.”

This has been said in different terms by DHS, the Wall Street Journal, Symantec and the former Deputy Director of the NSA, as pointed out in my recent blog post discussing your speech (this post and its predecessors received a lot of attention in the power industry).

While you didn’t discuss this passage in your speech, it seemed you were urging the industry to take some sort of action. I agree that the industry should do this, but currently they know nothing about a) the identifiers of the malware that has presumably been implanted in their networks by the Russians; or b) how the Russians were able to get in to implant it.  If they knew the former, they would hopefully be able to find and root out the malware from their networks; if they knew the latter, they would be able to protect their networks from further penetration by the Russians.

However, neither the WTA nor the other sources I mentioned provided any of this information. It can only be obtained through an investigation of the electric utility networks that may have been affected. But here is the amazing part: No organization (governmental or non, although one would normally expect the Federal government to take the lead in doing this, as they did for the Ukraine attacks) has even launched such an investigation, let alone produced this information.

This contrasts remarkably with the Ukraine attacks. In both cases, investigators from multiple organizations in the US (including DoE) jumped on planes for the Ukraine seemingly within hours of the news of the attacks. Within days, they were producing various reports to the power industry. Within weeks, they were conducting both classified and unclassified briefings across the country, to let the industry know what to look for on their networks and how the attacks were perpetrated, so they could remove the malware and strengthen their defenses against similar attacks here.

In marked contrast to the Ukraine attacks, the WTA has been out since January, and there have been no investigations, no reports and no briefings (classified or unclassified). And of course, in this case we’re talking about an attack on the US, not a foreign country! This is beyond bizarre. Of course, one big difference is that the Ukraine attacks caused outages, whereas the attacks on the US haven’t done that yet (as far as we know). Does this mean our policy is to wait for the Russians to cause outages and then investigate? If so, this is a very sorry state of affairs.

Since DoE is the Sector-Specific Agency for the US electric power industry, I respectfully suggest that DoE undertake this investigation. Perhaps the investigation will determine that the reports were all misinformed and the Russians haven’t been able to place malware in the US power grid; this would definitely be the best result. But until this is done, the power industry is going to live under the suspicion that the grid can’t be trusted because it’s riddled with malware. This will lead to more proposals like Richard Clarke’s (mentioned in my post linked at the beginning of this email) that we spend hundreds of billions, or even trillions, of dollars building a completely “clean” and safe grid. This is of course an incredibly huge effort, but how can we be sure it isn’t needed, if we don’t investigate the government’s own statements?

Of course, I will be pleased to discuss this further with you or your representatives.

Respectfully yours,

Tom Alrich
  


 As always, you can discuss this post with me by emailing tom@tomalrich.com.




Friday, December 6, 2019

News from the Russian front


One of my favorite experiences during NERC GridSecCon 2018 was hearing from Karen Evans, who earlier that year had become Assistant Secretary of DoE and head of the then-new DoE Office of Cybersecurity, Energy Security, and Emergency Response (CESER). She had received a lot of very good press, partly due to her appearances before Congress when she came into that role. And she didn’t disappoint when she spoke last year. She is quite dynamic, but also clearly someone who doesn’t just talk a good game, but executes a good game as well.

She returned to GridSecCon this year, and once again made a very good speech. I was most struck by one thing that she urged her listeners to do: read the bottom of page 5 and the top of page 6 of this year’s Worldwide Threat Assessment, which was presented to the Senate in January by the Director of National Intelligence and the directors of the FBI and CIA. Here is the section that she referred to:


“We assess that Russia poses a cyber espionage, influence, and attack threat to the United States and our allies. Moscow continues to be a highly capable and effective adversary, integrating cyber espionage, attack, and influence operations to achieve its political and military objectives. Moscow is now staging cyber attack assets to allow it to disrupt or damage US civilian and military infrastructure during a crisis and poses a significant cyber influence threat—an issue discussed in the Online Influence Operations and Election Interference section of this report.

Russian intelligence and security services will continue targeting US information systems, as well as the networks of our NATO and Five Eyes partners, for technical information, military plans, and insight into our governments’ policies.

Russia has the ability to execute cyber attacks in the United States that generate localized, temporary disruptive effects on critical infrastructure—such as disrupting an electrical distribution network for at least a few hours—similar to those demonstrated in Ukraine in 2015 and 2016. Moscow is mapping our critical infrastructure with the long-term goal of being able to cause substantial damage.” (my emphasis)


Ms. Evans didn’t say much if anything about this passage, except that everybody should read it. Of course, the last paragraph is the one that she was undoubtedly most concerned about.

This isn’t news to any of us, so why am I even bothering to bring this up now? Before I tell you why, I want to point out that this isn’t the first set of disturbing reports about Russian cyber activity against the US power grid. The other reports include:

  1. DHS’s briefings on Russian supply chain attacks on the power industry in July 2018.
  2. A Wall Street Journal article in January that a) described a different wave of Russian attacks through the supply chain, this one utilizing phishing emails, and b) quoted Vikram Thakur of Symantec as saying that “..his company knows firsthand that at least 60 utilities were targeted, including some outside the U.S., and about two dozen were breached. He says hackers penetrated far enough to reach the industrial-control systems at eight or more utilities.” (my emphasis).
  3. E&E News reported in May that 200,000 “implants” (i.e. pieces of malware) had been installed in US water, gas and oil, and electric power infrastructure, according to the former deputy director of the NSA. Who did this, you ask? Who else, but our good friends in St. Petersburg and Moscow?

Given this, if you dropped into the US from, say, Mars, you would be amazed if I told you there has been no activity (discernible to me or anybody else I know, which includes a number of people with security clearances and an indisputable need to know about any malware implanted in the grid) to root out the malware that has been implanted, or at least to investigate whether the reports are true or not.

Of course, it’s possible that all of the people mentioned above have been misled in some way, or they just don’t have the technical knowledge required to make statements like this – and there is no truth at all to these reports. That’s why I’ve repeatedly called for an investigation by some body (part of the government or quasi-government, like NERC), to find out once and for all whether these reports are true. Maybe they’re all completely false, in which case everybody can sleep well from now on (or at least this will be one thing that won’t keep us awake at night. The Lord knows there are lots of others!). But until there’s an investigation, we have to believe there’s some truth to them, and the Russians could cause power outages in the US at any moment.

But if you’ve been reading this blog for a while, you must know that my calls for an investigation have fallen on totally deaf ears. I’ve heard no confirmation that any organization is investigating this, or that any organization is even considering doing so. Again, why am I bringing this up again? Why don’t I just drop the subject and do what a lot of others seem to be doing nowadays – making my accommodation with Russia, since they seem intent on having their way with us and we seem intent on letting them do it (of course, that’s natural. Their economy is about half the size of California’s, but they do have one thing that California doesn’t – a huge nuclear arsenal)? And here they even tried to give me a medal, which I would have accepted if they hadn’t asked me to come to the Russian embassy to accept it. Knowing what happened to Mr. Khashoggi at the Saudi consulate in Istanbul, I decided that the medal wasn’t worth it.

So what has changed? Yes, Karen Evans pointed out the WTA passage for the .002% of the people in the hall who hadn’t already heard about it – but does this bring us any closer to an investigation? I’ll admit it probably doesn’t, but what I found significant is that it demonstrates conclusively that the two biggest reasons people have proffered to me for the lack of an investigation are invalid.

I’ve heard a lot of reasons why there’s no investigation (most of which are put forth by people who are naïve, but some of which may point to a murkier motive), and I hope to write a post one of these days listing them all – I’ve heard at least 15 so far. However, by far the two most common reasons are:

  1. There’s been an investigation, but the results are classified; and
  2. Appropriate agencies are actually working on this, but they’re not at the point yet where they can reveal any findings.

Both of these reasons can be easily debunked, but Ms. Evans’ talk in October did that for me: she wouldn’t have asked everyone to look at the WTA if she thought either one was true, and if either were true, she would certainly have known about it.

So why did she bring this up? Is she thinking we all need to press harder for an investigation? There’s clearly nothing the industry can do from a technical point of view – that they’re not doing already, of course – without knowing something about the malware that’s implanted and how it got there. It’s not likely the Russians named the malware files Russianmalware1, Russianmalware2, etc. The industry got specific information on the malware – in unclassified and classified briefings – within a few weeks (if not less time) of the Ukraine attacks. But here we are almost 11 months after the WTA came out, and there hasn’t been any word at all about the malware referred to in that report. And the WTA is talking about threats to the US, not the Ukraine!
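To make the point concrete: hunting for an implant generally starts from indicators of compromise (IOCs) such as file hashes, and that is exactly the information nobody has released. Below is a hypothetical sketch of the kind of simple sweep a utility could run if it had those hashes; the directory and the (empty) hash set are invented for the example.

# Hypothetical sketch: sweep a directory tree for files matching known-bad hashes.
# Without published IOCs, the known_bad set stays empty and the sweep finds nothing.
import hashlib
from pathlib import Path

# SHA-256 hashes from a (so far nonexistent) government advisory would go here.
known_bad = set()

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

for path in Path("/opt/scada").rglob("*"):    # hypothetical directory to sweep
    if path.is_file() and sha256(path) in known_bad:
        print(f"Possible implant: {path}")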

So I assume Ms. Evans wants us to press harder, and I’m happy to oblige her. In fact, I’d like to press her on this. One of the agencies that would be near the top of my list to do this investigation is Idaho National Laboratory, which is of course part of DoE. Why doesn’t she talk to them about doing it (although I know she doesn’t have direct authority over INL)?[i]

If I hear anything more on this, you’ll be the first to know!


Postscript
You might be inclined to think that it’s no big problem if these reports aren’t investigated, since nobody – in the power industry or the general public – seems too concerned about them. But here you’re wrong. This was pointed out to me by a book review that appeared in the Wall Street Journal on August 8 (I’ve had the print copy sitting on my desk since then, thinking I’d soon get time to write about it).

The review was of a book called “The Fifth Domain”, by Richard A. Clarke and Robert K. Knake. It’s about foreign cyberattacks on US private infrastructure. It contains this paragraph:


“The authors propose a new backup national power grid that would not be connected to the internet. Without it, they say, the U.S. is defenseless against “somebody like the Russian GRU, engaging in a cyberattack that would technologically revert us to the nineteenth century, but without all the equipment that people in the nineteenth century had to deal with life in a society without electricity.”


In other words, the authors of this book (and Richard Clarke is a very well-known figure with lots of high-level government experience) believe the US grid is so untrustworthy that we need to take the drastic step of building an entire backup grid that won’t be connected to the internet and therefore isn’t likely to be infected with all of the malware, etc. that the current grid is infected with.

Of course, this proposal is very unlikely to get anywhere, since it would require an absolutely enormous expenditure. But if sophisticated, well-connected people like Richard Clarke believe this needs to happen – in part because of reports that the current grid is already riddled with Russian malware – it’s almost inevitable that sooner or later there will be some call for other steps, such as taking the security of the grid completely away from NERC and FERC and handing it to the military, and that call will meet with real approval. At that point, it will be quite hard for the power industry to argue that it’s absolutely sure the grid is very secure – except, of course, for all the reports that say it isn’t, which haven’t been investigated at all. "Just trust us: other than the malware discussed in those reports, our grid is completely secure!"


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. My offer of a free webinar on CIP-013, specifically for your organization, remains open to NERC entities and vendors of hardware or software components for BES Cyber Systems. To discuss this, you can email me at the same address.


[i] I might have asked this question after her talk, but there was no time for questions.