Monday, September 30, 2019

The SCWG white papers are up!



I started talking about the NERC CIPC's (Critical Infrastructure Protection Committee's) Supply Chain Working Group in this blog back in March, including in this post. In that post, I described the five white papers the SCWG was working on, and I confidently stated that they would be published in June. Well, the wheels of NERC grind slowly, but they grind fine (most of the time). Here it is the last day of September, and the papers were posted today on NERC’s web site. To find them, go here and expand “Current/Approved Security Guidelines” under “CIPC Security Guidelines”, then expand the “Supply Chain” item.

You’ll see a list of ten PDF files. Each of the five white papers (which are called Guidelines now, of course) has both the paper itself and slides describing the paper, which we presented before the CIPC meeting in Orlando in June.

I led the development of two of the papers: Risk Management Lifecycle and Vendor Risk Management Lifecycle. However, all of the papers are worth reading, whether your organization has to comply with CIP-013 or not; they were all put together in many meetings among SCWG members, including NERC entities and vendors. There are two more papers in the approval process, regarding risks related to cloud computing and vendor cyber incident response plans. These have both been finalized by the SCWG sub-groups that drafted them; however, given the normal approval process (including a comment period for CIPC members), I’d say those probably won’t be out until the end of the year.

The SCWG does plan to create more papers, but they definitely won’t be published this year. In fact, since these papers took six months from start of development to posting, this means any further papers might not be posted before July 1, 2020, when CIP-013 compliance is due.

However, this isn’t the tragedy I once would have thought it to be. Since CIP-013 R1 just requires the entity to develop a supply chain cyber security risk management plan (which includes the six mitigations in R1.2), almost any plan you develop by 7/1/20 will be compliant. But every NERC entity should be thinking about more than just CIP-013 compliance, since IMHO supply chain security is the number one cybersecurity problem in the world today, especially for the electric power industry. Look at Target, Stuxnet, NotPetya, Delta Airlines, and now Airbus. Look at the Russian attacks on the grid through the supply chain, as detailed by DHS and the Wall Street Journal. Look at last week’s article in the Times by Bruce Schneier.

Even if you think it’s too late to change course for compliance on 7/1/20, you can still make changes to your CIP-013 program after that date; you just need to document what you’re changing and why. The point is that you want your plan to be as efficient as possible (i.e. it mitigates the most supply chain cyber risk possible, given your available resources) and as effective as possible (it mitigates the risks that are most important for your organization’s BES Cyber Systems, which will usually differ from another utility’s). It’s never too late to do that, and the auditors certainly aren’t going to complain if you make a midcourse correction after 7/1/20 to improve your plan’s efficiency and effectiveness.

I’ve heard that one or two major utility organizations are planning on investing huge sums and lots of hours in CIP-013 compliance.  If that’s true, I suspect that they’re doing something wrong. Everything I’ve seen so far with my CIP-013 clients leads me to believe that mitigating the risks in CIP-013 is almost entirely a matter of policies and procedures, both for your organization and for your vendors/suppliers. Nothing requires you to install expensive systems or deploy legions of consultants. This is very different from the CIP v5 rollout, since CIP-013 is very different from CIP-002 through CIP-011.

A disclaimer on each paper says (with my correction of two improper verbs, which I hope will be corrected soon on NERC’s web site) “Reliability guidelines are not binding norms or parameters to the level that compliance to NERC’s Reliability Standards is monitored or enforced. Rather, their incorporation into industry practices is strictly voluntary.”
I’ll translate this from NERC-speak (I’m a fluent translator of NERC-speak, although I’ve so far resisted trying to speak it on my own; I’ll leave that to the experts at NERC): nothing in these guidelines (or ones to come) constitutes binding guidance for NERC entities or auditors as they comply with or audit CIP-013. In fact, the only document officially designated as guidance for CIP-013 – the Implementation Guidance prepared by the Standards Drafting Team – doesn’t provide anything binding either.
The SCWG’s papers are guidelines to help you develop your plan; they’re not there to tell you how to comply with CIP-013. But guess what? Complying with CIP-013 requires developing and implementing a good supply chain cyber security risk management plan. If you develop a good plan and implement it properly, you’re compliant. Period, end of story. With CIP-013, unlike with any of the current CIP standards, compliance = security. And if that doesn’t make you happy, I don’t know what will.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. My offer of a free webinar on CIP-013, specifically for your organization, has received a great response, and remains open to NERC entities and vendors of hardware or software components of BES Cyber Systems. To discuss this, you can email me at the same address.


Friday, September 27, 2019

Another big supply chain breach



Kevin Perry, retired Chief CIP Auditor of the SPP Regional Entity, sent me a link today to this article about a breach at Airbus in which a lot of technical data was exfiltrated, presumably to China. By the way, I love this wonderful formula for success: stealing your way to technical competence. I don’t know why I didn’t think of that earlier in life, when it might have done me some good.

Of course, this breach came through the supply chain, in fact through multiple vendors. The attackers penetrated the suppliers’ networks (evidently not too hard to do) and then gained remote access to the VPNs connecting the suppliers to Airbus (which, it seems safe to assume, weren’t protected with multi-factor authentication).

And in case you might be inclined to dismiss this attack as not too relevant to critical infrastructure – since it was data, not operations, that was affected – I want to point you to the last paragraph of the article, which noted that a debilitating malware outbreak at a key supplier this year had an impact on Airbus’ production (one downside of just-in-time procurement, of course).

Whenever a NERC entity (or really any organization) hears about an attack on another company, the question to ask is: What is the threat that’s the basis of this attack, and how can we mitigate it ourselves? This question applies especially to CIP-013 entities, since R1.1 requires you to “identify and assess” supply chain security risks (although I prefer “threats”, following NIST 800-30) to the BES. Here’s one you should identify!

The threat is that a supplier’s network will be penetrated by a malicious third party (or by a rogue insider), who will then be able to gain access to your network through the vendor’s communications channels into your environment. Moreover, it doesn’t matter much whether it’s your IT or your OT network that’s penetrated first. If your IT network is owned by someone like the Chinese, they will ultimately figure out a way to get into the OT network – as in the first Ukraine attack, when the Russians were in the Ukrainian utilities’ IT networks for many months before they discovered that the HMIs for the substations were located on the IT network. That was a gift to the Russians, due to poor security practices by the utilities, but the larger point stands: if a nation-state attacker is in your IT network long enough, they will very probably find a way into your OT network.

Of course, this is the second case in two posts where there’s a threat to utilities (or any organization) that results from penetration of a supplier’s network by the bad guys. In my last post, the threat was that the bad guys will penetrate the development or manufacturing environment of a supplier and substitute a hardware or software component that contains malware. This time, it’s that they’ll utilize the supplier’s access to your network to penetrate your ESP and cause havoc.

In my opinion, NERC entities should strongly consider mitigating threats that result from penetration of suppliers’ (and vendors’) networks as part of their CIP-013 plans. These aren’t idle threats, of course. Keep in mind that last summer DHS raised the alarm in a webinar that at least 200 suppliers to the power industry had been penetrated by the Russians, and that the attackers had succeeded in penetrating power industry players that way. (That part was walked back by DHS two days later – but the walkback was contradicted by statements made in a second DHS webinar that very morning. Moreover, DHS walked back the first walkback a week later – at a briefing in front of Mike Pence, Rick Perry and Kirstjen Nielsen, at the time Secretary of DHS – with a new story that was incompatible with the previous ones. I heard a third walkback at a meeting in McLean, VA last September. And to top it off, when I questioned a DHS official during a meeting at the RSA Conference this March, he stammered out a bunch of strange excuses for this, and for not investigating the Worldwide Threat Assessment. I haven’t had time to write about that odd evening yet, but a reporter from Dark Reading did, without knowing my name. Does it seem there might be something strange going on at DHS?)

And don’t forget the Wall Street Journal article from January, which detailed how the Russians are using phishing emails to penetrate supplier networks, and through them utilities. So it’s very important to take steps to make sure your suppliers are themselves following good security practices. How do you do this? There are lots of ways. Contract language is one, but hardly the only one. RFP terms and questions are another.

I think vendor questionnaires are an excellent tool, since they not only provide you with information on what your supplier is doing, but they let the supplier know what steps you think they should be taking. And having annual “security reviews” with important suppliers allows you and the supplier to freely exchange information about your security practices, as well as your mutual security objectives. The best thing about these is they shift the discussion from an adversarial one to a partnership, where you’re working with them to secure both of your environments.
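To make questionnaire results comparable across suppliers, the responses can be rolled up into a simple weighted score. Here is a minimal sketch of one way to do that; the questions, weights, and scoring scale are my own illustrative assumptions, not drawn from CIP-013 or any NERC guideline.

```python
# Illustrative sketch: tallying vendor security questionnaire responses
# into a comparable score. Questions and weights are hypothetical examples.

QUESTIONS = {
    "mfa_on_remote_access": 3.0,   # weights reflect assumed relative importance
    "patch_sla_documented": 2.0,
    "incident_response_plan": 2.0,
    "background_checks": 1.0,
}

def score_vendor(responses):
    """Return the weighted fraction of questions answered 'yes' (0.0 to 1.0)."""
    total = sum(QUESTIONS.values())
    earned = sum(w for q, w in QUESTIONS.items() if responses.get(q, False))
    return earned / total

acme = {"mfa_on_remote_access": True, "patch_sla_documented": True,
        "incident_response_plan": False, "background_checks": True}
print(f"Acme score: {score_vendor(acme):.2f}")  # 6.0 of 8.0 -> 0.75
```

A score like this is just a conversation starter for the annual security review, of course – the real value is in the follow-up discussion, not the number.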




Wednesday, September 25, 2019

Bruce Schneier on supply chain risk



The Times ran a very worthwhile article by Bruce Schneier today. In case you don’t know, Bruce is one of the great figures in cybersecurity. I’ve sometimes felt his articles were a little overblown in the past, but this one is spot on. He isn’t saying that all the computing or telecom equipment we buy nowadays is riddled with back doors, but rather that there is currently no purely technical solution to this problem – so even if everything we buy has a back door, it’s very unlikely we’d ever know it, until of course bad things start happening.

While he doesn’t take the next step, I will for him: Organizations need to start assuming there are back doors in almost all hardware and software they buy, and perform a risk assessment before installing any hardware or software that might perform a critical function, such as – need I say it? – any system that could impact the Bulk Electric System.

A lot of the discussion on CIP-013 compliance has focused on contract language, and specifically language implementing requirements R1.2.1 – R1.2.6. This is understandable, since the path to successful NERC CIP compliance up until CIP-013 (with a few exceptions like CIP-014 and CIP-003-7/8) has been to be laser-focused on the exact wording of the requirements.

But let’s be clear: R1.2.1-R1.2.6 don’t do anything to mitigate the backdoor problem (R1.2.5, covering authenticity and integrity of patches and other software obtained electronically, makes sure that what you download is exactly what the supplier developed. But it doesn’t cover the case that the supplier’s development process itself was compromised, as happened to Juniper Networks in 2015 and to Delta Airlines last year – and which is of course what Bruce is writing about in this article).

R1.1 requires you to “identify and assess” supply chain risks to BCS. Since there are at least thousands of those risks, your job (Mr/Ms NERC entity) is to determine which risks are the most critical and mitigate those. The risk of backdoors in hardware and software will hopefully be among the risks that you choose to mitigate.

The tools for protecting your BES Cyber Systems (or any system, of course) from backdoors are all procedural: Don’t buy from unauthorized or unknown vendors; make sure hardware products are shipped securely to you (tamper tape, etc); perform a risk assessment before installing or upgrading any BCS hardware or software; etc. And not only should you follow those procedures, you should require your suppliers and vendors to do the same, through contract language and other means. In fact, you should also require your suppliers and vendors to have a supply chain cyber security risk management program, so that they require these same procedures of their suppliers.

Even though Bruce’s article isn’t aimed at the power industry, it’s one of the two industries he specifically mentions in his concluding sentence: “The risk from Chinese back doors into our networks and computers isn’t that their government will listen in on our conversations; it’s that they’ll turn the power off or make all the cars crash into one another.”




Sunday, September 22, 2019

Lew Folkerth: Everything you always wanted to know about CIP-013, but were afraid to ask! (part 1)



I must admit this post is a little late. Lew Folkerth of RF published his most recent (and perhaps, for the moment at least, his last) article on CIP-013 in the RF newsletter almost two months ago. Usually, I put up a post on anything he says between 3 nanoseconds and a day after the newsletter comes out (OK, the posts have been averaging longer than that – but why spoil a good story?). But what can I say? I enjoyed the summer while it lasted, and I’ve been busy with…what else?...CIP-013 clients.

You can find the article here. In fact, when you download that PDF you’ll also receive – absolutely free! – the full set of five articles that Lew has written on CIP-013 and supply chain security (such a deal!). I’ve asked Lew to autograph a set of those PDFs, which I could sell for $39.95 on late-night TV, but I haven’t heard back from him on that attractive proposition. Probably some silly NERC ethics rule or something like that. Or perhaps he still hasn’t figured out a way to autograph a PDF file.

Lew’s article was in theory going to be about CIP-010-3 R1.6, which will come into effect at the same time as CIP-013-1 itself and essentially repeats CIP-013 R1.2.5 – although with a big (and quite unfortunate, IMHO) difference. However, upon reading this article, I realized that, while it leads with the discussion of R1.6, it doesn’t say anything notable about it, other than to repeat the requirement language.

Of course, this isn’t necessarily a bad thing; perhaps there’s nothing too important to say about CIP-010-3 R1.6. However, the rest of the article provides a lot of important information. In it, Lew responds to a number of questions that NERC has received on CIP-013 (and if you aren’t clear on this, RF and the other five Regions – yes, with the recent dissolution of FRCC, there are now a total of six Regions, vs. eight two years ago - are part of NERC). And his answers are very much worth paying attention to, if your organization will have to comply with CIP-013. I’ll discuss each one of them.

The first question he answers is “How many levels (tiers) of vendors must an entity consider for CIP-013-1 compliance?” This refers to the fact that each of your organization’s immediate vendors has their own vendors that contribute to the product delivered to you; those vendors have their vendors, and so on. (By the way, if you haven’t yet decided to distinguish between vendors and suppliers, I highly recommend you consider it. It will somewhat complicate procedure writing – as I’ve found with my clients – but it provides a huge advantage, in letting you target your mitigations much more efficiently and effectively, as I pointed out in this post.) Where do you draw the line? Do you have to go all the way back to the company that mined the sand that was used to make the silicon chips?

Lew correctly begins his answer by pointing out that CIP-013 doesn’t provide any guidance on this question – or just about any other question you might have about it, of course. But he goes on to provide what I found to be a very surprising but provocative answer. Before I get to it, I want to tell you what I’ve been telling my clients, since it provides a good contrast with Lew’s approach. The question of third-party software risk is one I’ve looked at a lot as part of the NERC CIPC Supply Chain Working Group (SCWG), for which I led the development of a white paper on Vendor Risk Management (which I’m told should be posted on NERC’s website soon, along with four other papers we developed earlier this year). It’s also one I’ve discussed with my CIP-013 clients as an important supply chain risk they should consider for mitigation in their R1.1 plans. So I hope you’ll excuse me if I delve into this in some detail; I find it a really interesting topic.

Tom’s approach to third party software risk
I’ve been telling my clients that they shouldn’t even try to reach out directly to any entities other than their current suppliers. What they need to do is make sure they’ve identified the important risks that arise through third-party suppliers, and work with (or lean on, since that will often be necessary) their direct suppliers to mitigate those risks.

For example, consider the risk that a third-party software developer, which provides code that your supplier incorporates into their software product, identifies a serious vulnerability in their code and issues a patch for it – yet your supplier doesn’t apply the patch. This means the software product they ship you (or already have shipped you) will contain that vulnerability, just as if it were in your supplier’s own code. Clearly, when the third party issues their patch, your supplier should quickly incorporate it into their product. How can you ensure this will happen?

One possible step is to include in an RFP and/or contract language a provision that your supplier must patch their own code within, say, a month of receiving a patch from the third party. Of course, if the vulnerability were in your supplier’s own code, you would want a patch much more quickly than that! So does this mitigate the risk?

It is certainly advisable to include such a provision in your contract language, but there’s one problem: how will you verify your supplier is doing this (of course, you always need to take some steps to verify that your supplier is honoring the contract terms they agreed to. This is a matter of both good security practice and CIP-013 compliance)? You certainly know who your supplier is, but do you know all of their suppliers – specifically, all of the third parties from which they purchase code to include in their product?

Wouldn’t it be nice if the developers of the software that runs your BES Cyber Systems disclosed their ingredients in the same way that every jar of applesauce you buy does? Yet how often do you see this list of ingredients on a developer’s web site? (From everything I know, the practice of a software developer purchasing code developed by third parties and compiling it into their own product is widespread. The considerations are very similar for pieces of open source software that end up being included – there are hopefully patches for those as well.) The answer is: probably never.

Does this mean you simply have to trust your supplier to always apply third party patches – i.e. you can’t do anything more than that? No, you can ask for something that’s being discussed a lot nowadays (there were at least four or five sessions on the concept at the RSA Conference this year): a software bill of materials (SBoM). This is just like it sounds: The supplier tells you the name, supplier and version number (and perhaps other information as well) of every piece of third-party code in their product. Armed with that information, you can research vulnerabilities or other issues in each piece of third party code in the product you just bought (or are considering buying). Even more importantly, you can get on the third party’s email list for information on new patches they have issued – and send a friendly email to your supplier asking them when they plan to apply that patch to their product.
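Once you have an SBoM in hand, cross-referencing it against the third parties’ patch announcements can be largely mechanical. The sketch below illustrates the idea; the flat component list and the patch feed are hypothetical stand-ins (real SBoMs use formats such as SPDX or CycloneDX, and real vulnerability data would come from the third parties or a monitoring service).

```python
# Sketch: comparing a (simplified, hypothetical) SBoM against the latest
# patched versions published by each third party, to find components your
# supplier has not yet updated in its product.

sbom = [
    {"component": "openssl", "version": "1.1.1c"},
    {"component": "zlib",    "version": "1.2.11"},
    {"component": "libxml2", "version": "2.9.9"},
]

# Hypothetical feed of the latest patched version of each component.
latest_patched = {"openssl": "1.1.1d", "zlib": "1.2.11", "libxml2": "2.9.10"}

def stale_components(sbom, latest):
    """Return components whose shipped version differs from the latest patch."""
    return [c["component"] for c in sbom
            if latest.get(c["component"]) not in (None, c["version"])]

print(stale_components(sbom, latest_patched))  # ['openssl', 'libxml2']
```

Each flagged component is a candidate for that friendly email to your supplier asking when they plan to apply the patch to their product.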

But there’s another risk associated with third-party software suppliers: What if there’s a vulnerability in their software (especially one that becomes publicly known) that they don’t patch? How do you mitigate that risk? You will of course want to have contract language with your supplier saying they will develop a mitigation for any unpatched vulnerabilities in third-party code, just as they should for any unpatched vulnerabilities in their own code.

But once again, you can’t simply count on contract language to mitigate this risk. You will also need an SBoM, and you will need to be notified when there is such an unpatched vulnerability (through one of the services that does this, since the supplier itself isn’t likely to). And you will also need to send an email to your supplier (or even call them. Does anybody here remember when the main way you communicated with somebody in business was to use your phone? I’m told it used to happen in the dark period sometime after the Spanish-American War, but of course I’m way too young to remember anything but email and texts) asking when they’ll have a mitigation available to address this vulnerability.

So it’s all pretty simple, right? Just get the SBoM from your supplier – that’s the key to mitigating these two important supply chain risks. I thought it would be simple, too, until I started having discussions with suppliers in the SCWG. I was surprised by the pushback I received from a couple of very respected vendors (whose dedication to good cybersecurity is unquestioned in the industry). It’s hard to know their motive in opposing this, but I believe it stems from a belief that incorporating this third-party code in their products gives them a competitive advantage, and a fear that disclosing information about it will allow their competitors to overcome that advantage.

Of course, there’s no way that an individual utility (except perhaps one of the very largest utility organizations) could force a supplier to provide them an SBoM. I do think this should be included as one element of the certification that NERC has said they would like to develop for vendors in the next year. The vendors wouldn’t be required to provide SBoMs, but they might find their score on the certification – if it comes to be – somewhat lower if they don’t do that. It will then be their choice whether or not to start offering one.

Of course, there isn’t any sort of near-term fix for this issue. I suggest that, if a supplier refuses to provide an SBoM, your contract terms on this issue make it clear that you consider this an important risk (assuming you do, once you’ve assessed all of your supply chain security risks to the BES and identified the ones that are important enough for you to mitigate), and that you perhaps require some sort of indemnification if a third party vulnerability goes both unpatched and unmitigated (but here I’m playing Tom Alrich, Boy Lawyer. I don’t provide legal advice! Or at least legal advice worth heeding).

Lew’s approach to third party supplier risk
Lew says the following about this topic:

My recommendation is that you should know as much about your equipment, software, and services as possible. I suggest that you document as much as you can about your BES Cyber Systems and their makeup, using your CIP-010 baselines and expanding on each baseline with as much detail as you can gather. From this information you can compose a list of hardware, software, and services that are used in your systems.

You can then assess your hardware, software, and service list based on risk. For example, you would probably assess the cyber security risk of a server power supply as very low. You would probably assess the cyber security risk of a network-connected out-of-band server management device as high or severe.

You should then be able to create a list of vendors of your devices, software, and services, and prioritize that list based on the assessed risk of each component a vendor supplies.

Of course, Lew is focusing more on third-party hardware in this discussion, whereas I’ve been focusing on third-party software. As such, our two approaches are very much complementary. I must admit it didn’t occur to me that a NERC entity would identify some of the higher-risk hardware components of say a server, and decide the risk posed by each component. But it’s possible to identify third party hardware components, since you can see them in a server or workstation, and hopefully identify their suppliers as well.

Once you’ve identified important third-party hardware components, you can reach out to their suppliers and ask to be notified of new patches to software or firmware, as well as other defects or unpatched vulnerabilities. And again, you should inform your supplier of the components that you consider most important, putting them on notice (perhaps through contract language, but perhaps through other means as well. Of course, contracts aren’t renegotiated every time a new vulnerability is identified) that you expect them to quickly address any issues that arise in these components.
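Lew’s three steps – expand your CIP-010 baselines into a component inventory, assess each component’s cyber security risk, and prioritize vendors by what they supply – lend themselves to a very simple sketch. The component names, vendors, and risk levels below are made-up examples, not anything from Lew’s article or CIP-013.

```python
# Sketch of Lew's inventory-then-prioritize approach: rank vendors by the
# riskiest component each one supplies. All data here is invented.

RISK = {"low": 1, "medium": 2, "high": 3, "severe": 4}

inventory = [
    {"component": "server power supply",     "vendor": "PowerCo", "risk": "low"},
    {"component": "out-of-band mgmt device", "vendor": "NetCo",   "risk": "severe"},
    {"component": "HMI software",            "vendor": "ScadaCo", "risk": "high"},
    {"component": "rack hardware",           "vendor": "NetCo",   "risk": "low"},
]

def prioritize_vendors(inventory):
    """Rank vendors by the highest-risk component they supply."""
    worst = {}
    for item in inventory:
        level = RISK[item["risk"]]
        worst[item["vendor"]] = max(worst.get(item["vendor"], 0), level)
    return sorted(worst, key=worst.get, reverse=True)

print(prioritize_vendors(inventory))  # ['NetCo', 'ScadaCo', 'PowerCo']
```

The output is the order in which you’d spend your vendor-engagement effort: the vendor supplying the out-of-band management device first, the power supply vendor last.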

When I started this post, I was thinking I could discuss all five or six questions that Lew answers in his article in one post. Looks like I was wrong about that. The remaining questions (most of them having to do with what is in scope for CIP-013 as of 7/1/20, and what isn’t) won’t require nearly as much time as this one, so I hope to discuss all of them in my next post (assuming the Russians or the cloud don’t kidnap me, as they’ve been doing recently).



Thursday, September 19, 2019

My panel at GridSecCon



The E-ISAC has chosen the panelists for the panel that I’ll be moderating at GridSecCon on October 23, and it’s a very interesting one. It includes:

Howard Gugel, Director of Standards Development at NERC
Francois Lemay of Hydro Quebec
Bryan Owen of OSIsoft
Dave Whitehead, Chief Operating Officer of Schweitzer Engineering Labs
Ginger Wright of Idaho National Labs

Of course, I (and the E-ISAC) am very pleased that SEL’s COO will be joining us. This is clearly an indication of the importance that SEL places on getting their message across to the electric power industry. The initial focus of the panel was going to be on utilities themselves, but someone made the decision to instead focus on the vendor side, since vendors haven’t so far had a forum like this to answer questions on how they will work with their customers to implement their supply chain security risk management programs.

It will be a good discussion!




Thursday, September 12, 2019

A new dimension to the second Ukraine attack

The folks at Dragos are once again finding some really interesting stuff, this time about what could have been the real intention of the Russian attackers in the second Ukraine attack. And since this is in Wired, it's a very well-written article.

Monday, September 9, 2019

Are you still unsure what it means to comply with CIP-013?



If so, you’re certainly not alone. Many NERC entities are still trying to work this out. I’ve been helping multiple entities develop their CIP-013 programs for almost a year now, and in doing so I’ve developed a methodology that applies to a NERC entity of any size or type. Now I’m offering a free two-hour webinar to any NERC entity that wishes to learn about this. The webinar is open to any employees involved with CIP-013 compliance and/or BES supply chain security risk management.

CIP-013 fundamentally requires that a NERC entity do four things:

  1. Develop a supply chain cyber security risk management plan for its BES Cyber Systems (R1);
  2. Include the six items in R1.2 in the plan, along with risks identified in R1.1;
  3. Implement the plan (R2); and
  4. Review the plan every 15 months (R3).
This is all there is in the language of the three requirements. A lot of people find it very hard to believe there isn’t a lot more hidden in CIP-013, and – especially if they’ve had unfortunate audit experiences with the current CIP standards – they dread that an auditor will come onsite three years from now and nail them for something they had no idea was required.

But those people are asking the wrong question. The question isn’t what needs to be in the plan in order for it to be judged compliant. Because R1 provides very little detail on what should be in the plan, there are lots of different plans that could be compliant. As long as an entity puts thought and effort into their plan (and includes the few things that are required, including the six items in R1.2), it will be compliant, at least for the first two or three years of enforcement.

So if compliance isn’t the issue with CIP-013, what is? CIP-013 is described as a “supply chain cyber security risk management” standard, and it is fundamentally about risk management; the subject matter of the risks it deals with is supply chain security. What’s important is a) managing your supply chain risks (and of course, we’re just talking about risks to Medium and High impact BES Cyber Systems) so that you mitigate only the most important risks to you and accept the others; and b) targeting your mitigation efforts so that you only mitigate risks that haven’t already been mitigated, by you or your vendor.

So the goal of my methodology isn’t just compliance – almost any well-thought-out methodology will be compliant – but mitigating only the most important risks and targeting mitigations so they aren’t wasted. Doing this will ensure that your organization mitigates the greatest total BES supply chain risk possible, given the resources you have available. You definitely don’t want to apply all or most of your resources toward mitigating risks that aren’t very important for your organization and its BES assets, or that have already been mitigated.
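To make the idea concrete, here is a minimal sketch of that prioritization logic in Python. To be clear, this is purely illustrative and is not part of CIP-013 or any NERC guidance: the risk names, the likelihood-times-impact scoring, the cost figures, and the budget are all hypothetical assumptions of mine, and a real program would use whatever ranking method the entity documents in its plan.

```python
# Illustrative sketch only: CIP-013 prescribes no scoring formula. The risk
# names, scores, costs, and budget below are hypothetical.

def prioritize(risks, budget):
    """Rank open risks by likelihood x impact, mitigate the most important
    ones the available budget allows, and accept the rest."""
    # Skip risks already mitigated by the entity or its vendor
    open_risks = [r for r in risks if not r["mitigated"]]
    # Highest score = highest priority
    ranked = sorted(open_risks,
                    key=lambda r: r["likelihood"] * r["impact"],
                    reverse=True)
    mitigate, accept = [], []
    for r in ranked:
        if budget >= r["cost"]:
            budget -= r["cost"]
            mitigate.append(r["name"])
        else:
            accept.append(r["name"])
    return mitigate, accept

risks = [
    {"name": "unpatched vendor firmware", "likelihood": 4, "impact": 5,
     "cost": 30, "mitigated": False},
    {"name": "vendor remote access", "likelihood": 3, "impact": 4,
     "cost": 50, "mitigated": False},
    {"name": "counterfeit hardware", "likelihood": 1, "impact": 3,
     "cost": 40, "mitigated": True},   # already handled by the vendor
]
print(prioritize(risks, budget=60))
```

With these made-up numbers, the highest-scoring open risk gets mitigated, the next one exceeds the remaining budget and is accepted, and the already-mitigated risk is never double-counted, which is exactly the “don’t waste mitigations” point above.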

This webinar is open to anybody in your organization who may be involved in the supply chain risk management/CIP-013 compliance effort, including people from procurement, legal, cybersecurity and – oh, yes – NERC compliance. The agenda is very flexible; we’ll develop it in a call before the webinar to make sure I’m addressing what you would like to discuss. If you’re interested in this offer, please drop me an email at tom@tomalrich.com, and I’ll send you more information.



[i] Last year I made a similar offer, and a number of entities took me up on it. The difference now is that the new webinar incorporates what I and my clients have learned as we’ve worked to develop their CIP-013 programs. The most important thing I’ve learned is that the basis of the methodology is the same for all NERC entities. Implementing it at a particular entity is essentially a process of tailoring the methodology for them, rather than re-creating it from scratch each time. This leads to a lot of efficiencies, since later clients have the advantage of building on what I created with earlier clients.

And what if you’ve already seen v1 of this webinar? I’d be glad to present V2!

Friday, September 6, 2019

It’s official: The event reported in March was a real cyber attack

Sept. 8: Blake had been away on Friday but was able to get me the free link to the article this morning. It's easier to read than the copy I appended below. 


E&E News this morning published the story below. It seems that the “cyber event” reported to DoE in March, which became public (again, due to an E&E News story) in April, was truly an attack. The attack exploited a new vulnerability in a firewall; the vendor (previously reported by the E-ISAC as Cisco) had already issued a patch, but the entity involved hadn’t applied it yet. The attack caused a control center and three generation facilities to repeatedly lose their connections over a ten-hour period, as their firewalls continually rebooted.

This was revealed in a Lessons Learned document from NERC, which now seems to be unavailable at the link that NERC sent out a couple days ago, and which is repeated in the article. What can I say? If you want to read the document (which is good, and definitely provides some good lessons), email me and I’ll send it to you.

Once you see the NERC document, you’ll know NERC’s Lessons Learned. Now I will tell you (whether or not you want to hear it) Tom’s Lesson Learned, to wit: There was a lot of talk downplaying this event as a random one, but an attack on four different (though linked) networks, which exploited a known vulnerability and was repeated continually over ten hours, isn’t random. Of course, even if it had been a purely random event, it would nevertheless have been a cyberattack, and worthy of analysis. Fortunately, it seems the E-ISAC (which investigated the incident and presumably wrote the Lesson Learned) pursued its investigation until it had all the facts; they are to be commended for this.

However, it seems the downplaying has continued. Reid Wightman of Dragos is quoted as saying “This was probably just an automated bot that was scanning the internet for vulnerable devices, or some script kiddie.” Let’s rephrase what Reid said, to see whether this is really something that’s no big deal: “All it took to cause this event was some script kiddie. Imagine what a skilled hacker could do.”

But of course, we don’t have to imagine that. All we have to do is consider:

  • DHS’ four briefings last July on Russian attacks on the US grid, mostly or all through compromising remote access systems of vendors to power organizations. These were brought to the country’s (and world’s) attention by Rebecca Smith of the Wall Street Journal a little more than a year ago.
  • Rebecca’s (and Rob Barry’s) January article, based on their own research, about how the Russians are compromising power industry vendors with phishing emails, and from there reaching into utilities’ IT (and possibly OT) networks.
  • At the end of that article (and also of my post linked above), there is this important paragraph: “Vikram Thakur, technical director of security response for Symantec Corp., a California-based cybersecurity firm, says his company knows firsthand that at least 60 utilities were targeted, including some outside the U.S., and about two dozen were breached. He says hackers penetrated far enough to reach the industrial-control systems at eight or more utilities.”
  • At the end of January, the directors of the FBI and CIA, as well as the Director of National Intelligence, presented to Congress the ODNI’s annual Worldwide Threat Assessment. The NY Times article on this event includes this sentence: “It specifically noted the Russian planting of malware in the United States electricity grid. Russia already has the ability to bring the grid down ‘for at least a few hours,’ the assessment concluded, but is ‘mapping our critical infrastructure with the long-term goal of being able to cause substantial damage.’”
  • In May, E&E News (with the same author as the other articles above, Blake Sobczak) published an article about the former deputy director of the NSA, which includes this quote from him: “Why are the Russians, as we speak, managing 200,000 implants in U.S. critical infrastructure — malware, which has no purpose to be there for any legitimate intelligence reason? Probably as a signal to us to say: We can affect you as much as your sanctions can affect us."

Of course, you would think that, with all of these reports, and especially the repeated statements (and implications) that the Russians have planted malware in grid control networks, there would be a big investigation, right? After all, if a script kiddie attack was worth a months-long investigation, and if the attacks in Ukraine (which were far less serious, and of course occurred in Ukraine, not the US) were thoroughly investigated, with reports and results presented in classified and unclassified briefings within a couple of months of the attacks, then surely the above reports call for a much more thorough investigation.

You would think so, wouldn’t you?


Here’s the article:
A first-of-its-kind cyberattack on the U.S. grid created blind spots at a grid control center and several small power generation sites in the western United States, according to a document posted yesterday from the North American Electric Reliability Corp.

The unprecedented cyber disruption this spring did not cause any blackouts, and none of the signal outages at the "low-impact" control center lasted for longer than five minutes, NERC said in the "Lesson Learned" document posted to the grid regulator's website.

But the March 5 event was significant enough to spur the victim utility to report it to the Department of Energy, marking the first disruptive "cyber event" on record for the U.S. power grid (Energywire, April 30).

The case offered a stark demonstration of the risks U.S. power utilities face as their critical control networks grow more digitized and interconnected — and more exposed to hackers. "Have as few internet facing devices as possible," NERC urged in its report.

The cyberattack struck at a challenging time for grid operators. Two months prior to the event, then-U.S. Director of National Intelligence Dan Coats warned that Russian hackers were capable of interrupting electricity "for at least a few hours," similar to cyberattacks on Ukrainian utilities in 2015 and 2016 that caused hourslong outages for about a quarter-million people.

The more recent cyberthreat appears to have been simpler and far less dangerous than the hacks in Ukraine. The March 5 attack hit web portals for firewalls in use at the undisclosed utility. The hacker or hackers may not have even realized that the online interface was linked to parts of the power grid in California, Utah and Wyoming.

"So far, I don't see any evidence that this was really targeted," said Reid Wightman, senior vulnerability analyst at industrial cybersecurity firm Dragos Inc. "This was probably just an automated bot that was scanning the internet for vulnerable devices, or some script kiddie," he said, using a term for an unskilled hacker.

Nevertheless, the case turned heads at multiple federal agencies, collectively responsible for keeping the lights on in the face of an onslaught of cyber and physical threats. The blind spots would have left grid operators in the dark for five-minute spans — not enough time to risk power outages but still posing a setback to normal operations.

NERC, DOE, the Federal Energy Regulatory Commission and the Western Electricity Coordinating Council, which monitors and enforces grid security in the western United States, have all declined to share the name of the utility involved in the March 5 incident or other details that they warn could jeopardize the reliability of the grid.

"Lessons learned are an anonymized resource that identifies the lessons and contains sufficient information to understand the issues, and show the desired outcome," NERC spokeswoman Kimberly Mielcarek said in an emailed response to questions, adding that the documents can be based on a "single event" or general trends.

The 'biggest problem'
The latest NERC "lesson" calls on utilities to add additional defenses beyond a firewall, which is designed to block malicious or unwanted web traffic from spilling into power companies' sensitive control networks.

In the March episode, a flaw in the victim utility's firewalls allowed "an unauthenticated attacker" to reboot them over and over again, effectively breaking them. The firewalls served as traffic cops for data flowing between generation sites and the utility's control center, so operators lost contact with those parts of the grid each time the devices winked off and on. The glitches persisted for about 10 hours, according to NERC, and the fact that there were issues at multiple sites "raised suspicion."

After an initial investigation, the utility decided to ask its firewall manufacturer to review what happened, according to NERC, which led to the discovery of "an external entity" — a hacker or hackers — interfering with the devices.

NERC stressed that "there was no impact to generation." Under federal rules, grid operators aren't normally required to report communication outages unless they last for a half-hour or more at a major control center. The fact that hackers, and not some more ordinary source, had caused the temporary blind spots in the incident prompted the victim's DOE filing.

"I'm sure [grid] communications have been disrupted by backhoes in the past," Wightman pointed out. He added that grid operators can pick up the phone and call remote sites to check on operations if normal lines of communication go down.

Wightman said the "biggest problem" was the fact that hackers were able to successfully take advantage of a known flaw in the firewall's interface.

"The advisory even goes on to say that there were public exploits available for the particular bug involved," he said. "Why didn't somebody say, 'Hey, we have these firewalls and they're exposed to the internet — we should be patching?'"

Large power utilities are required to check for and apply fixes to sensitive grid software that could offer an entry point for hackers. NERC declined comment on whether the March 5 incident would lead to any enforcement actions, though the nonprofit has levied multimillion-dollar cybersecurity fines against power companies in the recent past. Late last month, NERC announced it had reached a $2.1 million penalty settlement with an unnamed utility — also based out West — over a spate of cybersecurity violations dating back to 2009. Fines for breaking critical infrastructure protection rules are reported to FERC for final approval.

I want to thank Blake Sobczak for sending me the text of this article, which is normally behind a paywall.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. And Tom continues to offer a free two-hour webinar on CIP-013 to your organization; the content is now substantially updated based on Tom’s nine months of experience working with NERC entities to design and begin to implement their CIP-013 programs. To discuss this, you can email me at the same address.


Monday, September 2, 2019

NERC GridSecCon 2019



I first attended GridSecCon in its third year (2013), and after that I vowed I wouldn’t miss any more. In fact, I can say without hesitation that each year it gets better – and I have no doubt that this year’s (October 22-25 in Atlanta) will be the best yet. It’s the most important security conference/exhibition (and that applies to physical as well as cyber security) for the North American electric power industry, period.

So I was quite happy to be invited to lead a panel devoted to supply chain security at this year’s conference. The panel is entitled “Supply Chain Threat Vector”, and it will take place on Wednesday (Oct. 24) afternoon from 3:15 to 4:00. The description of the panel is concise: “Where risk managers should start in identifying their operational technology supply chain security risk.” Of course, when the panel is constituted and has a phone meeting, I’m sure we’ll flesh out exactly how we’re interpreting that mandate.

The other members of the panel still haven't been finalized; I'll announce them when I know them. I'll also announce the objective of our panel once we've had a chance to meet and discuss it (I have my own ideas for that, but I don't want to state them now and constrain what the panel decides on).

However, I can now give an answer to what some people might naturally ask: Is this panel about supply chain security or CIP-013 compliance? My unequivocal answer to that question is Yes; it is about both security and compliance. Some might then ask: How can this be? After all, every person working in NERC CIP compliance is taught from day one that security doesn’t equal compliance, and compliance doesn’t equal security. Why is CIP-013 different?

CIP-013 is different from the other approved CIP standards, because it doesn’t require the NERC entity to take any specific actions except:

  1. Develop a good supply chain cyber security risk management plan (R1);
  2. Implement that plan (R2); and
  3. Review that plan every 15 months (R3).
That’s it. The plan needs to include the six risks listed in R1.2 (and it’s really eight risks, since R1.2.5 and R1.2.6 both include two risks – kind of a “two for the price of one” deal), but R1.1 makes it clear that the plan needs to address all important supply chain cyber risks, not just those six (although addressing a risk will in most cases mean accepting it). You comply with CIP-013 by developing and implementing a good supply chain cyber security risk management plan, period and end of story. In other words, with CIP-013, compliance equals security and security equals compliance.

This means that the whole question of CIP-013 compliance comes down to what constitutes a good supply chain cyber security risk management plan for important BES Cyber Systems. On this question, the standard itself is silent. The single official guidance document from NERC (developed by the SDT in 2017) simply offers suggestions for what could be included in the plan, so it’s up to the NERC entity to decide what a good plan is. Our GridSecCon panel will aim to provide some suggestions for elements of a good plan. I hope to see you there!*


* I inserted this asterisk to point out an unfortunate circumstance that will require a lot of people involved with NERC CIP compliance (including a substantial number of NERC Regional CIP auditors and enforcement staff) to be elsewhere the week of GridSecCon: for either the third or fourth time, WECC’s semi-annual compliance workshops (including CIP compliance), called the Reliability and Security Workshop, are scheduled for that week, this time in Las Vegas.

The previous two or three times this happened were during the first two or three GridSecCons. I know that many entities in WECC complained about this, especially some of their CIP auditors, and WECC finally found a way to keep it from happening. In fact, last year WECC hosted the conference in Las Vegas (and held its workshops the next week in San Diego). I thought the problem had been solved for good.

And now it’s happened again. I don’t know whose fault it is, or what other circumstances may have required that WECC schedule their conference for the same week as GridSecCon, but it’s quite unfortunate that this has happened again, given the number of people that I’m sure would like to attend both events. Once again, these people are forced to choose between the two events – and for someone heavily involved in CIP compliance at a WECC entity, there really is no choice at all.

The WECC CIP workshops easily host 400-500 people. I know not all of them would want, or be able, to attend GridSecCon, but I’m sure that as a result of the scheduling conflict, the conference will be short at least 100 people who could have contributed immensely to the discussions, both during the official sessions and in conversations between attendees at other times.

And while I’m on it, I want to lodge another complaint with WECC. This year, the workshops in Las Vegas will cost $650, with no discount if someone wants to attend just the one day devoted to CIP and cyber security. This is far out of proportion (in fact, infinitely so) with what most of the other Regions charge for their CIP workshops, which is $0.00. And at the current rate of escalation (I think the one this spring cost around $450), it won’t be long before they reach $1,000.

Of course, even $1,000 isn’t too much to pay for good compliance information on standards that carry penalties of up to $1 million per day for non-compliance. (And I admit there is good food at the WECC meetings, though most other Regions provide good food as well, at little or no cost to attendees. When I attended WECC’s workshop in Boise in spring 2018, there was grumbling over the fee, which I think was in the range of $250-$350. In their survey forms, WECC anticipated this grumbling by asking respondents whether they would be willing to give up refreshments during the meeting, though not the breakfast and lunch, of course, in exchange for a lower fee. Give me a break!)

But I think WECC could certainly figure out a way to reduce the fee in the future (let alone not escalate it to $1,000). Here’s a suggestion: since WECC brings a small army of employees to this meeting (which is appropriate, given the size of its Region; for example, I know it has, or has openings for, over ten CIP auditors), it could probably save most of its travel costs by simply holding all compliance workshops in Salt Lake City from now on. SLC is a wonderful place to visit at any time of the year.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. And Tom continues to offer a free two-hour webinar on CIP-013 to your organization; the content is now substantially updated based on Tom’s nine months of experience working with NERC entities to design and begin to implement their CIP-013 programs. To discuss this, you can email me at the same address.