Sunday, February 17, 2019

Are you going to RSA?



If you’re attending the RSA Security Conference 2019 in San Francisco in three weeks, I want to remind you that I’ll be participating in a panel titled “Supply Chain Security for Critical Energy Infrastructure” at 8:00 AM on Wednesday, March 6. The panel I was on at last year’s conference sparked a good discussion (which you can listen to here). Since the panelists are the same (only our moderator is different; this year it’s Sharla Artz of UTC), I think it will again be quite interesting.

I just searched the conference web site for other events on supply chain security and the electric power industry. Our panel is one of the few addressing the power industry, but there are a number of other panels and presentations that touch on supply chain security in one way or another. So you might justify coming to this year’s conference by the fact that you’ll get a leading-edge update on this topic.

BTW, if you’re thinking of coming but are despairing of finding a hotel room, I want to point out that there are still a lot of Airbnbs available, at prices that don’t seem much different from what they were a couple of months ago. Not only will you save a LOT of money, but you just might end up staying in a part of town you would find interesting (there are a lot of those in San Francisco, of course). For example, I got a great Airbnb for last year’s conference that was a couple of blocks from Golden Gate Park. I got hooked on running there every morning – including through a redwood grove and the wonderful gardens. So this year I’m staying near there again. Can’t wait!


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.

Sunday, February 10, 2019

“Curiouser and curiouser!” cried Alice…


Imagine what might happen if the following news was announced:

“The Ukraine State Intelligence Service stated in its just-released Worldwide Threat Assessment that Moscow is now staging cyberattack assets to allow it to disrupt or damage the Ukraine’s civilian and military infrastructure during a crisis.

“It specifically noted the Russian planting of malware in the Ukraine electricity grid. Russia already has the ability to bring the grid down “for at least a few hours,” the assessment concluded, but is ‘mapping our critical infrastructure with the long-term goal of being able to cause substantial damage.’”

And what if this news came only a few weeks after a Wall Street Journal article quoted the Technical Director of Security Response of Symantec Corp. as saying “…about two dozen Ukrainian utilities were breached. Hackers penetrated far enough to reach the industrial-control systems at eight or more utilities”?

Don’t you think this would cause a big stir? After all, in 2015, when the Russians staged a successful attack on three Ukrainian distribution utilities, causing about a five-hour outage that affected hundreds of thousands of people, the news hit the US power industry like a thunderclap. Top security professionals from the Department of Homeland Security, the NERC E-ISAC, SANS, DoE and other organizations immediately jumped on planes and headed to the Ukraine to investigate this. DHS held briefings in many American cities. Reports were published detailing what had happened down to the minute.

This was considered a watershed for the power industry worldwide (the first reported loss of load due to a cyber attack), and – while many industry observers gloated that the Russians would never be so successful in the US, due to much stronger cyber security controls here and also due to the NERC CIP standards! – many others weren’t so sure, and said the Ukraine situation was more a case of “There but for the grace of God go I.”

Yet the 2015 attacks were on just three distribution utilities. Since the attacks described above breached two dozen utilities and penetrated the control systems of eight of those, it’s a very good assumption that malware was planted that could lead to a far more serious outage. Don’t you think there would be a much bigger response to these new reports? More specifically, don’t you think there would be another big investigation, for two reasons? First, out of simple goodwill toward the Ukrainian people, since they face a huge and ruthless foe? And second, out of concern that whatever attacks the Russians are conducting in the Ukraine are tests for attacks they could use on power grids worldwide?

At this point, you’re supposed to say “I would certainly think so!” And I agree with you 100%.

Well, the quotes above were actually published, the first in the Times and the second in the Journal. But there were a couple of small differences between what I’ve quoted above and the actual quotes. One is that the country in question was the US, not the Ukraine. The other is that the 2019 Worldwide Threat Assessment was written by the US intelligence community, not a Ukrainian service. I wrote about the NYT article in this post and the WSJ article in this one.

Yet where is the outrage? Where are the frenzied press releases and briefings? And where are all of the investigators rushing to find out what happened? Does anyone know where they are? I hope we don’t have to put them on milk cartons.

Let’s be clear. The Times quoted the 2019 Worldwide Threat Assessment put out by the US intelligence community as saying

  • Moscow is now staging “cyberattack assets” (which presumably include malware) to allow it to disrupt or damage our civilian and military infrastructure during a crisis.
  • Malware has been implanted in the US grid that could be used today to cause outages.
  • Perhaps most ominously, Russia is mapping our critical infrastructure with the long-term goal of being able to cause substantial damage.

At the same time, Symantec, which has collaborated with DHS in investigating the Russian attacks in the US, is saying very specifically that at least eight US utilities have been penetrated at the control system level, meaning malware is almost certainly planted in all of them. Hopefully the eight utilities don’t include Southern California Edison, PG&E, ConEd, Commonwealth Edison, CenterPoint and other utilities serving major metropolitan areas. But even if they’re all small distribution-only coops in the middle of North Dakota, eight US utility control networks penetrated is still eight more than are known to have been penetrated previously. And as we know, utility control centers are by their very nature connected to other utility control centers, as well as to Regional Transmission Organizations like PJM. The infection might very well spread.

Here’s another quotation from the January WSJ article: “In briefings to utilities last summer, Jonathan Homer, industrial-control systems cybersecurity chief for (the Department of) Homeland Security, said the Russians had penetrated the control-system area of utilities through poorly protected jump boxes. The attackers had ‘legitimate access, the same as a technician,’ he said in one briefing, and were positioned to take actions that could have temporarily knocked out power.” Again, Mr. Homer wasn’t saying that outages were caused, but the fact that the Russians were “positioned” to do that almost certainly means they’ve planted malware in control systems operated by at least two utilities (since he used the plural).

Of course, none of these reports should just be taken at face value. Some of the people quoted may not have fully understood what they were saying; e.g. they may have meant “small generating plants” when they said “utilities”, etc. And I don’t know what kind of power expertise the FBI and CIA have, but it’s possible they may be misinterpreting data they’ve received. So there’s reason to be skeptical of these reports.

But here’s an idea: If we’re skeptical of these reports, why don’t we…you know…investigate them to determine whether they’re accurate or mistaken? Yet I’ve heard literally nothing about any investigation. Nor have I heard the slightest bit of outrage expressed – by the Federal government, the power industry, you name it – that the Russians are taking such deliberate steps to potentially cripple the US economy and our military capabilities. And DHS has amply documented that they are taking those steps, whether or not they’ve actually penetrated control networks. They’re trying really hard.

This lack of a response is more than passing strange. I would very much like to see one (or more) of the following organizations investigate this (they’re not in any particular order):

  1. The NERC E-ISAC
  2. FERC
  3. Idaho National Lab
  4. SANS
  5. DoE’s Office of Cybersecurity, Energy Security, and Emergency Response (CESER)
  6. Dragos, Inc. (which did a great job of investigating the malware used in the second Ukraine attack, and due to that and other smart moves has become almost an ICS security institution, much to its credit)
  7. Hercule Poirot
  8. James Bond
  9. Judge Judy
  10. Sam Spade

In other words, I would like to see somebody get to the bottom of this and let us know what happened. And of course, if it turns out that malware has actually been implanted, wouldn’t it be kind of a good idea to…you know…let utilities know about it – so their cyber staff might just mosey over to their control systems, to see if the malware might be sitting there, too? Why would they want to do this, you ask? Well, curiosity for one reason – it would certainly be interesting to know if your employer was a member of the first group of US utilities ever to be breached at the control system level. But also – and this might sound silly to you – it did occur to me that utilities might actually want to remove malware that’s implanted in their control networks. But they would need to know what to look for, since it’s not likely the Russians named the files Malware1, Malware2, etc. This is of course the main reason why we need an investigation, and I find it literally incomprehensible that one wasn’t launched at least after the Worldwide Threat Assessment in January.

As I pointed out in my previous post on this, there really are two investigations in question now. The immediate one is the one I just described – a technical investigation by experts. The second would probably be a criminal one. It is only needed if it turns out that the reports of Russian penetration of utility control centers are true, and that somebody deliberately tried to suppress them last summer, when Jonathan Homer of DHS first made them and people at DHS soon put out at least three mutually contradictory stories minimizing what the Russians had achieved. I certainly hope this second investigation isn’t needed – but again, unless we do the first investigation, we’ll never know if the second one is needed, will we?

Curiouser and curiouser, indeed!


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com

Sunday, February 3, 2019

My quote in the WSJ



Yesterday’s online Wall Street Journal carried a very good article by Rebecca Smith, titled “Duke Energy Broke Rules Designed to Keep Electric Grid Safe”, on what I can finally call the Duke Energy penalty of $10MM for NERC CIP violations. The last paragraph (which was cut out of the print edition) reads:

“The state of compliance is pretty rotten,” said Tom Alrich, a utility consultant who helps utilities audit their security programs. He added that he knows Duke spends a lot of money on its critical infrastructure protections. “I really doubt they are much more insecure than anyone else,” he said.

While I said those words, I do want to point out that they were part of a much longer discussion with Rebecca on Friday, which I believe she will expand on in a future article. Because I think my comments above are likely to be misunderstood without context, here’s a summary of the argument I was making (although I’ll admit that it was only partway into our discussion that I began to understand what I now believe to be the biggest issue in the Duke penalty. The neatness of my exposition below – if it is neat – is only with the benefit of hindsight!).

  • I think the cyber security of the North American Bulk Electric System is very good, and that is in large part due to the CIP standards; I have had this opinion for years.
  • But there is a big problem with CIP (actually, there are four or five, but this is the biggest): The requirements don’t follow a risk-based approach, in which the entity first identifies the most serious cyber security risks it faces, as well as the most important systems that need to be protected. This would allow NERC entities to allocate their scarce cyber security resources toward mitigation of the biggest risks, and to the systems that most need to be protected.
  • The CIP-013 and (to some extent) CIP-014 standards follow this risk-based approach, as do individual requirements like CIP-010-2 R4, CIP-011-2 R1 and CIP-007-6 R3. On the other hand, there are very prescriptive requirements (the two worst offenders being CIP-007 R2 and CIP-010 R1) that require an inordinate amount of resources to comply with. And even with a lot of resources available, it is literally inevitable that a large organization like Duke will regularly suffer multiple failures on these requirements. I know all of them do.
  • Of course, Duke evidently had more than their share of failures; this was without a doubt in part due to the number of acquisitions they have made of late, which almost always leads to compliance problems. And NERC rightly points to management failures as the main cause of the problems, which probably explains the size of the fine. As we know, sometimes it takes a shocking event to get management’s attention.
  • NERC also points out (in the second paragraph on page 12 of part 1; because of the security settings on the PDF, I can’t copy any text from the document) that the risk to the BES in Duke’s 127 CIP violations is due much more to their sheer quantity, and to the fact that many of them were of long duration and were repeated, than to the number that were “serious” (13 were serious, while the rest posed either “minimal” or “moderate” risk).
  • As further evidence, “Jason R” (whom I know well, but I’m withholding his name at his request – not an unusual request in the utility industry!) pointed out in a comment posted yesterday on my last post that a quick scan of the requirements that Duke violated didn’t indicate any violations of requirements that apply only to High impact assets. Since High impact Control Centers are the crown jewels of the BES, this implies that the violations were concentrated in Medium impact substations or generating stations. An attack on any one of these (or even a number of them) would be much less serious than an attack on a single High impact Control Center. This supports NERC’s statement in the previous bullet point.[i]

But don’t get me wrong. I’m certainly not saying that Duke is being unjustly punished for minor violations that everybody else does all the time. They are definitely a very serious violator of NERC CIP, and their penalty reflects that fact.

But here’s the rub: What does this mean for security? Was (or is) Duke a ripe plum just waiting for a halfway-competent Russian-sponsored hacker, or maybe even a script kiddie in Finland, to take over and hold maybe five or ten percent of the Eastern Interconnect hostage (perhaps until the US recognized the Donbass area of the Ukraine as an independent country)? Or would destructive software like Shamoon or NotPetya have trashed their control centers and darkened major cities for days? No, I don’t think so. This was a major compliance miss by Duke, but I’m sure (and NERC says as much, multiple times) that the chance of any cascading outage (or even just a local outage) was always very remote.

So what will be the effect of Duke’s penalty? Obviously, they’re going to have to devote even more resources to CIP compliance than they already do (and I’m sure they already are). They especially need to beef up their compliance management and oversight processes, which NERC says are the big problem. This will definitely make Duke much more CIP-compliant.

Now, let’s think about the overall security of the grid (or at least Duke’s portion of it); will that be enhanced to the same degree as Duke’s compliance posture? Not a chance, and here’s why. Any NERC CIP compliance professional will tell you that a large percentage of the money and time that their organization spends on CIP compliance doesn’t go to enhancing the security of the grid, but to mandated activities that consume amounts of time way out of proportion to whatever security benefit they provide.

My poster child for this is CIP-007 R2.2, which requires that the entity check every 35 days with every vendor of any software or firmware installed on any device within a Medium or High impact Electronic Security Perimeter, to see if a new security patch has been released since the last check – and this regardless of whether the system is extremely critical or just mildly important, or whether the vendor has released a single security patch in the last twenty years. And the entity has to document – again, for every version of every piece of software or firmware installed on any device in a Medium or High impact ESP – that all of this was done every 35 days, since a NERC rule is that if you didn’t document it, you didn’t do it. For a large entity like Duke, this probably amounts to hundreds or even thousands of software or firmware packages (and each different version thereof, of course) that need to be checked on every month, along with documentation that each one was checked.
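To make the scale concrete, here is a minimal sketch (in Python) of the kind of tracking data this one requirement part generates. Everything here – the names, the fields, the two sample entries – is hypothetical and of my own invention; NERC doesn’t prescribe any format, only that the checks happen and are documented:

```python
from dataclasses import dataclass
from datetime import date, timedelta

CHECK_INTERVAL = timedelta(days=35)  # the CIP-007 R2.2 clock

@dataclass
class PatchSourceCheck:
    """One row of evidence: one software/firmware version, one 35-day check."""
    software: str        # e.g. "RTU firmware" (hypothetical)
    version: str         # every installed version is tracked separately
    source: str          # the vendor or other patch source consulted
    last_checked: date
    new_patch_found: bool

    def next_check_due(self) -> date:
        return self.last_checked + CHECK_INTERVAL

# A large entity could have hundreds or thousands of these rows, every one of
# which must be refreshed and documented each cycle, whether or not the source
# has ever released a patch.
inventory = [
    PatchSourceCheck("RTU firmware", "4.2.1", "vendor support portal",
                     date(2019, 1, 15), new_patch_found=False),
    PatchSourceCheck("HMI runtime", "10.0", "vendor mailing list",
                     date(2019, 1, 20), new_patch_found=True),
]

overdue = [c for c in inventory if c.next_check_due() < date.today()]
print(f"{len(overdue)} of {len(inventory)} checks are overdue")
```

Multiply those two sample rows by every version of every package on every device in every ESP, and you get a good idea of where the compliance dollars go.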

I’ve asked a number of CIP people to estimate what percentage of every dollar they spend on NERC CIP compliance actually goes to cyber security; their estimates have ranged from 25% to 70%. Think about it. They’re saying that the absolute best case is that 70% of spending on CIP compliance actually goes to enhance BES security, and the worst case is that only 25% does! I honestly think the median is probably around fifty percent.

So it’s clear: Duke will spend a boatload of time and money on enhancing their compliance posture, and while this will all go to improve compliance, probably only about half of it will go to enhancing their security posture. Moreover, even if 100% went to security, that still wouldn’t make them secure, since there are so many cyber security threats (and they’re unfortunately growing all the time) that aren’t addressed in CIP at all. Four big ones I can think of without breaking a sweat:

  1. Phishing, the ultimate cause of the Ukraine attacks, and the main focus of the current Russian attacks on the US grid. It is very hard to even think of a major cyber attack in the last few years that didn’t start with phishing.
  2. Ransomware, which has already forced a couple of US utilities to pay ransom, but which fortunately hasn’t yet penetrated control networks. In this category I also include NotPetya, at $10 billion in damages by far the most destructive cyber attack ever (of course, courtesy of our good friend Mr. Putin).
  3. Machine-to-machine access to BES Cyber Systems. This is specifically exempted from control by CIP now. This threat will be mitigated to some degree when CIP-013 comes into effect on July 1, 2020. Meanwhile, it’s what worries me most about the Russian attacks on vendors – that they could take over vendor-operated devices that communicate directly with BCS. It’s very hard to see how an entity could cut off a well-planned MtM attack before it could cause serious damage.
  4. Vulnerabilities in custom software, running on BCS, that was developed by or for the NERC entity. Since there aren’t typically regular patches made available for custom software, it isn’t covered at all by CIP-007 R2 - and this threat also won’t be addressed in CIP-013. When government supply chain security professionals heard, in a presentation by Howard Gugel of NERC at the Software and Supply Chain Assurance conference in McLean, VA last September, that vulnerabilities in custom software aren’t addressed in any way in any of the CIP standards including CIP-013, they were incredulous.

Of course, I’m sure utilities are spending money and time mitigating these threats (and many others not addressed at all in CIP), because they know it’s important. But here’s the root of the problem: Money doesn’t grow on trees, even for big utilities like Duke. Ultimately, if Duke has to spend a lot more money on NERC CIP compliance than they have been, at least some of it will come from mitigation of the cyber risks that aren’t addressed at all in CIP. And to the extent that these risks are more important than some of those addressed by CIP (e.g. does the risk of not identifying a new security patch within a month, for a few low-risk pieces of software, justify the enormous amount of resources currently being spent by NERC entities on compliance with CIP-007 R2.2 – especially when that same amount of resources might go to, say, additional anti-phishing training, or code review of custom software to find vulnerabilities?), this will probably – dare I say it? – result in an actual weakening of Duke’s cyber security posture, not a strengthening of it. And there’s nothing I just said that doesn’t apply to every other NERC entity with Medium or High impact BES assets, even those that have so far been fortunate enough to avoid $10MM fines.

I’ll admit that what I’ve just written goes far beyond what Rebecca and I discussed on Friday! Hopefully she’ll be able to explore this topic further in a future article. And whether or not she does, I will certainly do that in future posts (as I already have in a number of previous posts, attacking different sides of the problem). You have been warned.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.

[i] The WSJ article specifically points to violations of CIP-005 R2 as serious (i.e. it seems some Medium or High impact BES Cyber Systems weren’t protected by two-factor authentication and/or encryption of Interactive Remote Access). If these were widespread (the document is so highly redacted that it’s impossible to tell whether there were a few systems unprotected or a thousand), I would agree this was a serious vulnerability (hopefully it’s corrected now). This is especially a concern in light of the ongoing Russian cyber attacks, which are in part trying to use IRA as a means of accessing Electronic Security Perimeters.

Wednesday, January 30, 2019

A new record!



I think a lot of my readers will already know this, but if you don’t – NERC just announced the largest-ever CIP fine, which adds another digit to the previous largest fine: $10 million even (in fact, I imagine this figure, being the smallest possible eight-digit amount, was deliberately chosen for its ability to strike terror into the hearts of utility compliance folks nationwide). It’s all outlined in a voluminous four-part Notice of Penalty totaling over 700 pages. I’ve only seen the first part, available here, and that alone is 250 pages! Naturally, I’ve only skimmed through it, and I’m not sure when I’ll read all of part 1, let alone all four parts.

Of course, the name of the entity (or really entities; in fact, the organization is always referred to as “The Companies”) isn’t provided. Beyond that, NERC has redacted all information that might point to a particular NERC Region (although it’s clear there were at least two or three Regions involved); NERC clearly believes it would constitute a big threat to the BES to provide any information that might lead to identification of the entity.

However, I’m much more interested in what the violations were, and what overall lessons can be learned by other utilities. There are 127 violations, covering all currently-enforced CIP standards including CIP-014. I’ll leave the details of the violations for you to read, but I call your attention to pages 10-13, which discuss a) facts common to the violations (i.e. common causes); b) risks common to the violations; and c) mitigations common to the violations.

Since the PDF’s security settings prevent copying any text to paste here, I’ll summarize. First, the common causes they point to are:
  • Lack of management engagement and support for the CIP program;
  • Program deficiencies, including deficient documents, training, and implementation;
  • Lack of communication between management levels in the company; and
  • Lack of communication between business units on who is responsible for which tasks.

The entity committed to:
  • Increasing senior leadership engagement and oversight;
  • Creating a centralized CIP oversight department;
  • Conducting industry surveys and benchmarking regarding best compliance practices (I admit I have a hard time understanding this one. I have never yet seen any sort of comprehensive industry survey of compliance practices – mainly because for a utility to provide that information would almost always require providing BES Cyber System Information at the same time);
  • Continuing to develop an in-house CIP program and talent development program;
  • Investing in enterprise-wide tools (configuration management, etc.);
  • Adding security and compliance resources;
  • Instituting annual compliance drills (that’s an interesting idea; I hadn’t heard of that before); and
  • Creating three levels of security and compliance training.

These are the common mitigation actions the entity committed to:
  • Revising their corporate IT compliance program so that it meets the requirements of all stakeholders;
  • Requiring each business unit to revise its procedures and controls so that they follow the corporate IT program;
  • Having each business unit document and track its controls for CIP compliance; and
  • Documenting how each non-compliance listed in the settlement agreement was mitigated, and how this will prevent recurrence of the violation (of course, that document will be about three times the length of the NOP. There’ll be a whole lotta writin’ going on!).

I must say that I have yet to hear of any utility that couldn’t also benefit from at least a few of these same practices. Go thou and do likewise.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.

Tuesday, January 29, 2019

We need an investigation!



This is a post I’ve been intending to write ever since I wrote this post a few weeks ago, about the Wall Street Journal’s most recent article on the Russian cyber attacks on the US power grid. I thought I would take my time (and I don’t have a lot of free time lately, due to my day job) to write it, since there were still questions in my mind about the position I wanted to take. I wanted to make sure I provided enough supporting evidence for my position.

However, there was a development today that provided all the supporting evidence I could possibly need. Specifically, this was a report in the New York Times about the testimony before the Senate Intelligence Committee (and don’t tell me that name is an oxymoron!) by Gina Haspel, the CIA director; Christopher Wray, the FBI director; and Dan Coats, the director of national intelligence. They were discussing the 2019 “Worldwide Threat Assessment”, which was released today. Of course, the testimony covered a lot of different topics, but what struck me were these two paragraphs from the Times article:

The assessment also argues that while Russia’s ability to conduct cyberespionage and influence campaigns is similar to the one it ran in the 2016 American presidential election, the bigger concern is that “Moscow is now staging cyberattack assets to allow it to disrupt or damage U.S. civilian and military infrastructure during a crisis.”

It specifically noted the Russian planting of malware in the United States electricity grid. Russia already has the ability to bring the grid down “for at least a few hours,” the assessment concluded, but is “mapping our critical infrastructure with the long-term goal of being able to cause substantial damage.”

So why is this so important? You’ve heard it before, right? Specifically, you may have noted, in the above-linked post on the recent WSJ article, that I quoted this paragraph from that article:

In briefings to utilities last summer, Jonathan Homer, industrial-control systems cybersecurity chief for Homeland Security, said the Russians had penetrated the control-system area of utilities through poorly protected jump boxes. The attackers had “legitimate access, the same as a technician,” he said in one briefing, and were positioned to take actions that could have temporarily knocked out power.

The quote from Jonathan Homer first appeared in the July WSJ article by Rebecca Smith, one of the two reporters who wrote the recent article. Of course, the July article set off a firestorm of amplifications by many other news outlets, and a chain of events that I wrote about in ten posts last summer, starting with this one.

Here is as brief a summary of previous events as I can make, while still providing the important facts:

  1. DHS (specifically the NCCIC, which incorporates what was the ICS-CERT. And if you think this is TMA – too many acronyms – I couldn’t agree with you more!) announced a series of four briefings to provide an update on the Russian cyber attacks against the US electric power industry, which they had first announced last March. Even though the March report said only generation was the target, and the Russians hadn’t penetrated any control systems at the plants[i], the first briefing on July 23 painted a very different picture, which was vividly described in the first WSJ article. It seemed very clear from what was said (as quoted in the article – I didn’t attend that first briefing) that the Russians had penetrated control centers (definitely plural) of US utilities, where they had most likely planted malware; and that malware might well be used at some point to cause a major grid disturbance.
  2. I was skeptical that actual control centers of power transmission or distribution utilities had been penetrated, and I said in my post the day after the WSJ article appeared (linked two paragraphs above) that what the presenters must have meant was that control rooms of generating plants were penetrated. This can’t produce a major grid outage, but having a bunch of plants go down at one time would certainly be annoying; given the alarmist tone of the first briefing, I assumed there must have been a number of substantial plants penetrated (at the control system level, of course) – I guessed up to 25. But my biggest reason for skepticism about the WSJ article was that, if it were really true that a bunch of utility control centers were penetrated, there would have been alarm bells ringing at the highest level of government, and utilities would pretty much have been told to drop everything and look for malware on their control systems, as well as take further steps to beef up their already-strong defenses. Given that those bells never rang, I found it very hard to believe the statements quoted in the article. I assumed the statements in that first briefing were the product of a few DHS people getting overly excited, and thinking that exaggerating the seriousness of the situation would make utilities pay a lot more attention to cyber security (and it would be hard to see how they could pay much more attention than they already do!).
  3. However, the day after that post – July 26 – it was reported that a DHS spokesperson announced that, not only were no utility control centers penetrated, but the only control systems penetrated were those in a small generating plant that couldn’t have any significant grid impact. This I found very surprising, to say the least. Yea, greatly was I wroth, and I rent my garments in frustration. But I continued to attribute the tone of the July 23 briefing to over-zealousness on the part of the NCCIC staff members who led it.
  4. I continued in that belief even though a friend pointed out to me the next day that the slides from the July 23 briefing directly contradicted the later statement that only one small plant was penetrated. And I continued to continue in that belief when Rebecca Smith wrote a new article that seemed to still follow the narrative from the first briefing, and didn’t mention the DHS walkback at all. I expressed amazement that she wouldn’t have changed the tone of her articles, and attributed this to her being either naïve or having lived in an inaccessible cave for the past few days (I now greatly regret the tone of my remarks about Rebecca, and want to apologize to her. It seems I may have been the one living in a cave, not her. Continue reading, to see what I mean).
  5. Not being satisfied with just putting out three different stories of what the Russians had achieved, DHS put out another story – which contradicted the other three – at a July 31 briefing for top utility executives in New York, which the Secretaries of DHS and DoE both participated in. This time, the story was that only two wind turbines had been penetrated. I later castigated DHS for being so confused in their stories, and in particular for not stepping forward to point out what seemed to be the errors in the WSJ story, and the flurry of news reports based on it. But I continued to believe there was no way the original DHS briefing could be true.
  6. And I’m proud to report that I witnessed firsthand the promulgation of yet another DHS story, trying to walk back the original briefing story. This one came at the Software and Supply Chain Assurance Forum in McLean, VA in late September. There, a fairly low-level NCCIC employee – although the head of NCCIC had already addressed the same meeting, and may still have been in the room – stated that the confusion was that, in the first briefing, the speakers didn’t understand the difference between vendors and utilities. Therefore, when they were saying that utilities were penetrated, they really meant vendors. Since there’s no dispute that vendors were penetrated (and the latest WSJ article describes how in vivid detail), the speaker implied (although he didn’t state it) that this is why the original briefing was so different from the true story – which would presumably be one of the three DHS walkbacks already described. I found this statement amazing, especially because the speaker was able to keep a straight face when he said it. I couldn’t have done that.

So now we’re back at the recent WSJ article, from which I also quoted this paragraph:

Vikram Thakur, technical director of security response for Symantec Corp., a California-based cybersecurity firm, says his company knows firsthand that at least 60 utilities were targeted, including some outside the U.S., and about two dozen were breached. He says hackers penetrated far enough to reach the industrial-control systems at eight or more utilities. He declined to name them.

This completely turns things around, in my opinion. After all, “eight or more utilities” isn’t two wind farms or one small CT plant, period. So either Mr. Thakur isn’t telling the truth, or he and the speakers at the original DHS briefing (especially Jonathan Homer) are the ones telling the truth – in which case the four later attempts by DHS to walk back this story are themselves based on “alternative facts”.

However, as I mentioned above, I was still hesitant to write something about this until I was sure I had all the facts straight about who said what when – that is, until I read the NY Times article a couple of hours ago. Now it seems the national intelligence community is firmly on the side of Mr. Thakur and Jonathan Homer. Even so, I find it very hard to conclude that they’re right, simply because there hasn’t been any huge hue and cry over this penetration of our grid. I think that would truly constitute a national emergency (in contrast to the “national emergency” currently being discussed). You remember all the frenzy that (rightly) surrounded the announcement of the first Ukraine attack in 2015? This would be literally ten times as great, and it should be.

So I think there need to be two investigations. The subject of the first, and by far the more urgent one, is whether it’s really true that malware has been implanted in utility control centers by the Russians. Of course, if that’s the case, there needs to be a major effort to remove it, and to hold Russia accountable (in fact the relatively weak response so far to the undisputed fact that they have been trying so hard to penetrate the US grid – whether or not they’ve succeeded – is something I also don’t understand. Or maybe I do understand it, which is even scarier). And there’s probably a lot more that needs to be done, including perhaps with the CIP standards.

The second investigation isn’t as urgent, but in my mind it’s even more serious: How did it happen that DHS quickly fell all over itself to walk back what was said in the first briefing last July, if in fact that briefing was largely correct – and the Russians had penetrated utility control centers? That is something for the Department of Justice, since it’s definitely a criminal investigation – one involving national security.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.


[i] Although I just noticed a quote where it seems someone from DHS did imply in March that utility control centers were penetrated and malware had probably been implanted. I must have missed that part, as I assume the rest of the industry did as well – since I don’t remember any big hue and cry then, either.

Thursday, January 24, 2019

What is the purpose of CIP-013?



Lew Folkerth of RF published an article about CIP-013 in December in the RF newsletter, which I wrote about in this post and in this one. In that article, Lew said that the supply chain cyber security risk management plan required by CIP-013 R1.1 needs to demonstrate that it achieves the objective(s) of the standard. And what are they? In his article, Lew repeated the four objectives that FERC had outlined, both in their Order 829 of June 2016 that required NERC to develop a supply chain security standard and in Order 850 of last October, which approved CIP-013. These objectives are

  1. Software integrity and authenticity;
  2. Vendor remote access protections;
  3. Information system planning; and
  4. Vendor risk management and procurement controls.

However, being very bright (and to prove that’s true, my mother always said I was bright!) and an astute reader, I pointed out that there’s an even simpler statement of CIP-013’s purpose, in Section 3 near the beginning of the standard: “To mitigate cyber security risks to the reliable operation of the Bulk Electric System (BES) by implementing security controls for supply chain risk management of BES Cyber Systems.” I pointed out that all of FERC’s four items are included in this statement, so I thought this should really be the objective that entities must achieve in their plan(s).

But, after having done some pretty intensive reading of various documents having to do with CIP-013 and supply chain security, I came to realize that FERC’s statement is pretty good after all, and has the advantage of at least providing some substance to the meaning of the words “cyber security risks” in the Purpose statement. In other words, the Purpose statement is pretty broad, and doesn’t provide a lot of guidance to the entity in developing the plan, or to the auditor in auditing it. With FERC’s four things, the auditor has at least something to go on in the audit, while at the same time the entity has a (very) broad outline of what its plan needs to address. So I am now fine with Lew’s statement that FERC’s four objectives constitute the purpose of CIP-013.

Of course, these four things are far from being a roadmap to compliance with CIP-013! Lew’s article does give some clues to that roadmap as well, which I elaborated on in the two posts already linked. I’ll continue to elaborate on the roadmap in the next post in that series. But I do want to point out now that these four items don’t have equal standing, in my opinion. The last two constitute the two broad areas of risk that must be addressed in the supply chain risk management plan, while the first two are simply two of the individual risks that are included under the third objective. So FERC’s four objectives could be summarized by just listing the last two.

This all means that your CIP-013 R1.1 supply chain cyber security risk management plan must address risks of “information system[i] planning” and “vendor risk management and procurement controls”. And you need to show the auditors that your plan addresses both types of risk.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.

[i] It’s unfortunate that FERC used the term “information system”, when they should really have said “control system” (although I initially thought there might be some significance to the fact that they did, as I discussed in this post after FERC issued Order 829 in 2016). Of course, NERC CIP doesn’t deal at all with information systems, whose purpose is to store and process information. The power grid, like other critical infrastructures, is controlled by control systems. These are what CIP protects.

Thursday, January 17, 2019

Lew on CIP 13, part 2 of (integer < 10)



Note: I expect to have the second post in my new series on the Russian cyber attacks up early next week. 

This post is the second of a set of posts on an excellent article by Lew Folkerth of RF (the NERC Region formerly known as ReliabilityFirst) on CIP-013, the new NERC supply chain security standard; the first post is here. That post dealt primarily with how Lew characterizes the standard and started to discuss what he says about how to comply; this one continues the discussion about how to comply with the standard. The next post will continue the compliance discussion and also discuss how CIP-013 will be audited, although it may not be the last in this series (sorry to disappoint you!).

I also want to point out that what I am saying in this series of posts goes beyond what Lew said in his article, for two reasons:

  1. Lew doesn’t have much space for his articles, whereas I have plenty for my posts. So where he has to use ten words, I can write five paragraphs. And I have no problem with doing that, as any long-time reader will attest.
  2. While I firmly believe everything I say in this series of posts is directly implied by what he said in his article, it’s natural that I would be able to discuss these topics in more detail, because I’ve had to figure out a lot of the details already – since I’m currently working with clients on preparing for CIP-013 compliance. Of course, what I write in these posts is by necessity very high level; there are details and there are DETAILS. These posts provide the former (see the italicized wording at the end of this post to find out how to learn about the latter).

What risks are in scope?
As I have pointed out in several posts in the past, and also pointed out in part 1 of the first post in this series (in Section A under the heading “Lew’s CIP-013 compliance methodology, unpacked for the first time!”), CIP-013 R1.1 requires the entity to assess all supply chain risks to BES Cyber Systems, but it doesn’t give you any sort of list (even a high-level one) of those risks. So R1.1 assumes that each entity will be quite well-read in the literature on supply chain security risks and will always be diligently searching for new risks; then they’ll put together a list of all of these risks and assess each one for inclusion (or not) in their plan.

This might be a good idea if every NERC entity with Medium or High impact BCS had security staff members who could devote a good part of every day to learning about supply chain security risks, so that they could always produce a list of the most important risks whenever required. While this might be true for some of the larger organizations, I know it’s not true for smaller ones. What are those people to do?

I’ve repeatedly expressed the hope that an industry organization like NATF or the NERC CIPC would put together this list of supply chain risks, although I’ve seen no sign of that happening yet. Another idea would be if the trade associations, including APPA, EEI, NRECA and EPSA, each put together a comprehensive list for their own members. While APPA and NRECA developed a good general discussion of supply chain security for the members of both organizations, it doesn’t contain such a list; I hope they will decide to do that in the future as well.

In the meantime, NERC entities subject to CIP-013 need to figure out on their own what their significant supply chain security risks are. Where can you go for ideas? Well, there are lots of documents and lots of ideas – and that’s the problem; there are far too many. There’s NIST 800-161 and parts of NIST 800-53, for starters. There’s the NERC/EPRI “Supply Chain Risk Assessment” document, which was issued in preliminary form in September and will be finalized in February; there’s the excellent (although too short!) document that Utilities Technology Council (UTC) put out in 2015 called “Cyber Supply Chain Risk Management for Utilities”; and there’s the APPA/NRECA paper I just mentioned. There are others as well. None of these, except for 800-161, can be considered a definitive list, though. And 800-161 is comprehensive to a fault; if any utility were to seriously try to address every risk found in that document, they would probably have to stop distributing electric power and assign the entire staff to implementing 800-161 compliance!

One drawback of all of these documents, from a CIP-013 compliance perspective, is that they don’t identify risks directly. Instead, they all describe various mitigations you can use to address those risks. This means that you need to reword these mitigations to articulate the risks behind them. To take the UTC document as an example, one of the mitigations listed is “Establish how you will want to monitor supplier adherence to requirements”. In other words, while it’s all well and good to require vendors (through contract language or other forms of commitment like a letter) to take certain steps, you need to have in place a program to regularly monitor that they’re taking those steps.

We need to ask “What is the risk for which this is a mitigation?” The answer would be something like “The risk that a vendor will not adhere to its commitments”. This is one of the risks you may want to add to your list of risks that need to be considered in your CIP-013 supply chain cyber security risk management plan. You can get a lot more by going through the documents I just listed.
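Here’s a sketch of what that rewording exercise produces: a simple mitigation-to-risk mapping. Only the first entry is the UTC example just quoted; the other two are hypothetical examples I’ve made up to show the pattern:

```python
# Rewording mitigations (what the guidance documents list) into the risks
# behind them (what your R1.1 plan needs to assess). Only the first entry
# comes from the UTC document; the rest are made-up illustrations.
mitigation_to_risk = {
    "Establish how you will monitor supplier adherence to requirements":
        "A vendor doesn't adhere to its commitments",
    "Verify the integrity and authenticity of software before installation":
        "Compromised software is delivered through a legitimate channel",
    "Restrict and monitor vendor remote access":
        "A vendor's remote access connection is misused or hijacked",
}

# The values become candidate entries in your supply chain risk list.
for mitigation, risk in mitigation_to_risk.items():
    print(f"Risk: {risk}\n  (derived from: {mitigation})")
```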

So – in the absence of a list being included in Requirement CIP-013 R1.1 itself, and in the absence of any comprehensive, industry-tailored list put out by an industry group – this is one way to list the risks you need to assess in your CIP-013 supply chain cyber security risk management plan. The main point of this effort is that you need to develop a list that comes as close as possible to covering (at least at a high level) all of the main areas of supply chain cyber risk.

But I know there’s a question hidden in every NERC CIP compliance person’s heart when I bring this point up: If I develop a comprehensive list of risks, am I going to be required by the auditor to address every one of them? In other words, if my list includes Risk X, but I decide this risk isn’t as important as the others so I won’t invest scarce funds in mitigating it, am I going to receive an NPV for not mitigating it?

And here’s where Uncle Lew comes to the rescue. He points out “You are not expected to address all areas of supply chain cyber security. You have the freedom, and the responsibility, to address those areas that pose the greatest risk to your organization and to your high and medium impact BES Cyber Systems.” There are two ways you can do this.

The first way is that you don’t even list risks in the first place that you believe are very small in your environment – e.g. the risk that a shipment of BCS hardware will be intercepted and then compromised during a hurricane emergency is very low for a utility in Wyoming, while it might be at least worth considering for a utility in South Carolina. The former utility would be justified in leaving it off its list altogether, and doesn’t need to document why it did that. Any risk that has almost zero probability doesn’t need to be considered at all – there are certainly a lot more that have much greater than zero probability!

The second way in which you can – quite legally – prune your list of risks to a manageable level is through the risk assessment process itself. R1.1 requires that you “assess” each risk. What does that mean? It means that you assign it a risk level. In my book, this means you first determine a) the likelihood that the risk will be realized, and b) its impact if it is realized. Then you combine those two measures into what I call a risk score.

Once you’ve assessed all your risks, you rank them by risk score. And guess what? You now need to mitigate the highest risks on the list. You can also mitigate some risks below these (perhaps mitigate them to a lesser degree), but in any case there will be some level on your risk list below which you won’t even bother to mitigate the risks at all, since they have lower risk scores than all of the risks above them (although you will still need to document why you didn’t mitigate those risks, by briefly explaining why the risk score is so low for each of them).
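Here’s a minimal sketch of that scoring and ranking step. To be clear, CIP-013 prescribes none of this: the 1-to-5 scales, the likelihood-times-impact formula, the threshold and all of the numbers are illustrative conventions of my own choosing (the risk names echo examples from earlier in this post):

```python
# Each risk gets a likelihood and an impact, each scored 1 (low) to 5 (high);
# the risk score is their product. All names and values are illustrative.
risks = [
    ("Vendor doesn't adhere to its security commitments",          4, 4),
    ("Compromised patch delivered through vendor update channel",  2, 5),
    ("Hardware shipment intercepted during a hurricane emergency", 1, 3),
]

scored = sorted(((name, likelihood * impact)
                 for name, likelihood, impact in risks),
                key=lambda pair: pair[1], reverse=True)

THRESHOLD = 8  # wherever your mitigation budget draws the line
for name, score in scored:
    action = "mitigate" if score >= THRESHOLD else "document why not mitigated"
    print(f"{score:>2}  {name}: {action}")
```

The point isn’t the arithmetic, which you could do on a napkin; it’s that the ranking gives you a defensible, documented basis for where you drew the line.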

Will you get into trouble for not mitigating the risks at the bottom? No. As Lew said, you need to “address those areas that pose the greatest risk to your organization and to your high and medium impact BES Cyber Systems.” The direct implication of these words is that you don’t need to address the risk areas that pose the least risk.

Why are you justified in not mitigating all of the risks listed in your supply chain cyber security risk management plan? Because no organization on this planet (or any other planet I know of) has an unlimited budget for cyber security. Everyone has limited funds, and the important thing is that you need to allocate them using a process that will mitigate the most risk possible. That process is the one I just described (at a very high level, of course).

You may notice that this is very different from the process to mitigate risk implicit in all of the other NERC standards, as well as the majority of requirements for the CIP standards. That process – a prescriptive one – tells you exactly what needs to be done to mitigate a particular risk, period. You either do that or you get your head cut off.

For example, in CIP-007 R2 you need to, every 35 days, contact the vendor (or other patch source) of every piece of software or firmware installed on every Cyber Asset within your Electronic Security Perimeter(s), to determine a) whether there is a new patch available for that software, and b) whether it is applicable to your systems. Then, 35 days later, you need to either install the patch or develop a mitigation plan for the vulnerability(ies) addressed by the patch. It doesn’t matter if a particular system isn’t routably connected to any others, or if the vendor of a particular software package has never issued a security patch in 20 years. Either way, you still need to do this every month; you can’t have two schedules – say, monthly for the most critical systems and those routably connected to them, and quarterly for all other systems. Needless to say, if CIP-007 R2 were a risk-based requirement like CIP-013 R1.1 (or CIP-010 R4 or CIP-003 R2, for that matter), you would have lots of options for mitigation, not just one.
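For contrast, here’s a hypothetical sketch of the kind of tiered schedule a risk-based version of the requirement would permit – and that the actual prescriptive requirement rules out. The tiers and intervals are entirely made up:

```python
from datetime import timedelta

# What a risk-based patch-check schedule might look like: tier the interval
# by criticality and connectivity. (Hypothetical tiers and intervals.)
RISK_BASED_CADENCE = {
    "critical or routably connected systems": timedelta(days=35),
    "all other systems": timedelta(days=90),
}

# What CIP-007 R2 actually demands: the same 35-day clock for everything.
PRESCRIPTIVE_CADENCE = {tier: timedelta(days=35) for tier in RISK_BASED_CADENCE}

for tier in RISK_BASED_CADENCE:
    print(f"{tier}: every {RISK_BASED_CADENCE[tier].days} days (risk-based) "
          f"vs every {PRESCRIPTIVE_CADENCE[tier].days} days (prescriptive)")
```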

As an aside, I do want to point out here that in CIP you never have complete freedom to choose how you will mitigate a particular risk, even when the requirement permits consideration of risk, for two reasons:

  1. The mitigation always has to be effective, as Lew pointed out a couple of years ago; and
  2. If you’re using a mitigation different from the one normally used – e.g. you’re not using patch management to mitigate the threat of unpatched software vulnerabilities, or you’re not using antivirus or whitelisting software to mitigate the threat of malware – you can rightfully be asked to justify why you took an alternative approach.

A final question you might ask about identifying risks for R1.1 is “Where do I draw the line? You said that I can draw a line through the ranked set of risks, so that all risks below that line don’t need to be mitigated at all. Of course, I would have to draw the line when I had already committed all of the funds I have budgeted for CIP-013 compliance (although I will obviously be willing to spend as much as I have budgeted for that purpose).

“But let’s suppose I don’t have a lot of funds available, and I have to draw the line after three items. This means that my plan will only require me to mitigate those three risks (even though I would definitely mitigate more if I had the funds). And let’s suppose further that the auditor believes that I left some significant risks unmitigated by drawing the line where I did. Can he or she give me an NPV for this? And will my mitigation plan for this violation require that I go back and get more funds to address these risks?”

It’s interesting that you bring this up, since I have considered this question a good deal myself. I think the answer is that it all gets down to reasonableness. If you can demonstrate to the auditor that your organization really can’t afford to spend more on supply chain cyber security risk mitigation (e.g. there was a natural disaster that was very expensive for the utility, for which there’s a serious question whether you will be able to get rate relief), they will hopefully be understanding.

Of course, if we were talking about CIP-007 R2 here and you used the argument that you didn’t fully comply with that requirement because your organization couldn’t afford it, I don’t know of any way that the auditor could be lenient, whether or not he or she wished to. This goes back to the fact that CIP-013 R1 is a risk-based requirement, while CIP-007 R2 is prescriptive. Reasonableness isn’t something an auditor is allowed to consider when auditing a prescriptive requirement (unless we’re talking about a reasonable interpretation of a particular term in the requirement, or something technical like that), while it’s inherent in the idea of a risk-based requirement. I’ll discuss this further in the next post (or maybe the fourth post, if there end up being that many) in this series.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

My offer to NERC entities of a free webinar workshop on CIP-013, described in this post, is still open! Let me know (at the email address below) if you would like to discuss that, so we can arrange a time to talk.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.