Friday, May 31, 2019

An ex-auditor makes a great comment on vendor risk



Kevin Perry, former Chief CIP Auditor of SPP RE, retired last year, but he still reads my posts (after all, what better way to spend your retirement?) and often corresponds with me about them – as he did while he was an auditor. He sent me the comment below, regarding my most recent post on CIP-013:

I look at it this way...  the contract language or other documented agreements simply show what you agreed to, and doesn’t guarantee performance.  The RFI and other procurement solicitation documentation shows you tried, even if the vendor will not agree to your requests.  But what you really need to focus on is managing your own risk and not assigning it to the vendor.  What can you do to mitigate vendor risk, as opposed to what will you presume the vendor is doing to mitigate your risk?  If you approach the issue with an assumption that the vendor will fail, then your mitigation will be better than if you assume the vendor has your back.  It is not much different than network security between two companies.  You mitigate risk through mutual distrust...you assume something bad will get on your partner’s network and thus you build your own defenses at your perimeter.

I couldn’t have said it any better! And I would certainly have taken a lot longer to say it…


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Tuesday, May 28, 2019

Lew Folkerth’s latest article on CIP-013, part II



Three weeks ago, I wrote a post on Lew Folkerth’s latest article (i.e. his third) on supply chain cybersecurity risk management/CIP-013. When I set out to write it, I thought I could cover the article in a single post. This was silly, since for each of the first two articles in Lew’s series, I’ve written two posts. This one turned out to be the same, so here is my eagerly-awaited second post on Lew’s third article.

I want to point out that, in the first post, I gave directions for downloading Lew’s third article from the RF site, and said it would be a day or two before you’d be able to do that. Instead, it turned out to be a few weeks (Lew had been overly optimistic when he told me it would be up that soon; the process for putting it up includes legal review and…well, you know how that can be sometimes). In any case, you can download the article now. It is item 31 on the list you’ll see (i.e. the last one).

Lew did point out in this article that it won’t be his last on the subject, since there will be a fourth (presumably in the next issue of RF’s newsletter, which will be out in June). That one will be on the new versions of CIP-005 and CIP-010, which will come into effect when CIP-013 does (7/1/20, in case you don’t have that tattooed on your arm yet).

I prefaced the first post by saying that I think what Lew said in this article is all very good, but I do have some disagreements. I went on to list four of these disagreements, although I admitted that the last of them was really a criticism of the drafting team for being unclear. Regarding the last one, my interpretation of R1 differs from Lew’s, but neither of us can be said to be right or wrong, given the ambiguity in the requirement itself (although if you really want to go to the root of the problem, you need to blame FERC. They very unrealistically gave NERC only one year to a) develop the new standard; b) submit it to a vote by the NERC ballot body multiple times; c) get it approved by the NERC Board of Trustees; and finally d) submit it to them for their approval. The drafting team accomplished this, but they naturally didn’t have much time to ask themselves whether the wording was completely clear. Of course, once NERC submitted CIP-013 to FERC, the Commission then took 13 months just to approve it! So much for the big rush…).

My next disagreement with Lew is also at heart a complaint about the drafting team, since this is about another question for which there’s no right or wrong answer: What are the primary elements that need to be in your supply chain cyber security risk management plan, which is mandated by R1.1? Lew’s article lists three elements (which he calls “required processes”):

  1. “Planning for procuring and installing” (it would be clearer if this were followed by something like “systems in scope” – which are currently BES Cyber Systems, but will include EACMS and PACS in a couple of years. There also seems to be a big push by FERC at the moment to include Low impact BCS in some way; it seems they’re being driven by Congress in this matter. However, the NERC ballot body will first need to approve a SAR, and before one can even be drawn up, NERC wants to submit a Data Request to the membership on Low BCS and analyze the results. So this isn’t likely to happen any time soon).
  2. “Planning for transitions” (i.e. transitions between vendors. This is specifically referred to in R1.1).
  3. “Procuring BES Cyber Systems”

Lew says in his article that he thinks the first item (i.e. “Planning”) is the goal of R1.1 and the third (“Procurement”) is the goal of R1.2. In my first post, I explained one reason why I think he’s wrong on this: R1.2 doesn’t have any special purpose. It’s there simply because FERC, at random places in their Order 829, said that these six items should all be included in the new standard; the drafting team decided to group them all together in R1.2. My interpretation of the purpose of R1.1 is that it requires identification and assessment of five types of supply chain cybersecurity risks, which I’ll list in a moment. My interpretation of the purpose of R1.2 is that it simply lists six mitigations that must be included in the plan, but they are far from being the only mitigations in your plan! As Lew makes clear in his article, which concludes with a list of 13 important risks that he thinks NERC entities should consider in their plans, there are a lot of other important risks and mitigations to consider – not just the six (actually eight, since two of these have two parts) in R1.2.

But there’s another reason why I don’t think R1.1 is about planning and R1.2 is about procuring. Lew forgot that both R1.1 and R1.2 are simply callouts from R1 itself, which reads “Each Responsible Entity shall develop one or more documented supply chain cyber security risk management plan(s) for high and medium impact BES Cyber Systems. The plan(s) shall include…”.

In other words, your R1 plan must include two things. The first is a process for identifying and assessing supply chain risks to the BES (R1.1), while the second is the six specific items (mitigations) that FERC said must be included in your plan (R1.2). I think it’s a mistake to read anything more than this into R1.1 and R1.2.

As I said earlier, I believe there are five areas of supply chain security risk that need to be addressed in your R1 plan. They are all mentioned in R1.1, but I’ll admit that a couple of them are very well hidden:

  1. Procurement of BCS hardware and software. This is of course the one that everyone talks about; in fact, I’m sure most people now think that CIP-013 is all about this one area of risk. I agree it’s by far the most important of the five areas, but the entity needs to address the other four areas as well – although, since this is a risk-based standard, there’s no obligation for the entity to devote the same amount of effort to each of these five areas, if they don’t think the other four pose the same degree of risk as this one.
  2. Procurement of BCS services. R1.1 says your plan must identify and assess risks to the BES arising from “vendor products or services”. I’m surprised Lew doesn’t specifically mention services in his three required processes, but he certainly does so elsewhere in his article.
  3. Installation of BCS hardware and software. FERC made it very clear in Order 829 that they were almost as worried about insecure installation of BCS as they were about insecure procurement of them in the first place (they had the first Ukraine attacks in mind, which had happened seven months earlier. In those, a big contributing factor was that the HMI’s were installed directly on the IT network, which wouldn’t have happened had there been a proper assessment of installation risks). So R1.1 specifically states the entity should assess risks of “procuring and installing vendor equipment and software”.
  4. Use of BCS services. This isn’t specifically stated in R1.1, but I think it’s directly implied by the words “procuring and installing”. You don’t “install” services, but you do use them. And I think it’s clear that FERC wanted this, since they mandated three items in R1.2 that involve risks from vendor services after they have been procured. R1.2.3 deals with vendor service employees who leave the company. R1.2.5 deals with patches provided by vendors, which is a service (although not one that most software vendors charge for). And R1.2.6 deals with vendor remote access, which is of course also a post-procurement vendor service. So these are three examples of risks arising from the use of vendor services.
  5. Transition between vendors. This is explicitly called out in R1.1, and it’s also on Lew’s list.

This next item isn’t a disagreement with Lew at all: I was very interested that he made a point of saying that, even though you have lots of freedom in drawing up your risk management plan in R1.1, when you get to R2, that freedom goes away. You must implement your plan as written, and if you don’t, you’ll potentially be in violation of R2. So, while you should certainly do your best to identify, assess and mitigate risks in R1.1 and R1.2, you do need to be careful not to promise to mitigate more risks than you will be able to handle. For this reason, I’ve advised my clients to be conservative in committing to mitigate risks in their R1.1 plans - e.g. they might just commit to mitigate those they rank as high, on a low/medium/high scale. If it turns out later that they decide there are other risks they should mitigate as well, they can always add them to the plan and not risk being in violation.
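
To make this concrete, here is a minimal sketch of the kind of ranked risk list I have in mind, written as a little Python since a list like this is really just data. The risk descriptions and rankings are all hypothetical, and nothing in CIP-013 prescribes (or even mentions) any particular format:

```python
# Hypothetical ranked risk list: commit in the R1 plan only to mitigating
# the risks ranked "high". All descriptions and rankings are invented.

risks = [
    # (risk description, assessed rank)
    ("Vendor doesn't disclose vulnerabilities it knows about", "high"),
    ("Vendor staff retain remote access after termination",    "high"),
    ("Counterfeit hardware introduced through a reseller",     "medium"),
    ("Vendor goes out of business mid-contract",                "low"),
]

# Only the high-ranked risks become commitments in the plan; medium and low
# risks can be added later without creating compliance exposure under R2.
committed = [desc for desc, rank in risks if rank == "high"]

for desc in committed:
    print("Plan commits to mitigate:", desc)
```

The point of the cutoff is that everything above it is a commitment you will be audited against in R2, while everything below it remains available to add to the plan later.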

Side note: Someone who hasn’t already been put to sleep by this post (I’m sure that applies to one or two people at least, maybe three or four) will jump up and yell (loud enough for me to hear in Chicago) “You inserted the word ‘mitigate’! But that isn’t in R1.1!” And that person would be absolutely right. R1.1 doesn’t say anything at all about having to mitigate the risks you “identify and assess”. It’s as if the drafting team were saying “All we care about is that you know what your risks are. But no matter how big and hairy they are, rest assured that we don’t expect you to do anything about them. Once you’ve identified and assessed your supply chain cyber risks, you can forget all about supply chain security, throw that list in the trash can, and go back to your normal activities (perhaps CIP-002 through CIP-011 compliance, or perhaps just lying on the beach).”

However, this would make absolutely no sense. FERC didn’t order NERC to develop a supply chain standard just because they thought it would be a really interesting intellectual exercise for NERC entities to identify their supply chain risks; they did it because they thought (and still think) that supply chain is one of the most critical sources of cybersecurity risk worldwide and across all industries (although especially for the power industry, with the Russian attacks being exhibits A, B and C), and those risks need to be mitigated. They even ordered six specific mitigations be included in the supply chain security plans, which the SDT collected into R1.2. And literally every document that’s been written about CIP-013 (e.g. the SDT’s own Implementation Guidance) focuses on mitigating risks, not just on identifying risks in the first place.

So even though the word “mitigate” isn’t in R1.1, I definitely think you should read it in after the words “identify and assess” – i.e. the entity’s plan needs to identify, assess and then mitigate supply chain cyber risks to the BES, not just identify and assess them. I don’t think your CIP-013 plan will fare very well at audit if it just lists risks but says nothing about mitigating them!

Lew also points out that the advice to implement your plan exactly as written doesn’t apply in the case of prescriptive requirements, like most of those in CIP-002 through CIP-011. He gives this illustration: “…if your personnel risk assessment process created by CIP-004-6 Requirement R3 says that you will perform personnel risk assessments every five years, but you miss that target by a year for some personnel, then that should not be a violation as you are still within the timeframe prescribed by the Standard.” (Of course, that timeframe is seven years.)

However, since your CIP-013 R1 plan is itself what you are required to implement in R2, this means that any significant shortfall in implementing it might be considered a possible violation of R2. My guess is a moderate shortfall won’t be considered a possible violation, but the auditors could issue an Area of Concern, asking you to fix this problem by the next audit. Either way, it’s better to simply do for R2 exactly what you said you’d do in R1. And the converse of this is that you shouldn’t commit to doing more in your R1 plan than you are sure you can accomplish in R2. I think this is the biggest source of potential compliance risk in CIP-013-1.

Near the end of his discussion of R2, Lew says “Both contract language and vendor performance to a contract are explicitly taken out of scope for these Requirements by the Note to Requirement R2. I recommend that you do not rely on contract language to demonstrate your implementation of this Requirement.” I both agree and disagree with these statements, but that takes a bit of explaining:

  1. I disagree that contract language and vendor performance (although that should really be “non-performance”. There’s definitely no risk that you’ll be held in violation if your vendor performs what they said they’d do!) are “taken out of scope” by the note to R2. You should still try your best to get a vendor to agree to contract language that you request, and you should still try to get them to do what they said they’d do.
  2. The note in R2 is really saying that neither the actual contract terms and conditions, nor the fact that a vendor didn’t do what they promised, can be the subject of required evidence for compliance; and if your auditor asks you to provide this evidence, you are within your rights to refuse.
  3. I doubt that Lew would disagree with the above two statements, but I think he’d still be missing the real point: It doesn’t matter how you document the fact that your vendor has agreed to do something. They might do this in contract language, they might give you a letter or an email, or they might just state it verbally. The big question is, did you verify whether they kept their promise or not?
  4. You do need to have evidence about what you did to verify that the vendor did what they promised. Maybe it will be a letter or emails you’ve saved. Maybe it will be notes regarding phone calls. Maybe it will be documentary evidence, like screenshots showing that the vendor digitally signed their software patches, or a vendor’s documentation of their procedures for system-to-system access to your Medium BCS. The best evidence will vary according to the particular promise the vendor made, but there will always be some evidence you can gather (see the sketch after this list).
  5. Of course, if the vendor simply refuses to let you know whether they’ve kept their promises, you’re certainly not in violation because of that. But you very well may be in violation if you haven’t even tried to verify whether or not they kept their promises in the first place.
  6. Another point that goes beyond what Lew said: What happens if the vendor refuses to promise to do anything? Do you just throw up your hands and say “Oh well, we tried”, and move on to the next challenge? No. If you’ve identified a risk as being important enough to mitigate, you have to take some steps to mitigate it. If the vendor refuses to notify you when they’ve terminated someone who had access to your BCS, you can deny their employees any unescorted physical access to your BCS – which will raise costs for the vendor (and also for you, of course). If the vendor refuses to promise to improve their security practices in their software development environment, you can put their systems on the bench and scan and test them for a month or two before you install them – or better yet, you can stop buying that vendor’s software! There’s always something you can do to mitigate the risk in question, even absent any cooperation from the vendor.
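
Since I keep harping on evidence of verification, here is a hypothetical sketch of the kind of record I mean (a spreadsheet would serve just as well; the Python is only for concreteness). All names and entries are invented. The point is that the promise, where it was documented, and the verification evidence are captured together:

```python
# Hypothetical record tying a vendor promise to the evidence that you
# tried to verify it. How the promise was documented doesn't matter;
# whether you verified it does.

from dataclasses import dataclass, field

@dataclass
class VendorPromise:
    vendor: str
    promise: str
    documented_in: str                 # contract clause, email, phone call...
    verification_evidence: list = field(default_factory=list)

    def verified(self) -> bool:
        return len(self.verification_evidence) > 0

p = VendorPromise(
    vendor="ExampleVendor",            # invented name
    promise="Digitally sign all software patches",
    documented_in="Email from vendor PM, 2019-04-02",
)
p.verification_evidence.append("Screenshot: signature check on patch 4.1.2")
print(p.vendor, "- promise verified:", p.verified())
```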

But I do agree with Lew that there is far too much emphasis on contract language as being the best way to mitigate supply chain risks. For one thing, contract language just isn’t a good option for some entities (especially Federal government ones), who have little to no control over contract terms. For another, how many of you have a contract directly with Cisco? I would guess the answer is few to none, since almost nobody buys their Cisco gear directly from Cisco. You buy it through a dealer or integrator, and you probably have some sort of contract directly with them. But the dealer isn’t going to make any commitments on behalf of Cisco – and it’s Cisco that needs to give you a means of, for example, verifying patch integrity and authenticity, not the dealer.

A third example: if you ever buy something from Best Buy or eBay, I can promise that you’re not signing a contract guaranteeing the manufacturer has certain cybersecurity controls in place. (There is a good discussion of contract language in the white paper on “Vendor Risk Management Lifecycle”, currently being drafted by the Supply Chain Working Group. That paper, along with the others the SCWG is drafting, will be presented to the CIPC at their meeting in Orlando next week, and sometime after that, if approved by the CIPC, will be available on NERC’s website. If you want a preview of the argument in that document, you can send me an email.)

So I was quite happy to see that Lew doesn’t subscribe to this mistaken view of contract language as the be-all-and-end-all of CIP-013 compliance. But I’m sorry to say that his reasons are wrong. If an entity wants to rely on contract language as their preferred means of documenting the fact that their vendors have agreed to implement certain security controls, that’s fine. Of course, the auditor won’t be able to compel the entity to show the contract itself, but the evidence that’s really needed is the evidence that the entity made some effort to verify whether the vendor was keeping its promises – no matter whether those promises were inscribed on golden tablets and stored in Fort Knox, or written in disappearing ink on a scrap of parchment in a bottle buried on the beach of a desert island. That is the evidence that’s important.

This concludes my analysis of Lew’s third article on supply chain cyber risk management and CIP-013 compliance. As soon as the fourth article on this topic comes out in June, you can be sure I’ll have something to say about that as well!

And by the way, Lew, I hope this won’t be the end of your articles on CIP-013, since there’s much more to be said. I know what, let’s make a deal…You can stop writing your articles when I stop writing blog posts about CIP-013. I think that will be around 2030. By that time, I figure everything will have been said that needs to be said.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Thursday, May 23, 2019

Just in case you thought the Russians were our friends…



Blake Sobczak of E&E News struck again yesterday, with another great article that points to the heart of the biggest cybersecurity threat faced by the power industry today: namely, the ongoing Russian campaign to penetrate the grid and plant malware in it. And as usual, he didn’t have to jump up and down to make his point – he merely quoted from an important government official.

This article unfortunately isn’t available outside of the paywall, but I’m of course free to excerpt from it. Here is essentially the first half of the article (which is quite short – something that’s unusual for Blake’s articles, as well as my posts. Both of us understand the importance of not letting a foolish concern with conciseness get in the way of saying what needs to be said!):

Russian hackers pose a greater threat to U.S. critical infrastructure than their Chinese counterparts, a former intelligence official warned water utility executives in Washington yesterday.

"When I think about the Chinese and the Russians, they're both dangerous: Both of those are in conflict with us," said Chris Inglis, former deputy director of the National Security Agency. "But the Russians are far more dangerous because they mean to do us harm. Only by doing us harm can they achieve their end purposes."

Beijing poses a major cyberespionage threat to U.S. companies but, in contrast to Russia's government, can be more effectively deterred based on its close ties to the American economy, Inglis said at a cybersecurity symposium hosted by the National Association of Water Companies.

"Why are the Russians, as we speak, managing 200,000 implants in U.S. critical infrastructure — malware, which has no purpose to be there for any legitimate intelligence reason?" asked Inglis, now managing director at Paladin Capital Group and a visiting professor at the U.S. Naval Academy. "Probably as a signal to us to say: We can affect you as much as your sanctions can affect us."

I was actually surprised to see this, since everything else I’ve seen or heard from the Federal government recently seems to downplay a) the threat posed by Russia’s ongoing attacks on the US grid and especially b) the success the Russians have had so far (of course, it’s probably significant that Mr. Inglis isn’t currently part of the government. The article mentions that he may lead the NSA in the near future, and if he does, I hope he doesn’t catch the strange bug that seems to have infected a lot of his former colleagues on the cyber ramparts of the US economy, which causes sudden muteness when asked about Russian attacks on the grid. I believe the medical community is racing to find the cause of this syndrome). He says two important things:

  1. The Russians’ purpose is clearly malign – to have the capability to cause significant disruption to our society (to say nothing of disabling US military bases - as described in a January article in the Wall Street Journal), and perhaps even to cause a cascading power outage that could immobilize a lot of the country; and
  2. They have already had a significant amount of success, evidenced by the fact that they are currently managing (i.e. the devices are already in place and connected to C&C servers) 200,000 “implants in U.S. critical infrastructure”, which presumably includes other CI industries like oil and natural gas pipelines, water treatment plants, oil refineries, and petrochemical plants, besides power facilities. 

I’m also very impressed with the fact that Mr. Inglis gives short shrift to the popular (again, in current Federal government circles) idea that the Chinese and Russian attacks on US critical infrastructure are essentially two peas in a pod. Here’s the quote again: "But the Russians are far more dangerous because they mean to do us harm. Only by doing us harm can they achieve their end purposes." Amen, brother. And he’s not the only person saying this: the Russians themselves are!

A paragraph after the above section, the article says “Energy and water utilities' interest in Chinese and Russian cyberwarfare capabilities has spiked since January, when U.S. intelligence director Dan Coats assessed that either country could disrupt U.S. critical infrastructure by cutting off a gas pipeline or temporarily disabling part of the power grid.”

You know, I’d almost forgotten about that! The Director of National Intelligence, as well as the heads of the FBI and CIA, went before the Senate Intelligence Committee in January to discuss their Worldwide Threat Assessment for 2019, which said “Moscow is now staging cyberattack assets to allow it to disrupt or damage U.S. civilian and military infrastructure during a crisis.”

In normal times, one would have expected this story to set off a frenzy of activity in the Federal government and the power industry to investigate what actually happened, so that the malware could be identified and rooted out, and so that defenses could be beefed up to prevent further penetration. But these are evidently not normal times, since despite my complaints (or perhaps because of them), there is no visible movement on the part of anybody with responsibility for grid security to investigate what the report says. This is in stark contrast to the Ukrainian attacks in 2015, which set off a firestorm of investigations, reports, classified and unclassified briefings, etc. Why am I concerned about this, you ask? After all, why would I expect the US government to treat the US and Ukrainian grids equally? I didn’t expect that, of course. But I kinda thought...you know…that they would be more concerned with the US grid than the Ukrainian one. Silly me.

So now we have Mr. Inglis putting a number on the problem, saying there are 200,000 implants already in place. This is about a thousand times more than I would have suspected. This will set off a real investigation, right?...Ya gotta be kidding.

To quote the ancient Greeks, “Those whom the gods wish to destroy, they first make mad.”

  
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Tuesday, May 21, 2019

Our RSA panel – recording available!



I must confess that, at least two months after it became available, I only today listened to the recording of the panel that I was on at the RSA conference in early March this year. Our title was “Supply chain security for critical energy infrastructure”. We had all agreed afterwards that it went very well, and guess what…I can now confirm it went very well!

The panel members were the same as on the panel I was on in 2018: Dr. Art Conklin of the University of Houston (O&G security expert), Marc Sachs, former NERC CISO and head of the E-ISAC, and me. What was different was that our moderator in 2018, Mark Weatherford, had to bow out; he was replaced this year by Sharla Artz, VP of Government Affairs, Policy & Cybersecurity of UTC.

Last year, we were told by the conference that some reviews pointed out that we agreed with each other too much, which made the session kind of boring. This year, even though we didn’t consciously try to pick fights with each other, there was lots of disagreement (friendly, of course). But more importantly, I think the content is very good, both in the panelists’ discussions and in the Q&A afterwards (where we had some really good questions). You may want to listen to the recording: there are a lot of good points about supply chain security, CIP-013 and cyber regulation in general.

Plus a lot of humor. Marc had had neck surgery recently because – as he said – he’d jumped out of too many airplanes when he was in the service, so he was wearing a neck brace. There were various jokes that the real story was that we’d gotten into a fight at a bar the night before, when we met to discuss the panel (that’s patently not true. We didn’t meet in a bar) - although I helpfully pointed out to Marc afterwards that the next time he jumps out of an airplane, he should wear a parachute! He thanked me for this good advice, but said his doctor says no more jumping out of airplanes.

My favorite part (which I didn't remember until I listened to the recording) was around 15:30 in the recording, when Art told a great story about risk management. He said that the security people at DoD had wanted to spend a lot of money (and since we're talking about DoD, I assume this is a whole lot of money) on some sort of widget that would solve some security problems for one part of the organization.

When they went to the higher levels of DoD to get the funding, they were asked whether there was some way in which DoD could spend the same amount of money - or even less - and mitigate a greater amount of risk. The security people answered "Sure, we could upgrade the whole Department to Windows 10 and finally get rid of all the old versions that are hanging around, causing security nightmares." But they were told "Oh no, we can't do that. It would be too hard."

So DoD went with the widget solution and spent more money mitigating less overall risk, because it was easier for them to do. This is a great example of why any security program should start with a risk assessment, and focus resources on the threats that pose the most risk; it is only by doing this that the entity can be assured of getting the most bang for their buck, in terms of total risk mitigated. And guess what! Not only is this the best way to comply with CIP-013, you're actually required to do this by R1.1!
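
For those who like to see the arithmetic, here is a back-of-the-envelope sketch of the DoD story in Python. Every number is invented by me; the ranking logic, risk mitigated per dollar spent, is the whole point:

```python
# Rank candidate mitigations by risk mitigated per dollar ("bang for the buck").
# The figures below are pure inventions for illustration.

candidates = [
    # (mitigation, risk mitigated in arbitrary units, cost in dollars)
    ("Security widget for one part of the organization",  20,  5_000_000),
    ("Upgrade the whole department to Windows 10",        200, 15_000_000),
]

# Sort with the highest risk-per-dollar first
ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)

for name, risk, cost in ranked:
    print(f"{name}: {risk / cost * 1_000_000:.1f} risk units mitigated per $1M")
```

Run this and the Windows 10 upgrade comes out well ahead of the widget – which was exactly Art’s point.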

All four of us are hoping we’ll be chosen to do the panel again next year, and that we’ll all be able to participate again. I think the group has developed a great collaboration style, so that the discussion is both very entertaining and very informative.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Wednesday, May 15, 2019

A great paper on software security



The NERC CIPC Supply Chain Working Group is now preparing six white papers on supply chain cyber security risk management, which will be presented to the CIPC members (and anyone else who wants to attend) before the next CIPC meeting, on the morning of June 4 in Orlando. As I anticipated when I joined the group, the white papers will be nice to read in their final versions, but the greatest benefit of participating in developing them is the meetings themselves, at which some very knowledgeable people (along with me!) have very good discussions that usually go far beyond the subject matter of the white papers.

The papers are constrained to be around three pages long, meaning their purpose is mostly to point out issues in general terms, rather than to solve them. However, I can promise they’ll all be excellent. The SCWG will be meeting on the afternoon of June 3 to discuss the way forward, including whether it should continue developing more papers, since – surprisingly enough – there are far more than six topics that can be discussed in the area of supply chain security for the BES!

One of the drafting teams, which is developing the white paper on risks associated with open source software, is led by George Masters of SEL. Last week he said he’d just read a very good paper on software security and recommended we all take a look at it. I did, and I completely agree with him that it’s excellent.

Of course, you can decide for yourself what you think of the paper, but I highly recommend you at least look at it. You’ll notice that the authors are quite critical of the software industry in general. In my opinion, they overstate their case in implying that software consumers are being duped by software developers who don’t make much effort to find vulnerabilities in their software before shipping it out, since they can always find them later and patch them.

Even if it’s really that bad, I – speaking as a software consumer who has approved many EULA’s that I haven’t even looked at – don’t at all feel I’m being misled. I know quite well that there will be various vulnerabilities in the software, and new ones will be found all the time. I also know that a bigger diligence effort by the developers would probably make a big dent in the problem. However, I also know that this would probably have a sizable impact on the price of the software. Am I willing to make that trade-off? Fortunately, I don’t have to decide: the market as a whole has already made the choice for cheaper software with more potential vulnerabilities. Consumers can either buy it or not. But they’re not being totally misled in making that choice.

I also feel the authors are exaggerating the duplicity of the developers. Take Microsoft. I can remember in the 90’s and early 00’s when their security level was somewhere around zero, if not less – Windows 95 comes to mind. But they made a very concerted effort to change that, and I don’t feel at all insecure using Windows 10.

In any case, this is definitely a paper you should look at, and hopefully read. 


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Tuesday, May 14, 2019

What we know about the grid attack, and more importantly what we don’t



On April 30, I wrote a post pointing to a story by Blake Sobczak of E&E News about what seems to be the first reported cyberattack that affected power grid operations (as opposed to, say, an attack on some utility’s web site). I updated this with two more posts that week, which you can find here and here; the second of these two posts is based on a new article by Blake.

Even though there were still big questions about the attack and Blake came out with a new article last week, I haven’t written about it since then. This isn’t because there was nothing more to say, but because I just didn’t have time to say it (and because there had been another momentous – at least in my book – event in the power industry earlier, namely the publication by Lew Folkerth of RF of a new article on CIP-013. I wrote about that in my only post last week).

In fact, after my third post on the attack I had a very good discussion online with four people who are very knowledgeable about securing the grid against cyber attacks. All are longtime industry cyber professionals; three work for power market participants, while the fourth works for an important industry equipment vendor. One is Trevor Hodges of Tesla Energy, although the other three prefer not to be named. These discussions, along with Blake’s most recent article, allow me now to summarize what I think has been either stated publicly so far, or can reasonably be inferred from those public statements. This certainly doesn’t answer all questions about the attack, but it at least lets the remaining questions be much more targeted.

First, my three posts two weeks ago were all focused on who could be the entity that was the victim of this attack (and if multiple entities were attacked, there would presumably have been multiple OE-417 reports). I was asking this not because I consider it vital to know this information for its own sake, but because there were still big questions about what kinds of assets were actually affected by this attack (or more specifically, what kinds of assets had their communications affected by the attack) – knowing the entity would help to understand the attack itself.

It had seemed to me and others that Peak Reliability, the Reliability Coordinator for most of the Western US, including the four counties in California, Wyoming and Utah that were specifically mentioned in the attack, was the logical victim of the attack. But that was ruled out in Blake’s second article, which quoted Peak as denying this (and there’s no reason to believe they wouldn’t tell the truth about this).

That meant that the victim must have been either a utility or an independent power producer (with their own control center). Blake’s second article quotes a DoE official as saying the attack was on an “electric utility”, which rules out IPP’s, as well as Peak. But since there’s no utility whose service area covers all four affected counties, this means it is probably a utility whose control center – besides being a Balancing Authority for one or more regions – controls generation assets in those counties.

The only problem with this theory is that there’s no obvious utility that meets this definition. One of my four friends did a little checking in public reports and identified a utility that controls generation in three of the four counties – the fourth being Los Angeles County – but none that controlled generation in all four. He forwarded the name to me, but that utility has also denied that they were the one, so it looks like we’re at a dead end on the utility name (although I wouldn’t reveal it anyway, unless it were already public).

So maybe the utility was just one of the nodes attacked, and through the network the attackers went after the generators – but the latter aren’t owned by the utility. Whatever. But we now have at least enough information to put together a rough scenario for what went on.

The attack seems to have been initially against the control center of a large Western utility (which probably wasn’t located in one of the four counties reported to DoE for the actual event). And the NERC E-ISAC has reported that the attack was on Cisco ASA firewalls (or routers), using a vulnerability that had been reported in June 2018.  This attack can cause a “Denial of Service” condition, which in this case meant the device rebooted, cutting off communications to (presumably) the three or four generating plants (the words DoS were initially thought by some, including me, to mean that the attack itself was a DoS, or even a DDoS, but it seems clear now that the E-ISAC just meant that the attack caused the ASA’s to reboot).

As had been stated from the beginning (in the OE-417 report), there was no loss of generation, meaning none of the plants went down, even though they lost communications with their BA. Loss of control center communications is a very common occurrence in OE-417 reports, but the difference in this case is that the cause of this loss was a cyber attack.

This is pretty much what has been publicly stated. Here are the questions that I feel should be answered, so the industry can file this in the “OMG, this is really serious. We have to get on this right away!” file, the “S__t happens. Get over it” file, or – most likely – something in between.

  1. It seems the attack came from the internet, but then the question becomes why the ASA’s were directly accessible from the internet in the first place, since the utility in question could have had their own private network and wouldn’t have had to be exposed like that.
  2. However, many control centers rely on public internet-based VPNs for communication with generators, usually because of the big cost savings compared to private carrier solutions. If so, the VPN endpoints could have been the ASAs that were attacked. And if this is the case, utilities need to know this, since they need to update their risk estimates for using VPNs for control center communications.
  3. But there’s also the possibility that a private VPN to the generating plants was cut off, because the same router or firewall was used for internet communications (presumably to the utility’s IT network, since a control center should never be directly connected to the internet) – and that was where the attack came from. In other words, the attack came in from the internet and only brought the communications with the plants down because the router happened to be on the same physical device; this seems pretty hard to believe, since it’s hard to see a utility springing for an expensive private network, then cheaping out by not buying a separate ASA for their internet connection. In any case, any entity doing this should carefully consider the risks of continuing to use the same device for both IT and OT purposes, especially when the IT network involves a direct internet connection.
  4. And there’s at least one other possibility for this: The ASA was being used for interactive remote access (through VPN on the public internet) to systems in the control center, as well as for communications with the plants. Again, the same risk assessment is needed if any utility is doing this now.
  5. The E-ISAC report states that the communications outage was only five minutes, yet the OE-417 states that the attack continued for nine hours. What does this mean? Did someone keep forcing the ASA to reboot – and sever the VPN each time – over a period of nine hours? Maybe the attacker had reached the end of his shift, and the next guy didn’t start until eight hours later? If the problem was solved in five minutes, it’s hard to understand why the utility would say it lasted for nine hours.
  6. What about the motivation of the attacker? One of my friends stated that it couldn’t have been someone deliberately targeting damage to the grid itself, since simply shutting off communications with a few power plants – which by itself definitely can’t bring them down – wouldn’t affect the grid. In fact, bringing communications down would almost never result in the plant itself going down, although it would mean that the people in the plant would have to make sure they had other communications active with the control center (which they need to have anyway, of course).
  7. However, one of my other friends pointed out that the attacker could have been trying to use the attack to gain information from the ASA, not to actually cause it to reboot; the CVE detail says “It is also possible on certain software releases that the ASA will not reload, but an attacker could view sensitive system information without authentication by using directory traversal techniques.” In other words, the attacker(s) were doing reconnaissance for a much more sophisticated attack at a later time, but they were set back when the ASA unexpectedly rebooted. It’s not at all certain that this wasn’t a targeted attack, even a sophisticated one.

The bottom line here is that much more information is needed about this attack, so that electric power entities can learn what they need to do to prevent this from happening in the future, and to re-assess risks to BES communications (which of course aren’t covered by NERC CIP currently). I certainly hope that the E-ISAC will report their findings when they’ve finished their analysis.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.


Sunday, May 5, 2019

Lew Folkerth’s new article on CIP-013, part I



Lew Folkerth of RF has followed up his two previous articles on CIP-013 (which I wrote about in four posts, found here, here, here, and here) with a third one in the RF newsletter that came out a week or two ago. You can read the article by downloading the newsletter and clicking on The Lighthouse in the bar on the left, or you can go here to RF’s new CIP web page and click the plus sign after “Standards and Compliance”.

When you do that, you will see a set of PDF’s of all 30 of Lew’s Lighthouse articles on CIP and cyber security since 2014. Having the articles separate from the rest of the newsletter is a huge help, since a) in order to see one article, you don’t have to download a 13 MB or so file of the entire newsletter, just a <1 MB file of Lew’s article; and b) since he’s titled each article, you don’t have to download every newsletter just to find out what he’s written about. I recommend you look at the whole list and download the ones that interest you; they’re all good, I can assure you.

Articles 29 and 30 are the first two that Lew wrote on CIP-013. As of today (Sunday), the third article isn’t up yet, but Lew thinks it will be up in a day or two. If you don’t want to wait that long, you can download the newsletter itself.

I noticed that rather than being billed as the third article on CIP-013, the article is called “CIP Supply Chain Cyber Security Requirements in Depth (Part 1 of 2)” – so there will be another one after this (I assume it will be in the newsletter issued in June). Of course, this is great news for both Lew and me, since we’re each paid $1 a word to write about CIP-013, and I easily write three words for every one of his – so both of us may get rich just writing about CIP-013!

Now to the article itself. As always with Lew’s articles, it’s very good – and I want to emphasize that Lew is the only person in the entire NERC ERO who actually publishes articles about CIP compliance, which makes his articles especially valuable. I recommend you read it carefully if you’re involved in CIP-013 compliance. In contrast to his first two articles, which focused more on supply chain security risk management in general than on CIP-013 in particular, this article (and presumably its successor) is laser-focused on complying with CIP-013. He goes through it requirement part by requirement part.

However, I do have some differences with Lew on various points. I’ll list them in the order they appear in the article.

My first objection applies to his paragraph on the first page of the article (p. 8 of the newsletter), which states that

“You will have fulfilled the security objectives of CIP-013-1:
·         If you integrate vendor and product security considerations into your vendor selection process,
·         If your future acquisition contracts work to mitigate the cyber security risks posed by your selected vendor, and
·         If you manage the relationship with each of your vendors, present and future, to mitigate risks you identify as applicable to the vendor.”

I really don’t understand this. Yes, these are three important objectives of CIP-013, but there are many more – and you can find a good discussion of all of them if you start reading at the next paragraph about R1 and continue through the end of the article on the third page, with the discussion of R3. Everything in the rest of the article is a “security objective of CIP-013-1”. I suggest you use this wider list as your statement of the objectives for CIP-013.

Another point from the first page: right below the section I just quoted, Lew states that it would be a good idea to include Medium and High impact EACMS, along with BES Cyber Systems, in your scope for CIP-013; he says this because FERC ordered that EACMS be included in the next version when it approved CIP-013 last October. However, in a paper on “Cyber Security Supply Chain Risks” that NERC is currently drafting, they say they would like to see PACS included in the next version as well, meaning it’s highly likely the new SAR will include both EACMS and PACS. So rather than wait the two years or so before the next version becomes mandatory, I recommend you include EACMS and PACS in your CIP-013 program now, and not have to worry about retrofitting them later on.

At the top of the second page of the article (p. 9 in the newsletter), Lew says “Addressing your identified risks will probably include some additions to the terms of any contract you use for acquiring BES Cyber Systems and systems or services related to BES Cyber Systems.” He goes on to list two documents on procurement language, the first from DHS in 2009, the second from DoE in 2014 (and it was really the NERC CIPC that developed this document. Ed Goff of Progress Energy led the development for the CIPC and did an excellent job. I regard his departure for the banking industry a few years ago as a significant loss to grid cyber security in general. After all, what do banks have to protect that’s as important as electric power? Oh right…money. But where would banks be without power? Answer: nowhere). I would also add the EEI procurement language document released in March.

However, I have a problem with recommending any pre-drawn set of procurement language provisions. Simply requiring a vendor to accept boilerplate language in a contract is putting the cart before the horse, because the right way to address supply chain security (and CIP-013 compliance) is to first identify the risks you want to mitigate. There’s close to an infinite number of risks, so you certainly can’t mitigate any but a small number of them. You should identify the biggest risks that apply to your organization’s BES assets (remember, CIP-013 is all about risks to the BES, not to your organization), and mitigate those.

This is why Lew says at least a couple times in this article (and has said in his previous articles) that the NERC entity needs to “identify and assess” supply chain risks. It’s also why Lew includes a great sidebar in blue on the last two pages of the article that describes 13 risks you should consider including in your CIP 13 plan. He carefully says this list is a “starting point” – the final list is for you and you alone to determine, although you need to consider a lot of possible risks as you decide which ones you’ll mitigate.

So if you don’t know which risks you’re going to mitigate in your CIP-013 plan, how can you say that a particular set of procurement language is good for you? Each provision in each of the two documents that Lew lists (and the EEI one I linked above) is based on a particular supply chain risk – in other words, procurement language is one way (but only one way) to mitigate particular procurement risks. If you choose one procurement language document as the basis for your contract language, you’re outsourcing the job of deciding what risks you’ll address to whoever prepared that document. First decide what risks you’ll address, then look for procurement language that will properly mitigate those risks.
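
To illustrate the order of operations I’m arguing for, here is a tiny hypothetical sketch. The provision labels and risk descriptions are mine, not drawn from the DHS, DoE or EEI documents; the point is simply that your risk list drives the selection of contract language, not the other way around:

```python
# Each procurement-language provision exists to mitigate one particular risk.
# All labels and risk descriptions below are invented for illustration.

provision_catalog = {
    "patch-integrity-clause":    "Tampered or spoofed patches",
    "access-termination-clause": "Terminated vendor staff retain access",
    "vuln-disclosure-clause":    "Vendor withholds known vulnerabilities",
    "background-check-clause":   "Malicious insider at the vendor",
}

# The risks this (hypothetical) entity decided to mitigate in its R1.1 process
my_risks = {
    "Tampered or spoofed patches",
    "Vendor withholds known vulnerabilities",
}

# Select only the provisions that map to a risk on my own list, rather than
# adopting someone else's entire catalog wholesale
selected = [name for name, risk in provision_catalog.items() if risk in my_risks]
print(selected)
```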

My next objection is really for the CIP-013 drafting team. It seems they really tripped over themselves in wording R1, and Lew’s summary of the two parts of R1 doesn’t clear that up. On the second page of the article, under “Part 1.2”, Lew says “Your supply chain cyber security risk management plan must also include a process for procurement of BES Cyber Systems. Note that Part 1.1 requires processes to be used in planning for procurement and transitions; Part 1.2 requires a process to be used in actually procuring systems. These will probably be different but related processes.”

I have heard this said before – that R1.1 is for planning, while R1.2 is for actual procurement. It’s understandable that people should believe this, because R1.1 starts out with “One or more process(es) used in planning for the procurement of BES Cyber Systems to identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services …” But that’s not the beginning of the sentence. The sentence begins at the end of the “preamble” section of R1, which says “The plan(s) shall include…” If you stick these two together (which of course is how they’re meant to be read), you get “The plan(s) shall include one or more process(es) used in planning for the procurement of BES Cyber Systems to identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services …”

In other words, you need to prepare a plan to include a process for planning. But that doesn’t make sense; you don’t plan a process for planning something, you plan something. This is why I proposed in one of my first posts analyzing CIP-013 in 2018 that R1.1 be read as “The plan(s) shall include one or more process(es) for the procurement of BES Cyber Systems to identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services …” followed by the rest of R1.1.

If you read R1.1 this way, you no longer get the idea that R1.1 is just about planning, while R1.2 is about actual procurement. The preamble to R1 makes clear that the whole requirement (R1.1 and R1.2) is about planning; the implementation of the plan doesn’t happen until R2. R1.1 requires your plan to include identification and assessment of risks. But then what does R1.2 do?

It’s a waste of time to try to ponder why the drafting team wrote R1.2. R1.2 includes six particular items that FERC, in their Order 829 in 2016 (which ordered the standard to be drafted), specifically said need to be included in the standard (they mentioned these in different places in the Order). The SDT included them because FERC said to do so; that’s the only reason why R1.2 is there. While, to be sure, these are all worthwhile things to do to mitigate supply chain risk, it’s not like they have some sort of special status over all other things you need to do to mitigate supply chain risk.

So how do I understand R1.2? I read it as a list of six classes of mitigations (actually seven, since R1.2.6 includes two parts). Each of these mitigates a particular supply chain security risk. For example, R1.2.4 reads “Disclosure by vendors of known vulnerabilities related to the products or services provided to the Responsible Entity”. This mitigates the risk that “The entity isn't told about a product vulnerability that is known to the vendor. An attacker takes advantage of that vulnerability to take control of a BCS and damage the BES.”

In other words, R1.2.1 through R1.2.6 describe seven risks that need to be included in your plan. But these are different from other risks. As I’ve already said (and as Lew has said repeatedly), there’s no way any organization can mitigate all the risks it faces; it has to choose the small number that are most important and mitigate those. But these seven risks have to be mitigated, because FERC said they have to be. Essentially, you need to push all of these to the top of your list for mitigation. You should include other risks in your plan as well (otherwise, R1.1 wouldn’t be there at all), but these seven aren’t optional.

So R1.1 and R1.2 aren’t doing different things – they’re both about identifying risks. The risks you identify in R1.1 need to be assessed for magnitude, then ranked by magnitude; the highest-magnitude risks should be mitigated. But the seven risks in R1.2 don’t get assessed, because they’ve already “made the cut” for mitigation. They automatically go to the head of the line. It’s something like an election where you’re free to choose any 12 candidates from a bunch of names, but you must choose a particular seven candidates because they’re part of the ruling party, which wants to maintain its power no matter what. As we say in Chicago, the fix is in.
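
If it helps, here is a little sketch of how I read the R1.1/R1.2 relationship. The risk descriptions, magnitudes and cutoff are all hypothetical; the structure is what matters, namely that R1.1 risks are assessed against a cutoff, while the R1.2 items bypass assessment entirely:

```python
# Hypothetical sketch: R1.1 risks must "make the cut" after assessment;
# the R1.2 risks are mandated by FERC and skip assessment entirely.

# Risks identified under R1.1, with assessed magnitudes (invented numbers)
r1_1_risks = [
    ("Compromised open source component in vendor software", 8),
    ("Poorly secured vendor development network",            7),
    ("Hardware tampered with in transit",                    3),
]
CUTOFF = 5  # mitigate only R1.1 risks at or above this magnitude (arbitrary)

# Two of the seven R1.2 risks, paraphrased; all seven belong on this list
r1_2_risks = [
    "Entity not notified of vendor security incidents (R1.2.1)",
    "Unverified software or patches installed on BCS (R1.2.5)",
]

# The plan's mitigation list: every R1.2 risk, plus the R1.1 risks that made the cut
mitigation_list = r1_2_risks + [
    desc for desc, magnitude in r1_1_risks if magnitude >= CUTOFF
]
print("\n".join(mitigation_list))
```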

There’s one other big point that I discovered in 2018 when I started digging into CIP-013: R1.1 left out an important word – “mitigate”. Yes, folks, you’re told in R1 to develop a plan to identify and assess risks – and in R2 you’re told to implement that plan. But what do you do after you’ve identified and assessed the risks? Nothing. It’s like you’re supposed to say “Yeah, we found some big, hairy risks out there – and they’ll probably cause some real problems for us in the future. But now we’re all done with CIP-013 and we can go about our business as we did before.” Because the drafting team forgot to say you should mitigate those risks.

But don’t try this at home, kids. If you develop an R1 plan that only requires you to identify and assess risks, and you implement that plan in R2, don’t smugly wave your list of risks in your auditor’s face and point out that the word “mitigate” isn’t in the requirement (in fact, it’s not in the whole standard). CIP-013 makes no sense unless the risks you identify are mitigated – and everything ever written about it assumes the standard is about risk mitigation. You need to read “identify and assess” in R1.1 as “identify, assess and mitigate”.
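If it helps to see what “identify, assess and mitigate” looks like as a process, here’s how the hypothetical sketch above might be extended so that the plan actually ends in mitigation rather than trailing off after assessment. Again, the mitigations here are invented examples of mine, not anything the standard prescribes.

# Continuing the hypothetical sketch above: a plan entry isn't finished until
# the risk is paired with a mitigation. (Implementing these mitigations is
# what R2 is about.)
mitigations = {
    "Vendor doesn't disclose known product vulnerabilities (R1.2.4)":
        "RFP language and contract terms requiring vulnerability disclosure",
    "Compromised patch server delivers malicious firmware":
        "Verify hashes/signatures on all patches before installation",
    "Stolen vendor remote access credentials":
        "Require MFA and time-limited accounts for vendor remote access",
}

for risk in plan:  # 'plan' comes from the sketch above
    action = mitigations.get(risk.description, "TODO: define a mitigation")
    print(f"{risk.description}\n  -> {action}")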

As has been the case whenever I’ve written about one of Lew’s articles (but especially the CIP-013 ones), I can see now that it will take multiple posts to say everything I want to say. I’ll break off now, and hope to come back with part III in a week or two, although there are a few other things I’d like to write about in the next few posts. I’m sorry to leave you in a state of suspense. You’ll have to survive until part III comes out somehow…


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Thursday, May 2, 2019

Some clarity, but also a deeper mystery



Blake Sobczak of E&E News came out with another very good article on the “cyber event” that was reported on form OE-417 on March 5 by some entity in the West – and if you haven’t read my previous two posts, you should do so before going any further in this one. I think all of my readers can rejoice that both of those posts, as well as this one, share one important feature: they’re all short!

Today’s article clears some of the fog around this event, but it also deepens the mystery. It makes very clear that this event, which involved a still-unspecified grid disruption, was a cyber attack – specifically a Denial of Service attack, although I realize that term can apply to a host of different attack types. Moreover, the attack took advantage of a vulnerability for which the vendor had already distributed a patch (whoever that vendor is, and whatever the product is that was attacked). The source of this statement is a “DoE official”. Since OE-417 is DoE’s form, that seems like a pretty authoritative source to me.

The mystery is who the entity is that was attacked – and reported it. It had to be an entity with connections in two counties in California (Los Angeles and Kern Counties; Kern County includes Bakersfield), Converse County in Wyoming, and Salt Lake County in Utah. Two days ago, it seemed to me – and others I talked to about this – that only one entity, Peak Reliability, would have that sort of footprint[i]. Peak (formerly part of WECC) is the Reliability Coordinator for 14 Western states, including all three states that were attacked.

However, in today’s article, Blake quotes Peak as saying they weren’t attacked, which leaves…no possibilities that I know of. Sigh.

But we do now know that the North American grid has had its first reported disruption due to a cyber attack, although no load was lost. So we’re still a little ahead of Ukraine, since there was not only disruption but also load loss there. As my old boss used to say, thank God for small favors.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

[i] Actually, I can think of another entity, the Western Area Power Administration, which distributes power from Federally-owned dams across the entire West. But two days ago, WAPA denied they were attacked. Blake (or someone he talked to on Tuesday) also identified Berkshire Hathaway Energy as a possible target of this attack. BHE has utilities that cover both Utah and Wyoming, but not California. BHE does have a renewables unit in California, but in Blake’s article today, the DoE official says no generation was affected (Bill Lawrence, Director of the E-ISAC, said the same thing in a quote in my post yesterday), and generation is what that unit does. In any case, BHE also denies they were attacked.

Wednesday, May 1, 2019

More on the attack



I received two interesting comments today, from quite knowledgeable people, about my post on what looks like a cyber attack that disrupted grid operations – without causing an outage – in four counties in the West in March.

The first comment was from Bill Lawrence, Director of the NERC E-ISAC and VP/CSO of NERC. In an obviously very carefully worded statement, he says “The E-ISAC is aware of this event which did not show any impact to generation and did not cause electrical system separation. If more details emerge, they will be shared with our members.”

Of course, as with any official statement like this (and Bill said something similar to me yesterday before I wrote the post, but since he couldn’t make it official at the time, I couldn’t include the statement yesterday), what can be even more important than what is said is what isn’t said. Note:

  1. By saying there was no generation impact, he seems to be leaving on the table the idea that there was transmission impact. And frankly, I’m not very worried about purely generation attacks – transmission is pretty much the whole game if you’re trying to cause serious damage to the grid. My translation: This could have been a serious attack, but we lucked out this time.
  2. And saying the attack didn’t cause electrical system separation, which would be quite serious, is like telling somebody that their mother had a car accident and went to the hospital, but not to worry because she has almost all of her major organs still intact. It doesn’t give you a particularly warm and fuzzy feeling, to say the least.
  3. What Bill definitely didn’t say is something like “We investigated, and this was purely a case of an operator pushing the wrong button, nothing more. We all went out and had a drink and then caught our flight home.” So I think it’s very likely this was a cyber attack.

The other email was from a longtime industry observer, who said:

“If it was Peak Reliability, my best speculation would be a disruption of ICCP communications between the RC (Peak) and several of its BAs and/or TOPs.  Peak does not directly control BES assets, but loss of ICCP would impact its ability to perform situational awareness functions.  This would be different than a loss of Peak’s ICCP or SCADA/EMS that would have impacted its entire reliability area (and a complete loss of monitoring categorization in the OE-417).

“That said, I would have expected one or more entities who have (and would have lost) ICCP associations with Peak to also have reported, since they get real-time data from their neighbors over ICCP, but who knows...”

ICCP stands for Inter-Control Center Communications Protocol – although I just realized that has too many C words. It’s the international standard (IEC 60870-6, also known as TASE.2) that control centers use to communicate with each other. The observer is saying:

  1. If Peak Reliability was the original target of the attack, the disruption would have to be in ICCP, since Peak doesn’t actually control any BES assets (as an ISO would). Therefore, this was inherently a less serious occurrence than if – say – an ISO or even a major Balancing Authority had lost SCADA for their whole control area, which would have triggered an OE-417 category of “Complete loss of monitoring or control capability at its staffed Bulk Electric System control center for 30 continuous minutes or more." As I mentioned yesterday (and it was this same industry observer who pointed that out to me), this category of event gets reported very often.
  2. However, if what happened is that Peak lost ICCP connections with three entities that it monitors, the observer is surprised that those entities wouldn’t themselves have filed an OE-417 report. (The sketch after this list illustrates how a lost ICCP feed typically shows up: as telemetry that goes stale.)
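For readers who don’t live in control centers, here is a toy sketch in Python of the staleness check I just mentioned. Everything in it is hypothetical – the entity names, the timeout, the data structure – and a real SCADA/EMS with a real ICCP stack is vastly more complicated. The point is just that “loss of ICCP” manifests as data that stops updating, which is why it’s a situational awareness problem rather than a loss-of-control problem.

import time

STALE_AFTER_SECONDS = 60  # hypothetical staleness threshold

# Last time a telemetry update was received from each remote control center.
# In real life this would be maintained by the ICCP stack; here it's hardcoded.
last_update = {
    "BA_Alpha": time.time(),        # healthy feed
    "TOP_Beta": time.time() - 300,  # nothing received for five minutes
}

def stale_feeds(now):
    """Return the remote partners whose ICCP data has gone stale."""
    return [peer for peer, t in last_update.items()
            if now - t > STALE_AFTER_SECONDS]

# An RC operator would see an alarm like this for each dead association.
for peer in stale_feeds(time.time()):
    print(f"ALARM: situational awareness lost for {peer} (ICCP data stale)")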
So is this a case of “Move along. Nothing to see here”? Not at all. If this is really the first cyber attack that disrupted grid operations in North America – even if it didn’t cause any loss of load or loss of control of assets – that’s a very big deal in itself.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.