Sunday, December 9, 2018

RSA 2019



I recently received the good news that I’ll be participating in the RSA Conference again next year. As I was this year, I’ll be one of three panelists on a panel – our topic will be “Supply Chain Security for Critical Energy Infrastructure”. This year the conference is from March 4-8; as always, it’s at the Moscone Center in San Francisco. Our panel is on Wednesday March 6 from 8:00-8:45 in Moscone South 204 (it doesn’t appear on the conference website yet, but will soon).

This year’s session was well received, with a lot of good audience interaction – and that’s good, because the three panelists are all returning next year, although the moderator is different. The topic this year was “How can we regulate critical energy infrastructure”. However, based on audience questions, the session turned into a very good discussion about grid security in general - and there’s nothing wrong with that!

Next year, the panelists, besides me, will be Marc Sachs, former NERC CSO and head of the E-ISAC, and Dr. Art Conklin of the University of Houston, noted author and speaker on ICS security for the energy industry. The moderator will be Sharla Artz, VP of Government Affairs, Policy and Cybersecurity for the Utilities Technology Council. Here is our description of the session:

The purpose of this panel is to have an interactive dialogue between panelists and audience members on some important questions regarding supply chain cyber security for critical energy infrastructure (CEI). We will pose a series of questions, and as each question is asked, both panelists and audience members will be able to respond. While it is unlikely that a definitive answer will be reached on any of these questions, it is important to hear as many different answers as possible!

The panelists will bring a diverse set of perspectives to this discussion, based on their backgrounds in electric power, natural gas, water, petroleum refining and transport, and chemicals. It is hoped that audience members will bring many other perspectives to the discussion, especially if they are from other industries – finance, insurance, retailing, etc. – in which supply chain security is as important as it is in critical energy infrastructure.

The session will open with examples from the panelists of supply chain risks to energy systems. After that, possible questions to discuss include:

  • What are currently the primary vectors for supply chain cyber attacks?
  • How can we put in place a program to manage supply chain cyber risk?
  • How can CEI organizations gain assurance that vendors have good cyber security practices in place? Do most other organizations require assessment or certification by an outside party, or are there alternative means to gain this assurance?
  • What usable controls frameworks are available to help my organization understand supply chain cyber security risks?
  • What is the role of contract language? Is it a) always, b) sometimes or c) never advisable to insist that the vendor agree to certain contract terms?
  • We will have to comply with NERC CIP-013, which requires that we develop a supply chain cyber security risk management plan. How does the plan we need to develop for CIP-013 compliance differ from the plan that we would develop if we were addressing supply chain cyber risk in the absence of regulation?

I hope to see you there!


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.

Friday, December 7, 2018

More on implicit requirements: Lew’s article



In my last post, I went back to a topic I’d only touched on briefly in a post a few years ago, but which is certainly an important concept to understand if you are working on NERC CIP compliance nowadays: implicit requirements. I mentioned that I first heard the term from Lew Folkerth of RF in a presentation in October 2015.

But I neglected to also mention that Lew wrote an article for the RF newsletter on this subject in December 2015[i]. I just went back and reread it. To my great surprise – and I just now took a quick look out the window to make sure there weren’t any dark clouds that might strike me down with a bolt of lightning – I actually find myself disagreeing with Lew.

The article is written in a Q&A style, although I suspect the Q’s and the A’s were both written by Mr. F (nothing wrong with that, of course). In his first answer, Lew says that implicit requirements arise because CIP v5 was written as “results-based Standards”. He goes on to explain: “In a results-based Standard, the desired end result is specified, with the method of achieving the result left unspecified. This provides great flexibility in how the result is achieved, but one effect is that some actions that are actually required are not explicitly stated in the Standard.” In other words, he’s saying that implicit requirements are a virtuous byproduct of the drafting team’s high-minded purpose of making all of CIP objectives-based.

First off, the CIP v5 standards aren’t results-based or anything-else-based. Some of the requirements are results-based (CIP-007 R3 and CIP-011 R1, to name two), but these aren’t the ones with a whole set of implicit requirements built into them. Implicit requirements are only a problem when the underlying requirement is prescriptive. In my last post, I provided several examples of prescriptive CIP requirements that contained lots of implicit requirements. The basic pattern is simple: the requirement uses a number of words, each of which gives rise to one or more implicit requirements.

To show what I mean, let’s turn to CIP-007 R2, my poster child for a prescriptive requirement. R2.2 says “At least once every 35 calendar days, evaluate security patches for applicability that have been released since the last evaluation from the source or sources identified in Part 2.1.” Let’s say you’ve never seen this requirement before, and you’re trying to understand what you will need to do to comply with it. You’ll ask a set of questions, and the answers in most cases will lead to an implicit requirement:

Q1: How do I know which security patches have been released since the last evaluation?
A1: You approach the patch sources you identified in Part 2.1. This leads to the first implicit requirement:
IR1: Identify patch sources to approach regarding new security patches.

Of course, each implicit requirement usually leads to a new question.
Q2: I have a lot of patch sources. Do I need to approach them all?
A2: No, you just have to approach the patch sources for the software and firmware packages that you currently have installed on systems within your ESP.

Q3: How do I know what those software and firmware packages are?
A3: You take an inventory of all software and firmware currently installed on systems in your ESP.
This answer leads to another implicit requirement:
IR2: Every 35 days, take an inventory of all software and firmware currently installed on systems in your ESP.

Q4: Do I need to inventory anything else?
A4: Yes, you need to list the version numbers. Of course, this leads to:
IR3: Include version numbers in your inventory.

Q5: Anything else?
A5: Yes, now that you mention it. If the software includes various third-party applets, you need to inventory those as well. This leads to:
IR4: Include all applets for any software package in your inventory.

Q6: OK, now that I know what I need, what’s my next step?
A6: You’re ready to go. Now you need to find out from each patch source if they have a new security patch this month. This leads to:
IR5: Ascertain from each patch source identified in Part 2.1 whether a new security patch has been released.

Q7: What do I do now?
A7: You evaluate each patch for applicability.

Q8: And what’s applicability?
A8: A patch is applicable if it can be applied.

Q9: Well, that’s really helpful! So what patches can be applied?
A9: There are a lot of factors that determine applicability. For one, a patch is applicable if it is for a software or firmware package that is installed on at least one system in your ESP, taking into account the version number and any possible applets. This leads to:
IR6: Discard as inapplicable any patch that doesn’t apply to a software or firmware package that is installed on at least one system in your ESP, taking into account the version number and possible applets.

Q10: Does anything else determine applicability?
A10: Well, yes. What if you’ve already installed the patch? And what if you’ve installed a different patch that also closed the vulnerability that the current patch addresses? In both cases you shouldn’t install the new patch……

Q11: I’m getting a headache…
A11: That isn’t a question. If you’re looking for headache remedies, I recommend [name of product removed].

OK, that’s enough. We’ve listed six implicit requirements so far, and A10 will lead to at least two more. Most important, we’re just getting started on “applicability”, since there are all sorts of other considerations that would lead one to declare a patch to be applicable or not. So it turns out that just this one part of CIP-007 R2 (and the requirement has four parts) has well over eight implicit requirements. It’s not out of the question that each of the 40-odd CIP requirements, leaving out CIP-002 R1 for the moment, has between 5 and 50 implicit requirements, leading to somewhere between 200 and 2000 implicit requirements in CIP.
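For readers who think in code, the question-and-answer chain above amounts to a small workflow. Here is a minimal sketch in Python; to be clear, everything in it is my own invention for illustration (the data structures, function names, and the greatly simplified applicability test), not anything mandated by or derived from CIP-007 R2 itself. Each function corresponds to one or more of the implicit requirements (IR1–IR6) identified above.

```python
# Illustrative sketch only -- not a compliance tool. Models the implicit
# requirements buried in CIP-007 R2.2. All names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Installed:
    package: str            # a software or firmware package in the ESP
    version: str            # IR3: version numbers belong in the inventory
    applets: tuple = ()     # IR4: bundled third-party applets do too

@dataclass(frozen=True)
class Patch:
    source: str
    package: str
    version: str

def inventory_esp(systems):
    """IR2: inventory all software/firmware installed on ESP systems
    (`systems` is a list of per-system lists of Installed items)."""
    return {item for system in systems for item in system}

def new_patches(sources):
    """IR1 + IR5: ask each patch source identified under Part 2.1 whether
    it has released new security patches (each source is a callable here)."""
    return [p for source in sources for p in source()]

def applicable(patch, inventory):
    """IR6: keep a patch only if it matches an installed package or one of
    its applets, at the installed version. This is a deliberate
    simplification -- real applicability involves many more factors,
    as A10 above hints."""
    return any(
        patch.package in (item.package, *item.applets)
        and patch.version == item.version
        for item in inventory
    )
```

The point of the sketch is not the code itself but what it makes visible: the one sentence of R2.2 only runs if you have already built the inventory, tracked versions and applets, and defined applicability, none of which the requirement states.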

And I deliberately left out CIP-002 R1, since – as I mentioned in my last post – just the “bright-line” criteria in Attachment 1 could easily lead to an almost infinite number of implicit requirements. But I won’t try to justify this statement, since that would be like delivering an ocean instead of a lake, when all you had asked for in the first place was a cup of water.

So what does it mean – this discovery (and it’s almost as much a discovery for me as for you) that there are a huge number of implicit requirements in the CIP standards? For one thing, it means that the goal of coming up with a definitive list of implicit requirements (which, as I mentioned in the last post, Lew had initially talked about doing) is certainly well-intentioned, but not something that can be accomplished in a human lifetime. Face it: there is always going to be a lot of uncertainty about how you comply with the prescriptive CIP requirements.

Uncertainty isn’t good when you’re talking about mandatory requirements with big potential fines. What’s the solution? In the near term, the solution is the same one as for all of the other problems with CIP: It will be handled one-on-one between the auditors and the Responsible Entities, as all the other problems have been handled. I wrote over 50 posts on the problems with CIP-002 R1 after CIP v5 was approved. None of these problems was ever resolved, yet the CIP compliance process today works fairly smoothly. This is because all parties – NERC, the Regions (especially the auditors) and the entities – have a big stake in having things run smoothly. They have always figured out how to do it in the past, and they’ll continue to do that. I call this the “Don’t Ask, Don’t Tell” NERC CIP compliance approach. But the original DADT worked fairly well also, until the climate had shifted enough that it wasn’t needed anymore. Maybe that will happen with CIP as well.

What about the longer term? Fortunately, there the outlook is brighter. This is because it seems there’s widespread recognition that the prescriptive requirements in CIP are a dead end. Specifically, while the current CIP Modifications drafting team doesn’t have fixing the prescriptive requirements as an item in their SAR, they do have virtualization. And they’ve realized that a comprehensive solution to the problem of integrating virtualization into CIP requires fixing the most prescriptive requirements.

I wrote about the SDT’s initial work on this problem in this post (and two more in the series) in June. They’ve recently come up with what looks like a great set of follow-on documents, including suggested revisions to the standards and new definitions. I have to spend some quality time with these, which I hope to do soon, but my initial impression is that they seem to have some really good ideas. But I do think the SDT is greatly underestimating how difficult it will be to get all of this passed by the NERC ballot body (and they may find it hard to get approved by FERC as well). However, once these changes are in place, they will go a long way to fixing the problem of overly prescriptive CIP requirements.

But it still isn’t all puppies and roses on this front. While prescriptive requirements are a big problem, there’s an even bigger problem: prescriptive auditing. This is a much harder problem to solve, and will be hanging over the effort to get the new virtualization requirements approved. While the problems with prescriptive requirements are also hard to solve, the means to solve them are clear: revising the CIP requirements and definitions. The same can’t be said for prescriptive auditing. In fact, at the moment I don’t know how that will be solved, although ultimately I think it will be. More on this topic soon.



[i] To find the article, go here and look for the Newsletters line near the bottom. Click on the + and then click on 2015. Click on the Nov-Dec newsletter. Go to the index on the left of the first page and click on The Lighthouse.

Thursday, November 29, 2018

Implicit Requirements: the Remix



In October of 2015, I wrote a post about an RF CIP workshop I had just attended. Of course, Lew Folkerth of RF spoke at the workshop, and I summarized his presentation this way (I’ve edited it a little, not because Lew used language unfit for a family blog like this one, but because I said a few things that don’t make complete sense. Since I’m allegedly three years wiser at this point, I can now see the problems in what I wrote):

“Lew pointed out that there are a number of “implicit requirements” in CIP v5. These are things that the entity needs to do to be in compliance, which are not specifically identified in the actual Requirements. Lew gave the example of the implicit requirement to “(identify) any associated Protected Cyber Assets” (this requirement never appears at all in CIP v5, but the entity obviously needs to identify PCAs, since many of the requirements apply to PCAs). RF isn’t just looking for a list of the PCAs, but wants to know how the entity verified that every Cyber Asset in the ESP had been identified as either a component of a BCS or a PCA.

“Another example is identification of Electronic Access Control or Monitoring Systems (EACMS) and Physical Access Control Systems (PACS). The entity is never explicitly required to identify these, but they obviously have to do so to be in compliance - and they need to be able to show that they haven't missed any EACMS or PACS systems when they did their identifications.

“Of course, you can’t find out about these implicit requirements by reading the RSAWs, since they only deal with the explicitly-stated requirements (2018 note: Tom was being fairly careless here, as he too often is. He should have said something like ‘The RSAWs are constrained not to require anything other than what is in the strict language of the requirement’). A questioner asked Lew if RF would publish a list of the implicit requirements. Lew said he’d look into doing that. I certainly hope he does – it is greatly needed (another 2018 note: Lew would have liked to publish that list, but ended up not being allowed to because – of course! – the ERO, meaning NERC and the Regions, isn’t allowed to provide anything that could be considered an “interpretation” of a NERC requirement. This is a very sorry situation, and has led to repeated false starts by NERC in trying to provide guidance on CIP. The CIP people at NERC would just love to be able to provide CIP guidance, since the need for it has been great all along and continues unabated. But it simply isn’t allowed by the NERC Rules of Procedure, and literally every attempt by NERC staff members to provide guidance on CIP has ended up being withdrawn).

Lew’s presentation was the first time I had really thought about the idea of “implicit requirements”, which is an excellent way to describe a lot of the ambiguities in the CIP v5 requirements (another might be “unstated assumptions”, but I like Lew’s term better because it emphasizes the need for the entity to document how they complied with these implicit requirements – since there is no way that an entity can properly comply with a CIP requirement itself without first complying with any implicit requirements it contains). In requirements developed since v5, this problem hasn’t repeated itself (at least not much). But since most of the v5 requirements are still in effect, we still have to deal with the problem.

I had kind of internalized the idea of implicit requirements and considered it part of the overall CIP landscape, as much as the non-implicit requirements – so I’ll admit I haven’t really thought about it much, and I never wrote a full post on the topic. But, as I mentioned in a post two weeks ago, I’ve recently been working with several entities that are fairly new to CIP compliance, and I realized I needed to initiate them into the dark secrets of implicit requirements – so that they would have a prayer of surviving in the cold world of NERC CIP audits. Here is how I put it to them (OK, I never actually said these exact words, but I consider myself as having said them. And that’s just as good as saying them, right?):

I have the greatest respect for the CIP v5 drafting team – after all, they started working in the fall of 2008 and didn’t finish their work until January of 2013, when they submitted CIP v5 to FERC (along the way, they’d also developed – and gotten approved – CIP versions 2, 3 and 4, along with a “version” in early 2010 that was too visionary for its time, but ended up mostly being incorporated into v5. That version was known as CIP-010 and CIP-011, but those standards had nothing to do with the current CIP-010 and -011).

However, in retrospect, the team did take some big shortcuts in drafting CIP v5, mainly because they were under huge pressure to get something approved by the NERC ballot body and out the door to FERC in 2012. This pressure was due to FERC’s having put the squeeze on them by rather impulsively approving CIP version 4 in April of 2012 – as discussed in this post (I regard approving CIP v4 as the worst mistake FERC has made regarding the NERC CIP standards. I discussed this in three posts in 2013, starting with this one). The drafting team most likely felt – and perhaps rightly so – that if they didn’t get v5 approved and on FERC’s desk in 2012, FERC was going to let CIP v4 come into effect on its 4/1/14 implementation date. And this would be followed two or three years later by the implementation of CIP v5, meaning NERC entities would have to go through two huge CIP transitions in 2-3 years. So they probably had no good option other than to take some shortcuts.

One of the biggest shortcuts the team took – again, looking back from 2018 - is that they didn’t bother to write explicit requirements for all of the steps that were needed to comply with some of the requirements that they drafted (I certainly didn’t think of this at the time the SDT was developing v5, even though I attended some of the drafting team meetings and participated in a number of the conversations). The effect of this is that a single requirement can contain five or more implicit requirements. You need to comply with all of the implicit requirements in order to comply with the requirement itself – but you need to figure them out entirely on your own, since none of the implicit requirements are actually written down anywhere.

Probably the most egregious example of implicit requirements can be found by examining CIP-002-5.1a R1. The “operational” parts of the requirement are R1.1-R1.3, which all mandate that the entity identify something called BES Cyber Systems. What’s a BES Cyber System? You go to the NERC definition, which just says it’s a bunch of BES Cyber Assets grouped together for “reliability purposes”. And what’s a BES Cyber Asset? That’s a very long definition, but it starts off with the term Cyber Asset. And what’s a Cyber Asset? Well, it’s a “Programmable electronic device...”, according to the NERC definition.

And what’s that? The words “electronic device” certainly have an accepted meaning, but the word “programmable” doesn’t. This leads to a problem that I discussed in a number of posts in 2014 and 2015, starting with this one: that there are a few words that are vital to understanding the CIP requirements, which have no NERC definition and no generally accepted meaning. For each of these words, the only solution – which it took me a long time to understand, although once again Lew Folkerth was ahead of me – is to do your best and come up with a reasonable definition of your own, then follow that consistently as you comply with the CIP standards.

So our attempt to comply with R1 and “identify” BES Cyber Systems has actually led us to three implicit requirements:

R0.1. Develop a definition of “programmable”.
R0.2. Identify Cyber Assets.
R0.4. Identify BES Cyber Assets, among the Cyber Assets identified in R0.2 (you’ll see why I numbered this as I did in a moment).

So are we finished with the implicit requirements in R1? No. Let’s look at the first part of the definition of BES Cyber Asset: “A Cyber Asset that if rendered unavailable, degraded, or misused would, within 15 minutes of its required operation, misoperation, or non-operation, adversely impact one or more Facilities, systems, or equipment, which, if destroyed, degraded, or otherwise rendered unavailable when needed, would affect the reliable operation of the Bulk Electric System.” Do you understand exactly what that means, so you’ll have no problem identifying a BES Cyber Asset if it walks in your door?

You might, but my guess is you’ll end up doing what a lot of people in the industry did: getting hung up on the phrase “…impact one or more Facilities, systems, or equipment…”. Doesn’t that strike you as a little broad? After all, if I go out to a substation and hit a transformer with a hammer, I will have impacted a BES Facility (the transformer meets the NERC definition of Facility). Does that make my hammer a BES Cyber Asset? No it doesn’t, because a BCA has to be a Cyber Asset, meaning it’s programmable. OK, let’s say I take out my cell phone and lightly tap the transformer, leaving a very small dent in it. The phone is definitely a programmable electronic device, and the definition doesn’t say “big impact”, just “impact”. This makes my cell phone a BCA, right? After all, it impacted (dented) a BES Facility within 15 minutes. Of course, the drafting team was thinking of “impact” in the sense of “electrical impact” – but they didn’t include that in the definition. Hence the ambiguity.

So one big question as people were trying to figure out CIP v5 was the question of the meaning of “impact the BES”. They needed this before they could identify BCAs, and they needed to identify BCAs before they could group them into BCS. How did they answer this question? Well, I offered my own helpful opinion on that, but it didn’t exactly gain universal acceptance and shouts of acclamation (I still like it, and in fact I think it is the “definition” that most NERC entities have implicitly used in deciding what’s a BCA).

The fact is that, just as in the question on the meaning of programmable, there is still no answer from NERC on what “impact the BES” means. Both of these questions are officially on the plate of the current CIP Modifications drafting team, but that group isn’t going to directly answer either one of them – I’m 100% sure of that. And I support them 100% in this particular non-action, since there is simply no way to develop a real Webster’s-style definition of either term.

On the other hand, the drafting team will neatly deal with both issues as part of their proposed changes to deal with virtualization. First, they’re going to bypass altogether the need to keep using “programmable” by relegating the term Cyber Asset to a very unimportant role (essentially, the term will only be used as part of the definition of Transient Cyber Asset. My, how the great have fallen!). Second, they are retiring the term BES Cyber Asset altogether, and the new definition of BES Cyber System incorporates language that avoids the problem with the phrase “impact the BES” – in fact, in some ways it follows the “definition” I came up with in 2015, although I’m not in any way claiming credit for this. Pretty cool, huh?[i]

All of this means there’s another implicit requirement buried in CIP-002 R1:

R0.3. Develop a definition of “impact the BES”.
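Taken together, the four implicit steps (R0.1 through R0.4), plus the grouping that R1 itself mandates, form a small decision pipeline. Here is a hedged sketch in Python. The two predicate arguments stand in for the definitions of “programmable” (R0.1) and “impact the BES” (R0.3) that each entity has to develop for itself, since NERC defines neither term; everything else (the field names, the grouping function) is hypothetical and purely for illustration.

```python
# Illustrative sketch only: models the chain of implicit requirements in
# CIP-002 R1. The predicates `is_programmable` and `impacts_bes_in_15_min`
# represent the entity's own documented definitions (R0.1 and R0.3).

def is_cyber_asset(device, is_programmable):
    """R0.2: a Cyber Asset is a programmable electronic device."""
    return device["electronic"] and is_programmable(device)

def is_bes_cyber_asset(device, is_programmable, impacts_bes_in_15_min):
    """R0.4: a Cyber Asset whose loss, degradation or misuse would
    adversely impact the BES within 15 minutes."""
    return (is_cyber_asset(device, is_programmable)
            and impacts_bes_in_15_min(device))

def identify_bcs(devices, is_programmable, impacts_bes_in_15_min, group_key):
    """R1.1-R1.3: group the BES Cyber Assets into BES Cyber Systems
    'for reliability purposes' (the grouping rule is also the entity's
    own choice, represented here by `group_key`)."""
    systems = {}
    for device in devices:
        if is_bes_cyber_asset(device, is_programmable, impacts_bes_in_15_min):
            systems.setdefault(group_key(device), []).append(device)
    return systems
```

Notice that the sketch cannot even be written without the two predicates being supplied from outside: that is exactly the point about implicit requirements, since the standard gives you neither one.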

So we have, without breaking a sweat, come up with four implicit requirements buried in CIP-002 R1 – and we’re still not done! When we get to CIP-002 Attachment 1 (which of course is part of R1), there are a whole host of implicit requirements (I honestly guess there could be as many as 50 or 100, especially when you start looking at some of the bright-line criteria, which aren’t bright at all but are loaded with unstated assumptions – i.e. implicit requirements. Maybe I’ll take a month off sometime and try to enumerate them all, just out of sheer perversity).

CIP-002 R1 is probably the CIP requirement with the most implicit requirements buried in it, but CIP-005 R1 can certainly give it a run for its money, with CIP-010 R1 easily taking “show”. I will leave those two as an exercise for the reader.

Let’s get back to reality (I enjoy doing that every now and then – breaks my routine). What does the fact that there are so many implicit requirements buried in the “actual” CIP requirements mean for a NERC entity? The main impact (so to speak) is that your RSAW compliance narrative for a particular requirement should go through all of the logical steps that are actually needed to comply with the requirement – if you do that, you’ll “discover” all of the implicit requirements, even though you probably won’t recognize them as such. For example, your CIP-002 R1 narrative could start with a list of the steps we went through above, although probably in reverse order.

And what will happen to you if you don’t do this? Will you get a violation? You definitely can’t get a violation, since implicit requirements aren’t stated in the real requirement. But your auditor isn’t just looking to nail you for violations, but to see if you really understood what you were doing when you complied with a requirement. So if you just say that you “identified” your BES Cyber Systems in your CIP-002 R1 narrative, your auditor might ask “And how did you do that?” If you answer, “I dunno, I thought this computer had a 15-minute BES impact – and besides, I like its metallic gray color”, you won’t get a Potential non-Compliance finding, but your auditor may well give you an Area of Concern, then tell you to think about how you should identify your BCS and re-do the RSAW narrative to include all the steps you need to take (implied or not). Then you’ll need to go back to make sure you actually identified all of your BCS properly.

And if you discover you actually missed one or two BCS? You would now have to bring them into compliance, of course. But I don’t see any way you can get a violation for that, since – of course – implicit requirements aren’t stated in a requirement!



[i] The SDT has released for comment a new set of revised standards and definitions that addresses virtualization, and I will discuss these questions more when I discuss that new release.  You can get a preview of what I’ll say from this post.

Tuesday, November 20, 2018

A Great Supply Chain Security Forum



In this post from early October, I mentioned (toward the end of the post) a conference I’d just attended outside of Washington DC, called the Software and Supply Chain Assurance Forum (SSCA for short). It’s a free quarterly two-day conference sponsored by NIST, the Department of Defense, the Department of Homeland Security and the General Services Administration (GSA). It is organized by MITRE Corporation and is always held at MITRE’s offices in McLean, Virginia. I intended to write a post soon thereafter providing all the details you need to get on their mailing list, both to hear about future conferences and for general information about supply chain security.

Well, “soon” has proved to be a month and a half, as I got diverted by other things, especially FERC’s approval of CIP-013. But yesterday I received an email announcing their next forum December 18-19, and I decided I really need to write this post now, since some of you may want to attend it (which I can’t do this time, although I will try to attend as often as possible in the future. BTW, there’s no webcast or recording – you have to attend in person).

I asked Bob Martin of MITRE, one of the organizers, to provide a short description of the forum and its history. To be honest, I thought this was probably some effort that had just started a few years ago, when I first heard a lot of talk about supply chain security. As you’ll see, I was quite wrong on that. Face it: the Feds were doing supply chain security long before a lot of us even heard of it. Here’s what Bob said:

“The Software and Supply Chain Assurance Forum started in 2004 as the Software Assurance Forum and Working Groups, a free, DoD-focused public engagement effort to bring together the community on the myriad issues surrounding Software Assurance. The following year it became a joint effort of DHS and DoD, with a continued focus on engagement with industry, academia, and government, and was re-established as part of the Cross-Sector Cyber Security Working Group (CSCSWG) under auspices of the Critical Infrastructure Partnership Advisory Council (CIPAC).

During the first decade, the Forum had an emphasis on working groups to accumulate and promulgate best practices on the Software Assurance aspects of Technology, Processes, Education, and Acquisition. In 2008, the co-sponsoring partnership expanded to include NIST. Then in 2014, GSA joined as the fourth co-sponsor and the Forum refocused to more directly include Supply Chain issues, including renaming itself the Software and Supply Chain Assurance (SSCA) Forum.”

I do want to point out that, although the SSCA clearly started out with almost all Federal government employees and contractors as members, it now includes a significant industry contingent. And the discussions aren’t usually on specific government topics, but supply chain security in general. Here are the topics for the December meeting: the DHS Task Force on supply chain security, software security, software testing, software identification, software certification, international supply chain risk management efforts, updates to US Federal Government policies, and more.

You can sign up for the December meeting here. If you’re neither a US citizen nor a green card holder, you need to sign up by 12/4; US citizens and green card holders have until 12/11. And if you want to get on the mailing list, drop an email to Bob Martin at ramartin@mitre.org.

Here are some interesting things I learned from the September meeting, listed in the order of the presentations in which they were made. Note that, even when I use quotation marks, I can’t vouch that these are verbatim transcriptions of what was said.

NIST:
CSF v1.1 has a supply chain focus.
800-53 R5 has a map to the CSF, as well as to 800-161 (the supply chain security framework).
800-37 addresses Enterprise Risk Management.

The Open Group:
The O-TTPS (Open Trusted Technology Provider Standard) is a cyber security standard for suppliers. Certification against it is available.

DoE:
The CITRICS program is testing ICS components for security.

FERC:
Rhonda Dunfee of FERC pointed out that CIP-010 R1.6 just deals with software security patches, not firmware (even though CIP-013 R1.2.5 can be interpreted as applying to firmware as well as software patches).

Unknown:
“Many organizations fear the auditor more than the attacker.” (how true!)
“Software quality defects are really security defects, although the converse is not necessarily true. Quality and security defects are usually dealt with in separate efforts, although it would be better if that weren’t true.”
“If you talk to a CEO about risk, they’ll assure you they have that covered. What they usually mean is financial risk. If you talk to them specifically about operational risk (including cyber), they’ll give you a blank stare.”

Edna Conway of Cisco (a well-known supply chain security expert, and fun to listen to):
“Quality and security aren’t achieved through contract language….If you are dealing with your vendor in a courtroom, you’ve failed.” (how true that is! For my first post on this idea – there will be further posts, I can assure you – go here. There is a follow-up to that post here)

Office of the Director of National Intelligence (this was the best presentation, IMHO):
  - “The hard part in supply chain security is getting all the right people together, not the actual measures.”
  - A good tabletop exercise: what are the factors that would make you say no or yes to a new business relationship?
  - “When it comes to SCS, there is no ‘easy’ button: ‘Just tell me what I need to do.’”
  - The Kaspersky threat has been known for a long time. The ODNI talked with supply chain people and warned them of it. But at the time they didn’t have evidence of malfeasance, so many government organizations (and private ones) signed up with Kaspersky anyway.
  - You can’t trade off advantages in cost or schedule against security performance.
  - SCS can’t be a compliance checklist approach; there must be risk management. (how true this is! This is the point of the multiple posts – such as this one – that I’ve written on why CIP-013 R1.1 is the heart of the standard, while R1.2 is just a sideshow, although a mandatory one)
  - The four pillars of supply chain: cost, schedule, performance and security.
  - “Security should be like quality. It should be expected all of the time.”
  - “It is not at all inevitable that added security increases cost.”

Howard Gugel, NERC (who spoke on CIP-013):
(in response to a question of whether CIP-013 addresses security risks from custom-developed software) “Those risks are covered in other CIP requirements, notably CIP-010 R3, the Vulnerability Assessment requirement.” I disagree with this, since having an every-15-month VA isn’t the same as having regular patches from a vendor, as new vulnerabilities are identified. This is one of the many risks that should be addressed – for those entities to which it’s applicable – in R1.1. But I haven’t heard NERC say much at all about R1.1 – the focus in everything I’ve heard has been R1.2, and on using contract language to comply with R1.2. IMHO, both of these emphases miss the whole point of the standard.

NCCIC (DHS):
Someone from NCCIC discussed the Russian attacks (about which I wrote a whole series of posts this summer, starting with this one), and pointed out that most of the entities compromised were small. This is all well and good, but that certainly wasn’t the tenor of the presentations DHS gave, even though perhaps they never explicitly said otherwise. It sounded as if major utilities had been compromised, and it would take just one nod from Vladimir Putin to plunge the whole US into darkness. That was certainly the tenor of the major news articles, and I haven’t heard of DHS issuing any public (or private) apology for misleading the press on this matter.

This person also seemed to make yet another attempt to make sense of those contradictory briefings – as well as what was put out on the subject early this year – by saying that DHS just didn’t understand the difference between “vendors” and “utilities”. I believe we’re now on at least Version 5 of the ongoing saga “What DHS really meant in those briefings”. Unfortunately, this latest explanation is no more consistent with the actual statements than any of the previous ones. I hope to do a post in December summarizing what I understand of this sorry tale.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. We also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.

Thursday, November 15, 2018

Stop Making Sense: the Remix



I have been working this year with two organizations that are getting ready for CIP compliance at the Medium impact level, as well as with a Canadian organization – with Highs, Mediums and Lows – that only had to start complying with CIP in 2017. These discussions have brought to mind a number of issues that I addressed in posts and discussed with various entities as the whole industry was getting ready for CIP v5 compliance in 2014-2016.

A very small number of these issues were actually resolved by revising the standards (although I really can’t think of any issue that was completely resolved in this way. Since changing one of the CIP standards is easily a 3-5 year process from initial idea to having an enforceable standard, it’s not surprising that this happens very rarely). A number of these issues (especially the various problems with CIP-002 R1 and Attachment 1, about which I wrote at least 50 posts, starting with this one) are no longer burning issues, since the auditors and the NERC entities came to a rough consensus on how to resolve them (even though it’s not supported in the requirements as written).

However, there are some issues (like the cloud) on which there hasn’t been any resolution of any sort. The only reason why you don’t hear complaints every day about these issues (from me as well as from others involved with CIP compliance) is that it’s quite clear that nothing can be done about them in the near term, except to do your best to work around them. In fact, I’m sure a lot of CIP compliance people have probably forgotten that these even were issues in the first place. They’re simply part of the landscape, like a mountain. If you’re going from point A to point B and there’s a mountain in between, you’re not going to spend a lot of time complaining about the fact that the mountain is there – you’ll just allow for that and make a big detour when you come near it. But the cost of having to make that detour is very real.

In my discussions with these three entities, they of course have questions on what particular requirements or definitions mean, and naturally try their best to make sense of them. For example, they might say “Surely CIP-007 R2 doesn’t require that we have to check every 35 days, for every piece of software installed within the ESP, to find out if the vendor has released a new security patch! What if the vendor has never released a security patch in ten years? Or what if the software involved is fairly insignificant and has minimal impact on the BES? It only makes sense that we should be able to devote our resources and time to what poses the most risk to the BES.”

To which I will (sagely) reply “I totally agree it makes sense that you should be able to look at risk in how you comply with this requirement (and all of the other CIP requirements as well). But what makes sense has nothing to do with CIP compliance. The only thing that matters is the strict wording of the requirement. If something is allowed by the wording, that’s what you do. If something isn’t allowed, you don’t do it. And if the wording is ambiguous (which it is very often, of course), well…that’s for another discussion.”
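To make the bookkeeping burden concrete, here’s a minimal sketch of what tracking that 35-day window for every installed piece of software amounts to. This is purely illustrative – the software names, dates and the `overdue` helper are all hypothetical, not taken from any real compliance tool:

```python
from datetime import date, timedelta

# CIP-007 R2 requires evaluating each patch source at least every 35 days.
CHECK_WINDOW = timedelta(days=35)

# Hypothetical inventory: software item -> date its patch source was
# last checked for new security patches.
last_checked = {
    "relay-firmware-X": date(2018, 10, 1),
    "hmi-package-Y": date(2018, 11, 10),
}

def overdue(inventory, today):
    """Return the items whose patch-source check is past the 35-day window."""
    return sorted(name for name, checked in inventory.items()
                  if today - checked > CHECK_WINDOW)

print(overdue(last_checked, date(2018, 11, 20)))  # → ['relay-firmware-X']
```

Note that nothing in this sketch asks whether the item actually matters to the BES – which is exactly the point: the prescriptive requirement demands the same clock for every item, significant or not.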

I wrote a post about exactly this issue in 2016. I just reread it, and it’s just as relevant now as it was then (in fact, the question I discussed in the post was about the cloud. I agreed then that it would absolutely make sense if NERC entities were free to entrust their BCSI, and their BCS, too, to cloud providers that had demonstrably good cyber security practices – but of course this just wasn’t allowed by CIP. If I’m not mistaken, this is as much an open issue today as it was in 2016, in fact much more so).

However, I’m pleased to say that there’s been some slow progress on addressing this issue. I pointed out in the post that the reason NERC entities can’t apply reason very often in CIP compliance is that most of the CIP requirements are prescriptive. This is still the case, but it’s also a fact that every significant new CIP requirement developed since CIP v5 has been what I call plan-based, and what others call objective-based: that is, the requirement mandates that the entity develop a plan to achieve a particular objective. The means of achieving it are up to the entity, as long as what you propose in your plan is – are you ready for this? – reasonable. Isn’t that amazing? Not only can you use reason in developing your plan; reasonableness is the very criterion by which the plan will be judged!

These requirements take different forms, but fortunately they all come down to developing a plan to achieve an objective. The current plan-based requirements are (in order of their development):

CIP-007 R3, Anti-Malware: This was developed with CIP v5. It doesn’t specifically require a plan, but it does definitely state an objective: mitigating the risk posed by malware. You have to go through a planning process to achieve that, since you need to do this for each BES Cyber System, EACMS, PACS and PCA, even though the appropriate means of risk mitigation may be different for different devices.

CIP-011 R1, Information Protection: This was also developed with v5. It requires you to develop an information protection program (same as a plan, as far as I’m concerned) for protecting BES Cyber System Information in storage, transit, and use.

CIP-010 R4, Transient Cyber Assets and Removable Media: This was developed with v6. The entity must develop a plan to mitigate the risks caused by use of TCAs and RM within the ESP.

CIP-003 R2, Assets containing Low impact BCS: In the v7 version due for compliance on 1/1/20, the entity needs to develop a plan for protecting assets containing Low impact BCS in five different areas.

In addition, there’s CIP-013, which is due for compliance on 7/1/20.  It is the first CIP standard (in fact, probably the first NERC standard, period) that explicitly allows consideration of risk. It does this because FERC knew, when they wrote Order 829 in 2016, that there is no way for a utility to cost-effectively mitigate supply chain security risks if they don’t develop a plan that is based on risk.[i] CIP-012, currently being balloted, is also plan-based.

And this isn’t the end. The CIP Modifications Standards Drafting Team is working on (and currently has out for comment) revised CIP standards and definitions that would officially accommodate virtualized systems (as opposed to NERC’s current “don’t ask, don’t tell” policy on virtualization), as well as address other areas where the current standards are either ambiguous or just don’t reflect reality.

One key component of this effort is rewriting the most prescriptive requirements, like CIP-007 R2 and CIP-010 R1, to be objective-based. While this is being done specifically to address virtualization, I am sure all NERC entities will welcome this development, whether or not they have virtualized systems in their OT environment. That’s the good news. What’s the bad news? The changes the SDT is discussing now are years away from coming into effect. In the meantime, everyone still has the prescriptive requirements they love so much.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post, including compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss my services, please email me at the same address so we can set up a time to talk.



[i] And I would take this one step further: There’s no way that a utility can cost-effectively mitigate any security risks except by developing a plan that is based on risk. So what about all the risks addressed by the current prescriptive CIP requirements? They are being effectively mitigated now, but at a cost that is much higher than is needed, simply because the prescriptive requirements don’t allow for any consideration of risk. More importantly, there are a number of major risks – phishing, ransomware, APTs, etc. – that aren’t addressed by CIP at all now, and there’s no appetite on FERC or NERC’s part to go through the long and painful standards development process to incorporate these risks into CIP. If all of the current prescriptive CIP requirements were replaced by a single requirement that mandated that the utility develop a plan to achieve the objectives of mitigating each of the risks currently addressed by the prescriptive requirements, as well as other risks not addressed at all now, then the utility could decide how to allocate its resources, based on risk, so that the maximum amount of total cyber risk was mitigated, given the dollars available to spend. In my opinion, this would be the ultimate solution to CIP’s problems.

Friday, November 2, 2018

Hoover Dam









I must admit that Las Vegas isn’t my favorite city. In fact, I don’t think it makes my top 100, or even my top 500. That’s why my heart sank when I heard last year that this year’s GridSecCon would be held there. I was especially disappointed because all the other GridSecCons I’ve attended have been in (or near) old historic cities with nice architecture to look at: St. Paul, MN last year; Quebec City in 2016; Philadelphia in 2015; San Antonio in 2014; and Jacksonville, FL in 2013 (OK, Jacksonville itself doesn’t count as old and historic, but it’s not far from St. Augustine, which is definitely both!).

However, there was one really good thing about Las Vegas: It’s very near Hoover Dam, which I’d never seen. And since there was a conference tour scheduled for Friday (there are always tours on Friday morning of the conference, to power plants, substations, etc.), I signed up for that early.

The great thing about this tour was that it was for power industry insiders and was set up by WECC, who co-ran the conference and of course audits the US Bureau of Reclamation, which runs the dam. This proved very fortunate in three ways:

  1. We got a good briefing on the dam before our tour, and during the tour we had two longtime workers who encouraged us to ask “any dam question you want”. We heard a lot of interesting stories about the construction, including the poignant fact that, of the 112 deaths associated with it, the first and the last were a father and his son.
  2. You have to take an elevator down into the dam for the tour. The first elevators took us to the level where all the generators are visible, and we could walk among them – quite neat, of course. But there’s a second, smaller set of elevators that goes down to the waterline level, where you can walk out along the base of the dam. That was very cool indeed; this second level isn’t available on public tours.
  3. There are two elevators from the top of the dam. One is for the public tours, and had what looked like an hour’s wait when we were there. The other is for the workers; we took that one, thus saving a whole hour!

It’s obviously quite an engineering marvel (I love the figure we were quoted for throughput: 92,000 gallons per second go through each half of the dam – there are identical halves, one in Nevada and the other in Arizona, since the state line runs down the middle of the river – meaning the total is about 184,000 gallons per second!).

But what I found most impressive was how much the people we talked to really believed in what the dam was meant to do. Irrigation and flood control were its biggest purposes, of course, but the second was bringing electricity to millions of people, many of whom hadn’t had it before. And a third purpose is evident from its construction period, 1931-1936, the depths of the Depression: it gave jobs to many thousands of people (almost all men, I’m sure, and virtually no minorities – there were limits to the idea of progressivism back then). In an era when government is often seen as at best ineffective and at worst destructive, this is a government project that can truly be said to have done a huge amount of good. And I suspect there are one or two more like it…

I want to thank Brandy Daniels of WECC, who organized the tour and provided the group picture.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.

Monday, October 22, 2018

A good news article on CIP-013




I have pointed out many times in this blog that by far the best coverage of cyber security issues in the energy industry is that provided by the web-based Energy and Environment News. Unfortunately, the subscription cost for that is out of the range of us non-one-percenters, but you should look into having your organization subscribe (not just for cyber, but all energy news).


So it’s not surprising that E&E News provided the only coverage I have seen so far of FERC’s approval of CIP-013 last week. I recommend you read this article. If you’re asked to sign up for a free trial subscription, it’s worth doing, since they have good articles every day, with cyber articles about the power industry usually at least a couple of times a week – and at the very least you’ll be able to read all the articles while your trial lasts.

In my last post (on Thursday, the day FERC approved CIP-013) I provided a brief summary of FERC’s Order 850 and said I’d have more to say after I had time to read it carefully over the weekend. Well, guess what? What I have to say now isn’t terribly different from what I said in my brief summary:

  1. FERC left the implementation period at 18 months, rather than order it be shortened to 12 months, as they had suggested in their NOPR in January.
  2. As they also suggested in January, they ordered that NERC add Medium and High impact Electronic Access Control and Monitoring Systems to the applicability of the standard, beyond the current Medium and High impact BES Cyber Systems. They are giving NERC 24 months to do this, which is twice as long as they gave NERC to develop the entire standard in the first place (however, when I say “They”, I need to point out that there is only one Commissioner still in office from when FERC issued Order 829 in July 2016. And that Commissioner – Cheryl LaFleur – dissented from the Order, since she very rightly didn’t believe that one year was enough time for NERC to do a good job of drafting the standard and going through the long and politically fraught approval process).
  3. Regarding the other items that FERC suggested in their NOPR should be considered for applicability in CIP-013 – Physical Access Control Systems, Protected Cyber Assets and Low impact BES Cyber Systems – FERC said on Thursday that they will hold any decision until they see the final version of NERC’s supply chain security study, which is due early next year.
  4. To be honest, the most interesting part of Order 850 was at the end, in paragraphs 78 and 79. Paragraph 78 discussed comments FERC received about the meaning of “vendor”; FERC’s answer mistakenly said that NERC had defined the term. Actually, the story behind that is a lot more nuanced, and points to a larger problem with the NERC CIP standards in general. Since I specialize in Larger Problems in this blog, I will dig into this later, although I have a number of other posts in the queue before that can happen.
  5. Paragraph 79 was sparked by comments suggesting that NERC entities would be on the hook for deficiencies in cyber security practices by their vendors. FERC correctly pointed out that the standard specifically states that entities won’t be on the hook, although they are still responsible for mitigating the risk that the vendor presumably didn’t mitigate. For example, if your vendor agrees (in contract language or just in a letter) that they will help you mitigate new vulnerabilities that affect their products and then doesn’t do that in one case, you still have to take other steps to mitigate the risk caused by that new vulnerability.
But, through work I am currently doing with a vendor to the power industry, I’ve come to see that there’s a very neat, elegant solution to the problem of obligating vendors to take certain steps - and penalizing them if they don’t. What is really cool, though, is that this is also a solution to the vendors’ problem (which I mentioned in the E&E News article) that some NERC entities are downloading contract language from various sources on the internet and sending it to the vendors, demanding that they include that in their contracts. Of course, there’s no way the vendor can deal with this big variety of requests. My solution will also solve that problem. Not bad – two big problems nailed with one solution! Next up: a neat single solution for the two big problems of global poverty and people who talk loudly on their cell phone on trains.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013; we also work with security product or service vendors that need help articulating their message to the power industry. To discuss this, you can email me at the same address.