Wednesday, September 30, 2020

Another good example of why everybody needs SBOMs


I recently wrote a post that gave a good example of how software bills of materials can make your control systems (and other systems, of course) more secure, by allowing you to learn of vulnerabilities that apply to components embedded in the software you use. Because the developer of the software you’re running might – intentionally or unintentionally – fail to inform you of a vulnerability in one of their components, having an SBOM lets you proactively reach out to the supplier and – very politely, of course – ask when they will patch the vulnerability, or otherwise provide a mitigation for it.

A few days after I wrote that post, I saw in the weekly newsletter of Protect our Power (which BTW provides a great list of recent articles and posts of interest to people involved or concerned with protecting the grid against cyberattacks) a link to this article, which describes a set of vulnerabilities recently identified in CodeMeter, a software component sold by Wibu-Systems. The article says the component is “licensed by many of the top industrial control system (ICS) software vendors, including Rockwell Automation and Siemens. CodeMeter gives these companies tools to bolster security, help with licensing models, and protect against piracy or reverse-engineering.” At least one of the vulnerabilities has a CVSS v3 score of ten (out of ten), the critical level.

What most caught my eye in this article were these two paragraphs:

According to ICS-CERT, Wibu-Systems recommends that users update to the latest version of the CodeMeter Runtime (version 7.10). Affected vendors like Rockwell and Siemens have released their own security advisories, but researchers warn that, due to CodeMeter being integrated into many leading ICS products, users may be unaware this vulnerable third-party component is running in their environment.

“CodeMeter is a widely deployed third-party tool that is integrated into numerous products; organizations may not be aware their product has CodeMeter embedded, for example, or may not have a readily available update mechanism,” warned researchers.

In other words, you need to check with your ICS (OT) software (or perhaps hardware) suppliers to a) find out whether CodeMeter is included in their product, and if so b) ask what they’re going to do to fix this problem. But if you had a software bill of materials for each piece of software in your environment, you wouldn’t need step a) at all: the SBOM itself would tell you whether the component is included in one of your products. If it is, you would still need to do b).
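To make this concrete, here’s a minimal sketch of what that lookup could look like once you have SBOMs in hand. The JSON layout and file naming are my own assumptions for illustration (real SBOM formats like SPDX and CycloneDX carry many more fields), but the idea is the same: one query across your whole software inventory, instead of one phone call per supplier.

# find_component.py - a minimal sketch, assuming one JSON SBOM per product
# in ./sboms/, each shaped like {"product": "...", "components":
# [{"name": "...", "version": "..."}, ...]}. This layout is illustrative,
# not an official SBOM schema.
import json
from pathlib import Path

def products_containing(component_name: str, sbom_dir: str = "sboms"):
    """Return (product, version) pairs for every SBOM listing the component."""
    hits = []
    for sbom_file in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(sbom_file.read_text())
        for comp in sbom.get("components", []):
            if comp["name"].lower() == component_name.lower():
                hits.append((sbom["product"], comp.get("version", "unknown")))
    return hits

# "Is CodeMeter anywhere in my environment, and in which versions?"
for product, version in products_containing("CodeMeter"):
    print(f"{product}: CodeMeter {version}")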

This is just another reason to start asking for SBOMs from all of your software suppliers. Although my guess is in 2-3 years you won’t have to ask – you’ll receive an SBOM with the software. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, September 28, 2020

What could possibly go wrong?

 If you're looking for my pandemic posts, go here.

Two weeks ago, I put up a post that expressed pessimism that Medium and High impact BES Cyber Systems will be allowed in the cloud anytime soon, since doing so will require a complete rewrite of the NERC CIP standards.

Last week, I wrote about the often-repeated hope that somehow, someway FedRAMP compliance on the part of a cloud provider would be taken as evidence of the provider’s compliance with some or all of the standards, thus saving the NERC community the inconvenience of redrafting them from the start. I agreed that FedRAMP might indeed end up being taken as evidence of compliance with the CIP standards – just not the ones that are in place now.

None of the Measures mention anything about FedRAMP or the cloud, so at a minimum the Measures section for most requirements would have to be rewritten to say FedRAMP certification is evidence of compliance. But if that were done – changing the Measures without touching the requirements themselves – every NERC entity with Medium or High impact BCS would immediately do their best to move their entire OT infrastructure into the cloud, since then their required CIP compliance evidence  would effectively be reduced to one sentence: “We have transferred all of our BES Cyber Systems to XYZ Cloud Provider, who is FedRAMP compliant.” A little too good to be true, no?

The CIP requirements are going to have to be rewritten so they’re all risk-based. Then all of the standards would essentially be like CIP-013: You need to identify and assess the risks in a particular area of risk (unpatched software vulnerabilities, malicious insiders, vendor remote access, etc.), then mitigate them. Even if your BCS were all in the cloud, you would still have to show how you identified and assessed the risks, like all entities would. However, your evidence that most risks had been mitigated would be the fact that the cloud provider was FedRAMP compliant, and specifically that they had “passed” certification for each of those areas of risk (and if it turns out that FedRAMP doesn’t ask questions that specifically address the risks you’ve identified, you would need to provide other evidence from the cloud provider or from your network, showing that the risk had been mitigated).

So if this idea were to move forward, the drafting team would first need to redraft all of the CIP standards along something like the lines I’ve just described. Will that be enough? In other words, if we were to turn all of the current CIP requirements into risk-based ones, and if FedRAMP were to be accepted as evidence of compliance with most or all of them, would that be all it takes for NERC to say that BCS can safely be placed in the cloud?

I contend the answer to that is no. Why do I say this (other than the fact that I obviously enjoy being contrarian)? Because there are a number of risks that arise only from the cloud, that we’re just beginning to learn about – sometimes the hard way. These risks of course aren’t addressed in CIP, and they just as certainly aren’t addressed in FedRAMP either. The drafting team is going to have to take a long, hard look at what these risks are and how they can be mitigated. After they have done that, the team – and the NERC ballot body, the NERC Board of Trustees, and FERC – will also need to be satisfied that the risks they’ve identified, that are unique to the cloud, can be mitigated in some way by the cloud providers, by the NERC entities, or both. If they can’t identify a way to mitigate some of these new risks, and if they think these are significant risks, then they shouldn’t allow BCS to be implemented in the cloud.

What are examples of these risks that apply only to cloud providers? I’ve written about two of them, and I’m sure there are others. Below is my discussion of the first of these. I’ll discuss the second in my (hopefully) next post.

The Paige Thompson Memorial Risk

First and foremost, there’s the risk that was exposed by Paige Thompson, the former AWS employee who engineered the Capital One breach. Since I think there’s a lot of misunderstanding about what the real risk is in this case, here are the most important points I made in my two main posts on this topic last year (here and here):

1.      Paige Thompson was a technical person who had been fired by AWS three years before this breach was discovered.

2.      She didn’t just breach Capital One’s systems in the AWS cloud; she bragged online that she had breached 30 other AWS customers’ systems. It doesn’t seem she took much if anything from those customers – she seems to have been motivated mainly by a desire for revenge on AWS for firing her (and even though she stole a lot of data from Capital One, she didn’t seem to try to monetize it by selling it on the dark web).

3.      Of course, AWS and other cloud providers deliberately leave security up to each customer - although I’m sure they’ll take responsibility for it if the customer is willing to pay something extra. So technically, the Capital One breach is Capital One’s fault, and the breaches of the other 30 customers are their fault. In fact, AWS initially blamed it on C1 (and C1 accepted that blame, which might have given one or two of their lawyers some apoplexy).

4.      However, Ms. Thompson had also bragged online that her success in penetrating at least 31 AWS customers was due to one specific reason: The customers don’t understand that there’s a big difference between configuring a firewall to protect an on-premises network and configuring a firewall in the AWS cloud. Specifically, the organization needs to understand AWS’ “metadata” service, and she bragged that “everybody gets this wrong”; she said this was how she was able to penetrate so many organizations’ AWS environments.

5.      But as I said in one of the above-linked posts, “If any system is so difficult to figure out that 30 companies don’t get it right (plus God knows how many other Amazon customers who weren’t lucky enough to be on Ms. Thompson’s target list), it seems to me (rude unlettered blogger that I am) that Amazon might look for a common cause for all of these errors, beyond pure stupidity on the customers’ part.” Either that or no longer accept customers who don’t reach a certain IQ level.

So the risk that a disgruntled former cloud provider employee will use their knowledge to break into customer environments in the cloud is a real one (and remember, this is very different from an insider threat – three years after she was fired, Paige Thompson certainly wasn’t an insider anymore). But “customers are too stupid to configure their firewalls correctly” isn’t the real risk; the real risk is a security-critical service that’s too easy to get wrong.
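To see why the metadata service trips so many people up, here’s a sketch of what’s at stake. The 169.254.169.254 address is AWS’s well-known instance metadata endpoint; the rest is simplified for illustration. Any process inside an instance can ask that endpoint for the temporary credentials of the instance’s IAM role – which is exactly why a firewall or WAF that can be tricked into forwarding an attacker’s request to that address (server-side request forgery) hands the attacker those credentials.

# metadata_peek.py - a minimal sketch of what the AWS instance metadata
# service returns when queried from (or through) an EC2 instance. Run it
# inside an instance with an attached IAM role; illustrative only.
from urllib.request import urlopen

BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# Step 1: the endpoint lists the name of the IAM role attached to the instance.
role = urlopen(BASE, timeout=2).read().decode().strip()

# Step 2: asking for that role returns temporary credentials (JSON with
# AccessKeyId, SecretAccessKey and Token). An SSRF through a misconfigured
# firewall gets the same answer an inside process does.
print(urlopen(BASE + role, timeout=2).read().decode())

As I understand it, AWS’s newer IMDSv2 blunts the most naive version of this path by requiring a session token obtained via a PUT request before the service will answer – but customers have to turn that enforcement on.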

The Paige Thompson risk obviously isn’t addressed in FedRAMP, since AWS was FedRAMP certified at the time of the breach (although Ms. Thompson shouldn’t have been able to gain access to the AWS cloud – especially as an administrator – at all; the fact that she did perhaps points to some more conventional vulnerability that I presume AWS has identified and fixed by now).

How can this risk be mitigated? I don’t see any good way that a cloud provider could prevent a former employee from using that knowledge to accomplish this goal, other than somehow sucking that knowledge out of their brain when they leave (or giving the term “employee termination” a whole new meaning, as I very helpfully suggested in one of my posts last summer. However, I don’t believe that terminating your employees in this sense is an HR best practice). And obviously, no NDA is going to prevent a former employee from breaking into their former employer’s cloud and using the knowledge they gained, while working there, to hack into their customers’ environments. If they get caught as Paige did, an NDA violation is going to be the least of their problems.

The burden here is really on the cloud provider. If some service that’s essential for security is so complicated that nobody configures it properly, then a) the service needs to be redesigned so it is understandable; b) the provider needs to provide intensive mandatory training on that service for all customers; or c) for customers who still don’t understand the service after training, the provider should take responsibility for making sure their environment is properly secured.

I think the new “cloud CIP” should require NERC entities to mitigate the risk that a former employee will use their technical knowledge of the cloud infrastructure to attack BCS that the entity has implemented in the cloud, on top of requiring FedRAMP compliance. I’ll discuss the other serious cloud risk that was discovered last year (also not addressed by FedRAMP, I’m sure) in my next post.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you wondering if you’ve forgotten something for the 10/1 deadline for CIP-013 compliance? This post describes three important tasks you need to make sure you address. I’ll be glad to discuss this with you as well – just email me and we’ll set up a time to talk.

Friday, September 25, 2020

Some sobering statistics about open source software components

 If you're looking for my pandemic posts, go here.

The software development tool company Sonatype publishes an annual “State of the Software Supply Chain” report, which you can download here. This year’s report had some interesting statistics about software components – i.e. snippets of either open source or third-party software that a supplier includes in their own code, including:

1.      90% of software components are open source, as opposed to being from commercial developers. I didn’t realize this percentage was so high.

2.      11% of open source components included in software packages contain at least one vulnerability.

3.      The average piece of software contains 135 components. This is a good deal more than the 2017 figure of 73 that I included in this post.

4.      Some software products include between 2,000 and 4,000 open source components.

5.      A 2020 DevSecOps Community survey of 5,000 developers found that 21% had experienced an open source component-related breach in the past 12 months. This is down from 31% in 2018, but it’s obviously still very high.

The takeaway from this is that open source components are an important source of risk in software, including the software running on your BES Cyber Systems. What can you do about it? Can you simply decline to purchase any software that includes open source components? I’m sure there are a few suppliers that don’t use them, but you’re going to greatly restrict the options available to you.

Obviously, if the average piece of software has 135 components and 90% of them (i.e. 121) are open source, you’re going to find few software packages to buy that have zero open source components. Using components in general saves a developer a huge amount of time, since otherwise they’d have to re-invent the wheel by writing those components themselves. And of course open source components save developers a lot of money vs. commercially available components (although my guess is there’s not a lot of overlap between the two: if you want specific functionality, you usually won’t get to choose between an open source and a commercial component – it will be available in one form or the other, not both).

So if you can’t avoid this problem altogether, what can you do? You can do your best to make sure the supplier takes steps to mitigate risks due to open source components. I have identified ten risks that arise from third-party and open source components in your supplier’s software. If you agree with me that these are serious risks, you need to ask your suppliers about them in your questionnaires. For example, a question based on item 2 above is: “Upon downloading an open source component, do you scan it for vulnerabilities before incorporating it in your product? And if you find a vulnerability, do you patch or otherwise mitigate it before you incorporate the component?”
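If you’re wondering what a “yes” answer to that question looks like in practice, here’s a minimal sketch of the kind of gate a supplier could run before a component is admitted into their build. The advisory table is a stand-in for a real vulnerability feed (the component names, versions and CVE numbers below are all invented); a real pipeline would query a live source such as the NVD.

# component_gate.py - a sketch of a pre-admission check for open source
# components. ADVISORIES stands in for a real vulnerability feed; every
# name, version and CVE ID here is invented.
ADVISORIES = {
    ("examplelib", "2.4.1"): ["CVE-2020-00001"],
    ("otherlib", "1.0.0"): ["CVE-2020-00002", "CVE-2020-00003"],
}

def vet_component(name: str, version: str) -> bool:
    """Return True if the component may be incorporated as-is."""
    cves = ADVISORIES.get((name.lower(), version), [])
    if cves:
        print(f"REJECT {name} {version}: open advisories {', '.join(cves)}")
        return False  # patch, upgrade or otherwise mitigate before admitting
    print(f"ACCEPT {name} {version}: no known advisories")
    return True

vet_component("examplelib", "2.4.1")   # rejected until mitigated
vet_component("examplelib", "2.5.0")   # accepted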

And here’s another question: “Will you furnish a software bill of materials for your product, and will you update it as the product is revised?” 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you wondering if you’ve forgotten something for the 10/1 deadline for CIP-013 compliance? This post describes three important tasks you need to make sure you address. I’ll be glad to discuss this with you as well – just email me and we’ll set up a time to talk.

Wednesday, September 23, 2020

A powerful example of why we need SBOMs

 If you're looking for my pandemic posts, go here.

I’ve written several posts recently on the idea of software bills of materials. I titled those posts “Is software bill of materials the answer to all of our problems?” I must confess that this was a bit of an exaggeration. For one, I don’t think SBOMs will solve the problems of climate change or inequality, let alone the problem of teenagers who won’t listen to their parents. And they won’t even solve all of our cybersecurity problems. But what I find most exciting about them is that they offer the key to identifying a whole new class of software vulnerabilities which have always been present but have seldom been visible. Here is a great real-world example.

The example was discussed in an excellent paper (one of a number of excellent papers) produced by the Software Component Transparency initiative of the National Telecommunications and Information Administration (NTIA) of the Department of Commerce – a group, led by Dr. Allan Friedman, that is attempting to hasten the day when SBOMs will be widely available and widely used. The paper is called “Roles and Benefits for SBOM across the supply chain”.

On page 25 of that paper, the authors (members of the initiative from various industries and government agencies) point to the “Urgent/11” vulnerabilities in safety critical systems, which were brought to light in the summer of 2019. To quote the paper, “The initial vulnerability finders identified that the flaws were present in certain versions of Wind River’s VxWorks RTOS (Real Time Operating System). In response, Wind River provided thorough impact analysis of their affected versions of VxWorks.” However, it turns out that wasn’t enough, since “It was later revealed that the actual vulnerabilities were from a supplier to Wind River - named IPnet (whose creator, Swedish company Interpeak, was later acquired by Wind River).”

In other words, while VxWorks was definitely vulnerable, the source of the vulnerability was a component that Wind River had purchased and included in VxWorks. Moreover, that component, IPnet, was also included in a number of software packages from companies including ENEA, Green Hills, Microsoft, Mentor, TRON Forum, and IP Infusion. Those other products were also potentially vulnerable, but of course their users didn’t know that.

How do SBOMs fit into this story? Neither VxWorks nor any of the products from the other six companies (which are presumably competing RTOSs) is ever purchased directly by an end user; the user acquires them when they acquire a “safety critical” device (one example being an infusion pump used in hospitals). When Urgent/11 was believed to be only a VxWorks vulnerability, Wind River notified their customers (the device makers) and provided a patch to them.

But did those device makers notify their users of the vulnerability and provide the patch to them? The paper doesn’t say, but I point you to a quote from a 2017 Veracode study, that I included in the post linked at the top of this post: “A mere 52 percent of companies reported they provide security fixes to components when new security vulnerabilities are discovered.” So if you were a user of one of the devices that included VxWorks, and if those device makers followed the normal pattern, your chances were only 50/50 that you would have been told your device was vulnerable.

But let’s say you had insisted on being provided an SBOM when you purchased the product (and the Food and Drug Administration, which must approve medical devices before they can be sold in the US, announced in 2018 that it would require an SBOM before approving any new device). When you heard about the VxWorks vulnerability, knowing that it was found in many types of devices, you would have scanned the SBOM and realized VxWorks was indeed included in your device, so you were potentially vulnerable.

At that point, you would have contacted the device’s manufacturer and asked when you would have the patch for the Wind River vulnerability. You would have been told one of two things:

1.      “Fortunately, even though we use VxWorks in our product, we don’t use the module that contains the vulnerability. So you don’t have to worry.” In many cases, a vulnerability in an “upstream” component doesn’t automatically apply to the “downstream” devices or software packages that include that component. For example, when the upstream component is a library of software modules (which was probably the case with VxWorks), not all of those are always incorporated into each downstream device, meaning the device isn’t in fact vulnerable. Or you would have been told:

2.      “Oh yes, we recently received that patch. As soon as we complete testing, we will get it out to you.” At that point, you shouldn’t have hung up. You should have asked “And when will you complete the testing?” You might have heard a reply like “Um, let me see…Yes that’s scheduled to be finished next week, so you should have the patch early the following week.” And if you didn’t hang up then, you might have heard the person call out “Joe, you know that VxWorks patch we got last month? People are starting to ask about it. You have to stop working on our super duper next version, then test the patch and ship it out. I just promised it for two weeks from today…Yes, I know you had a vacation scheduled next week. Let me talk to your wife, I’ll explain this to her...”

So this is one important reason you should start asking for SBOMs: if your supplier is like half of its peers, you won’t be told about a vulnerability that might affect your systems, even though the supplier should know about it (and there’s no excuse for a software or device supplier not knowing about a vulnerability in a component of their software, unless they don’t even know what components are in their software – meaning there’s no way they can give you an SBOM. Hint: if a supplier can’t even list for you the third-party or open source components of their own software, you should find another supplier).

However, there’s another twist to this example, which points to another need. Once IPnet was determined to be the source of the vulnerability, the developer of that component presumably contacted all of their customers – including the six companies listed above – and sent them the patch. Presumably, those customers sent the patch on to their customers, the device makers. And we’ve already seen that, on average, there’s only a 50/50 chance the device makers would have passed the patch on to their end users (and of course provided support so they could install it correctly).

But what if you had asked for and received an SBOM from the maker of your device, and you read about the vulnerability in IPnet? Would you have known this vulnerability applied to your device and called the device maker about the patch (assuming they hadn’t yet sent it to you)? Not necessarily. Only if you had a “2-level” SBOM would you know there was IPnet in your device. That is, only if you knew about the components of the components in the device’s software would you have called your supplier and shamed them into sending you the patch.
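Here’s a toy illustration of the difference. The nested structure below is a drastically simplified stand-in for a real SBOM (formats like CycloneDX can nest components inside components); the device name and version numbers are invented. A one-level reading sees only VxWorks; only the recursive walk surfaces IPnet.

# sbom_depth.py - a toy two-level SBOM for a hypothetical medical device.
# Structure, names and versions are invented for illustration.
DEVICE_SBOM = {
    "name": "ExamplePump 3000",
    "components": [
        {
            "name": "VxWorks",
            "version": "6.9",
            "components": [  # the second level: components of components
                {"name": "IPnet", "version": "unknown", "components": []},
            ],
        },
    ],
}

def all_components(entry, depth=0):
    """Yield (depth, name) for every component, however deeply nested."""
    for comp in entry.get("components", []):
        yield depth + 1, comp["name"]
        yield from all_components(comp, depth + 1)

for depth, name in all_components(DEVICE_SBOM):
    print("  " * depth + name)
# Depth 1 shows only VxWorks; IPnet appears only at depth 2 - which is
# exactly why a one-level SBOM would have left you in the dark on Urgent/11.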

Does this mean you should always ask for a two-level SBOM, not just a one-level one – in other words, a list of the components of the components of the software or device that you purchased and installed? You could ask if the supplier has one, but it’s not too likely they do today, simply because the use of SBOMs is still in its infancy. But the end goal is to have every SBOM include an SBOM for each of its components as well. In fact, it wouldn’t be bad if it went another level or two beyond that, so you would know all of the components of the components of the components of the components of the software you have purchased and deployed.

Is it likely that 4- or 5-level SBOMs will be available in the near future? Definitely not. It would be a huge step if every supplier just provided one-level SBOMs, and updated them whenever something changed in their software, let alone went further than that. But think about it: If all suppliers (including open source communities) provided complete one-level SBOMs (meaning every component was listed), then almost all SBOMs could easily go down many layers. The reason I say this is that suppliers of components will ex hypothesi all provide SBOMs, meaning the only component SBOMs that wouldn’t be available to your supplier would be those for components whose suppliers have gone out of business, leaving their products unsupported.[i]

But let’s forget about universality at the moment. When will even a majority of software suppliers provide SBOMs? I don’t know when, but I will point out that Gartner, in a report titled Technology Insight for Software Composition Analysis, said “By 2024, the provision of a detailed, regularly updated software bill of materials by software vendors will be a non-negotiable requirement for at least half of enterprise software buyers, up from less than 5% in 2019.” In other words, when customers start requiring SBOMs, the suppliers will provide them.

So now you know what you need to do.

 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you wondering if you’ve forgotten something for the 10/1 deadline for CIP-013 compliance? This post describes three important tasks you need to make sure you address. I’ll be glad to discuss this with you as well – just email me and we’ll set up a time to talk.


[i] At the moment, I’m leaving out the fact that it’s not usually necessary to have the supplier of a software package develop an SBOM. I’m told it’s not hard (although it would be for me!) to generate an SBOM for a software package just from its installation files – in fact, a number of organizations are doing that now. However, an SBOM from the supplier is preferred, since the supplier knows things like each component’s supplier and that supplier’s contact information; these probably can’t be derived from the installation files.

Tuesday, September 22, 2020

The SCWG webinars are (finally) up!

 If you're looking for my pandemic posts, go here.

You may recall that in March, April and early May, the NERC RSTC Supply Chain Working Group put on a series of webinars based on the supply chain security guidelines we had developed (mostly) in 2019. About four of the recordings were put up right away, but because of technical problems the rest were just put up last week. You can find them all here, along with the guidelines documents themselves, and the webinar slides.

I thought all of the webinars were good, but here are my two favorites:

·        “Risk Considerations for Open Source Software”, by George Masters of SEL.

·        “Vendor Incident Response”, by Steven Briggs of TVA

And I would be remiss (my boss would never forgive me!) if I didn’t mention the two webinars I did. I don’t consider them the best, but they were decent. Note that my ideas have evolved on both topics, although not drastically:

·        “Cyber Security Risk Management Lifecycle”

·        “Vendor Risk Management Lifecycle”

 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you wondering if you’ve forgotten something for the 10/1 deadline for CIP-013 compliance? This post describes three important tasks you need to make sure you address. I’ll be glad to discuss this with you as well – just email me and we’ll set up a time to talk.

Sunday, September 20, 2020

What about FedRAMP?


If you're looking for my pandemic posts, go here.

After last week’s post on CIP and the cloud – which painted a pretty gloomy picture of the likelihood that BES Cyber Systems will be able to be “legally” (as far as CIP is concerned) placed in the cloud in the near future – a good friend of mine, who is CISO of a large NERC entity, dropped me an email that led to a good exchange in which we discussed three major points. This post discusses one of those points. I’ll discuss the other two in subsequent posts.

My friend started off by pointing out that a FedRAMP certification could easily be seen as evidence of compliance with a number of the CIP-003 through CIP-011 requirements, since it’s doubtful there’s any requirement in those standards that isn’t addressed in some way in FedRAMP already. He also noted that at least a couple groups within NERC or one of the Regions have engaged with one of the major cloud providers, presumably to see how perhaps some of the FedRAMP controls might be accepted as compliance evidence for at least some CIP risks.

This idea has been discussed for a while, especially within the Compliance Input Working Group (CIWG) of the late, lamented NERC CIPC (which was this year swallowed whole – and thoroughly digested, it seems – by the new Reliability and Security Technical Committee or RSTC). However, it hasn’t been discussed in the context of BCS in the cloud – just of BCS Information (BCSI) in the cloud.

In fact, the CIWG discussed this idea at least a couple of years ago, when it started considering how the CIP standards could be modified to allow BCSI to be stored in the cloud. As I discussed briefly in the previous post, the drafting team that was later assigned the task of making this happen has focused on a different solution to the problem, which I prefer because it takes a more comprehensive, risk-based approach. But I believe the immediate BCSI problem could also have been solved by changing the Measures for the requirements in question, so that FedRAMP certification would be accepted as evidence of compliance.

However, my previous post pointed out that the problem of BCSI in the cloud is very different from that of BES Cyber Systems themselves in the cloud – and the latter simply has no good solution within the current CIP standards. The biggest problem is that so many of the CIP-003 through CIP-011 requirements would apply either to individual cloud employees or to individual cloud systems, and there must be documentation of every instance when a control was applied. There’s simply no way any cloud provider could ever provide the required evidence without breaking their business model.

I suppose that it might be possible to “solve” this problem by kind of “forking” the Measures sections of the requirements. In other words, there would be two ways an entity could demonstrate compliance with each requirement. One is to have the documentation currently required. To use the example of CIP-007 R2.2 compliance, this means evidence that, for every piece of software installed on any Medium or High impact BCS or PCA, the entity “contacted” the patch source to determine whether a new security patch has been issued in the last 35 days (and of course, this evidence needs to be available for every piece of software – in fact, every version of every piece of software used on a BCS or PCA - in scope, for every month of the audit period).

The other fork would be for the NERC entity to show that the cloud provider where the BCS was implemented has a FedRAMP certification, and beyond that, they have a passing grade (or whatever it’s called) for the FedRAMP requirement that “maps” to the CIP requirement in question.  Now, I want to ask you (and I request you answer honestly): If for example you have 1,000 pieces of software within your ESPs, would you find it easier to:

1.      Gather 1,000 pieces of evidence that you had contacted a patch source every month, with the result that you will need to have those 36,000 pieces of evidence all indexed and available for your next audit (which of course will be roughly 36 months after your last one) – and of course, woe betide you if you’re missing more than one or two of those 36,000 pieces of evidence (yea verily, great will be the weeping, wailing and gnashing of teeth of the poor souls condemned to this hell); or

2.      Just get the cloud provider to copy the section of their FedRAMP certification that shows they have in place controls somewhat similar to those in CIP-007 R2.2 (OK, so it might be a little more complicated than that. But certainly nothing like the first option)?

If you said number 2, I’m sure you’ll agree with 99.9% of the other readers – in fact, I’d seriously wonder about anyone who said item 1 might be easier (and remember, if FedRAMP were to be included in the CIP Measures in this way, it would only have been with the prior agreement of the major cloud providers that they would provide the required evidence. In fact, they could just provide it once to each Region, rather than make every entity in the Region obtain it and submit it. So this might even be a zero-effort option).
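For concreteness, that second fork would ultimately rest on a crosswalk table along these lines. Since FedRAMP’s baselines are built on NIST SP 800-53, the natural mapping targets are 800-53 controls – but the pairings below are my own illustrative guesses, not an official crosswalk.

# cip_fedramp_map.py - an unofficial sketch of the kind of crosswalk the
# second fork would need. The control pairings are my guesses, for
# illustration only.
CROSSWALK = {
    "CIP-007 R2 (patch management)": [
        "SI-2 Flaw Remediation", "RA-5 Vulnerability Scanning"],
    "CIP-004 R4 (access management)": ["AC-2 Account Management"],
    "CIP-011 R1 (information protection)": [
        "SC-28 Protection of Information at Rest"],
}

def evidence_for(cip_requirement: str) -> str:
    controls = CROSSWALK.get(cip_requirement)
    if not controls:
        return "No mapped control - the entity must supply its own evidence."
    return "Cloud provider's FedRAMP results for: " + ", ".join(controls)

print(evidence_for("CIP-007 R2 (patch management)"))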

What will be the effect of changing the Measures section of each CIP requirement to include this FedRAMP “get out of jail free” card? You got it: as soon as it was clear these changes had been approved by FERC, just about every CIP entity with Medium or High impact BCS would be on the phone to their friendly neighborhood cloud provider, making arrangements to transfer as many of their BES Cyber Systems as possible into the cloud, probably the day after the implementation date for the revised standards.

And this, Dear Reader, is why I don’t think the idea of NERC simply waving its hands and declaring that FedRAMP certification is evidence of CIP compliance is really going to be successful. Sure, it will enable those entities that already wanted to do this to move their BCS to the cloud. But it would also effectively force all other entities to do their darndest to move their BCS to the cloud as well, whether or not they had security or other concerns about doing so. And believe it or not, this wouldn’t be good for the cybersecurity of the grid.

In other words, changing the CIP standards so that BCS can be installed in the cloud doesn’t have an easy solution. Two hard questions need to be addressed first:

1.      How can the CIP standards be rewritten so that they don’t require evidence based on individual instances of compliance – i.e. evidence that controls were applied for particular systems or for particular individuals? The point is that it won’t do to fix this problem only for NERC entities that move their BCS to cloud providers, while leaving it unfixed for entities that aren’t inclined to pick up their BCS and move as many as possible into the cloud as soon as possible, without a full consideration of the risks. Unless you want to make the latter as hard to find as the passenger pigeon or dodo bird, of course. I gave some brief hints at the answer to this question in my previous post.

2.      Are there any serious cyber risks that apply to cloud providers, that aren’t addressed either by CIP or by FedRAMP (spoiler alert: I think the answer is yes, as discussed in this post, and this one)? If so, doesn’t that mean there might need to be some new CIP requirements before the Good Housekeeping Seal of Approval is bestowed on the cloud providers, FedRAMP or no FedRAMP?

I will discuss this second question in the next of the three posts in this series, coming soon to a blog near you.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you wondering if you’ve forgotten something for the 10/1 deadline for CIP-013 compliance? This post describes three important tasks you need to make sure you address. I’ll be glad to discuss this with you as well – just email me and we’ll set up a time to talk.

Saturday, September 19, 2020

An emergency directive from CISA

 

Kevin Perry forwarded me this emergency directive from CISA about the Netlogon Remote Protocol vulnerability. He says if you haven’t mitigated this vulnerability, you need to do it asap.

CISA Releases Emergency Directive on Microsoft Windows Netlogon Remote Protocol


Original release date: September 18, 2020

The Cybersecurity and Infrastructure Security Agency (CISA) has released Emergency Directive (ED) 20-04 addressing a critical vulnerability— CVE-2020-1472—affecting Microsoft Windows Netlogon Remote Protocol. An unauthenticated attacker with network access to a domain controller could exploit this vulnerability to compromise all Active Directory identity services.

Earlier this month, exploit code for this vulnerability was publicly released. Given the nature of the exploit and documented adversary behavior, CISA assumes active exploitation of this vulnerability is occurring in the wild.

ED 20-04 applies to Executive Branch departments and agencies; however, CISA strongly recommends state and local governments, the private sector, and others patch this critical vulnerability as soon as possible.
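If you want a quick read on where a domain controller stands, here’s a minimal sketch. It assumes, based on my reading of Microsoft’s deployment guidance for CVE-2020-1472, that enforcement mode is controlled by the FullSecureChannelProtection value under the Netlogon Parameters registry key – verify that against the current guidance before relying on it, and treat the output as a starting point, not an audit.

# netlogon_check.py - a sketch for Windows domain controllers. The registry
# value name is taken from Microsoft's CVE-2020-1472 deployment guidance,
# as I understand it; verify before relying on this.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\Netlogon\Parameters"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        value, _ = winreg.QueryValueEx(key, "FullSecureChannelProtection")
    if value == 1:
        print("Enforcement mode is ON: vulnerable Netlogon connections refused.")
    else:
        print("Value present but not 1: enforcement mode is OFF.")
except FileNotFoundError:
    print("Value not set: patch may be missing, or enforcement not forced on.")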

 

 

Friday, September 18, 2020

A death from a ransomware attack

If you're looking for my pandemic posts, go here.

Kevin Perry today forwarded me this article that tells how a patient died due to a ransomware attack on a major hospital in Dusseldorf. Because 30 servers at the hospital had been encrypted in the attack, a woman who needed urgent admission couldn’t be admitted, and she died on her way to another hospital.

Ironically, it seems the attackers thought they’d attacked the university with which the hospital is affiliated. When the contact listed in the extortion note was told that they’d actually attacked a hospital, the attackers immediately provided the encryption key. But it was too late for the woman.

 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you wondering if you’ve forgotten something for the 10/1 deadline for CIP-013 compliance? This post describes three important tasks you need to make sure you address. I’ll be glad to discuss this with you as well – just email me and we’ll set up a time to talk.

Wednesday, September 16, 2020

Which way to the cloud?

This post is based on my keynote presentation for Tripwire’s Energy Working Group on August 18 – the first time that event was held virtually. I want to thank Tripwire for inviting me to present; it was a great experience.

Introduction

In the last year or two, I’ve had a lot of conversations with people in the NERC community who wonder where things stand with the cloud and the NERC CIP standards. There’s a lot of interest in this topic, simply because just about every electric utility is using the cloud in some way on the IT side of the house, while at the same time most of them are holding back on the OT side because they aren’t sure what’s allowed and what’s not.

I’ll discuss what is and isn’t allowed today, but the bigger question is “When will this situation change? When will we be able to use the cloud as freely on the OT side of the house as we do on the IT side now?” I’d like to first go through the situation as I see it now, and then discuss where (if anywhere) we go from here.

BCSI in the cloud

The first thing to keep in mind is that the question of whether NERC entities are able to put BES Cyber System Information (BCSI) in the cloud is very different from the question of whether they can put BES Cyber Systems (BCS) themselves in the cloud. Both cases are “illegal” from the NERC CIP point of view now, but one is borderline legal while the other is very much on the wrong side of the law.

Of course, the borderline legal case is BCSI in the cloud. Many NERC entities now have BCSI for Medium or High impact BES Cyber Systems in the cloud, usually because it’s stored there by a cloud-based app like configuration management or vulnerability management. I know at least some Regions are explicitly allowing their entities to do this, and I would guess all of the others will allow it as well – although if you have any doubts, you should raise this issue with your Region before you take the plunge. And of course, you need to take steps to mitigate risk, especially by encrypting the BCSI both at rest in the cloud and in transit to and from the cloud.

The primary reason that BCSI in the cloud isn’t “legal” now is that no cloud provider could provide you the evidence you need to comply with three requirement parts in CIP-004, as Kevin Perry pointed out for me in this long-ago post in 2017; his comments remain as correct today as they were then. But the difference today is that help is on the way: A drafting team is in the middle of balloting changes to CIP-004 and CIP-011 that seem to be a very creative (and risk-based!) way to allow BCSI to safely reside in the cloud, without requiring – for example - that AWS document that they’ve removed physical and logical access to any server housing your BCSI within a day, whenever any one of their 200,000 (or whatever the number is) data center employees is terminated. So even if you’re hesitant to have BCSI in the cloud now, it will be perfectly legal within probably two years, if not sooner.

BCS in the cloud

However, the situation is very different when we start to talk about putting BES Cyber Systems themselves (as opposed to just information about them) in the cloud, although doing this wouldn’t be hard, since for example there are a number of cloud-based SCADA offerings today. There are two major problems that arise when we talk about putting BCS themselves in the cloud.

The first problem is that there’s no way to designate a cloud-based BES Cyber System in CIP-002 R1, which is of course the requirement where the NERC entity has to identify and classify all of its BCS. It’s easy to see the reason for this if you follow the logical chain of steps for identifying and classifying BCS, using CIP-002-5.1a R1.1 and Attachment 1:

1.      The first step is to identify Cyber Assets. These are defined as “Programmable electronic devices”, and as of now a “device” is a physical device, not a virtual one (that will change whenever the changes to allow virtualization – currently being drafted by the CIP Modifications SDT - are enacted, but as I’ll explain later, those changes don’t affect the question of whether BCS can be based in the cloud at all).

2.      Next is to identify BES Cyber Assets, defined roughly as Cyber Assets that, if destroyed or compromised, could have an impact on the Bulk Electric System (BES) within 15 minutes.

3.      Finally, create BES Cyber Systems, which are groupings of one or more BES Cyber Assets.

So a BCS is ultimately composed of Cyber Assets, which are physical devices (my definition of a physical device is one that will hurt if you drop it on your foot. A cloud-based Cyber Asset won’t hurt if you drop it on your foot, if you could even figure out how to hold and drop it in the first place). Ergo, a BES Cyber System can’t be based in the cloud.
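As a toy model of that logical chain (the asset list and the 15-minute flag are invented, and the real determination is an engineering judgment, not a boolean):

# bcs_chain.py - a toy model of the CIP-002 identification chain:
# Cyber Asset -> BES Cyber Asset -> BES Cyber System. All data invented.
from dataclasses import dataclass

@dataclass
class CyberAsset:
    name: str
    is_physical_device: bool        # today's definition: physical devices only
    bes_impact_within_15_min: bool  # the (simplified) BES Cyber Asset test

inventory = [
    CyberAsset("EMS server", True, True),
    CyberAsset("Historian", True, False),
    CyberAsset("Cloud SCADA instance", False, True),  # fails the "device" test
]

# BES Cyber Assets: physical Cyber Assets with a 15-minute BES impact.
bcas = [a for a in inventory
        if a.is_physical_device and a.bes_impact_within_15_min]

# A BES Cyber System is a grouping of one or more BES Cyber Assets.
bes_cyber_system = {"name": "Control Center BCS",
                    "members": [a.name for a in bcas]}
print(bes_cyber_system)
# Note that the cloud instance can never enter the chain, whatever its
# BES impact - which is exactly the first problem described above.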

Second, even if you could somehow identify BCS in the cloud, there would be no good way to document compliance with a number of CIP requirements. This includes all requirements in CIP-004, CIP-005, CIP-006, CIP-007 (except for R3), CIP-010 (except perhaps for R4) and CIP-011 R2. All of these requirements mandate actions on particular physical devices, and those actions need to be documented in every instance. A cloud provider couldn’t document which physical servers your BCS is located on at a particular point in time, let alone over a 3-year audit period. It would break their business model if they were required to do this.

The bottom line is that Medium and High impact BCS can’t be located in the cloud if an entity wants to be compliant with CIP-002-5.1a through CIP-011-2, or with CIP-013 for that matter. So what’s the solution? Just as there are two CIP problems that currently prevent BCS from being located in the cloud, there are two components to the solution. They are closely related.

The solution – first part

The first problem is that currently a BES Cyber System can only be a collection of physical devices. The current CIP Modifications standards drafting team saw the solution to this problem in June 2018, when they proposed the idea that the terms Cyber Asset and BES Cyber Asset be dropped. BCS would now be the fundamental term for determining scope of the NERC CIP standards (and the SDT proposed changing the BCS definition so it included what’s in the BCA definition now: 15-minute impact, etc).

This change would have allowed virtual machines to be subject to CIP. With this approach, hardware devices would no longer be the focus of the CIP standards. A BCS could be based anywhere, including at cloud data centers. The individual hardware would never have to be identified, since Cyber Asset and BES Cyber Asset would no longer have any meaning in the CIP standards. BES Cyber System would be the fundamental unit of compliance.

Of course, the SDT developed this solution to allow virtual devices to be BES Cyber Systems. But this would have the same effect for systems based in the cloud: If their loss, misuse, etc. could cause a BES impact within 15 minutes, they could legitimately be BCS. This means that in 2018, the SDT unintentionally solved the first problem preventing BCS in the cloud.

The solution – second part

However, the SDT knew another step was needed. They realized there were some CIP requirements, like CIP-007 R1 and R2 and CIP-010 R1, that were too prescriptive and hardware-focused to work in this new system. They decided to rewrite those in a risk-based format.

Of course, these requirements would still cover the same subject matter, including what happens on hardware. But they would require a risk management plan, as CIP-013 R1 does, not a specific set of actions on specific hardware. Most importantly, the entity wouldn’t have to document that it had performed particular actions on every Medium or High impact BCS component (BCA or PCA). It would just have to show that it developed a risk management plan and implemented it.

For example, take CIP-007 R2 (please!). I don’t remember what exactly the drafting team did with this requirement, but here is how I would rewrite it in a risk-based format:

As you probably know, CIP-007 R2 (patch management) requires a set of mitigations that address the risks posed by unpatched software vulnerabilities. R2 requires the entity to 1) check every 35 days for new security patches for every piece of software in its ESPs; and 2) determine whether the patch is applicable to one of their systems. If it is, then 3) the entity needs to apply the patch within another 35 days. If the patch can’t be applied in 35 days, then 4) the entity needs to develop a mitigation plan, which itself needs to be reviewed every 35 days…etc. A cloud provider can’t do this for even one of their servers, let alone hundreds or thousands. It would break their business model.
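To get a feel for the documentation burden, here’s a minimal sketch of the tracking that scheme implies for a single piece of software (the product name and dates are invented; the 35-day windows come from the requirement):

# patch_cycle.py - a sketch of the CIP-007 R2 35-day treadmill for one
# piece of software. Product name and dates are invented for illustration.
from datetime import date, timedelta

WINDOW = timedelta(days=35)

class PatchRecord:
    def __init__(self, software, last_source_check,
                 patch_applicable=False, patch_applied=False,
                 mitigation_plan=None):
        self.software = software
        self.last_source_check = last_source_check
        self.patch_applicable = patch_applicable
        self.patch_applied = patch_applied
        self.mitigation_plan = mitigation_plan

    def compliance_gaps(self, today):
        """Return the evidence gaps an auditor would flag as of 'today'."""
        gaps = []
        if today - self.last_source_check > WINDOW:
            gaps.append("patch source not checked within 35 days")
        if self.patch_applicable and not self.patch_applied \
                and self.mitigation_plan is None:
            gaps.append("applicable patch neither applied nor covered by a plan")
        return gaps

rec = PatchRecord("ExampleSCADA 4.2", last_source_check=date(2020, 8, 1),
                  patch_applicable=True)
print(rec.compliance_gaps(date(2020, 9, 30)))
# Now multiply this record by every version of every piece of software in
# every ESP, every 35 days, for a three-year audit period.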

How would I write this as a risk-based requirement? First I need to identify the risk the requirement addresses. Patch management isn’t a risk – it’s one mitigation for the risk of unpatched software vulnerabilities. If we were to replace CIP-007 R2 with a requirement for managing risks due to unpatched software vulnerabilities, it might read something like CIP-013 R1 does: “Develop and implement a risk management plan to identify, assess and mitigate risks arising from unpatched vulnerabilities in software or firmware installed on BES Cyber Systems.”

However, there are actually four types of risk due to unpatched software vulnerabilities; the current CIP-007 R2 only addresses one of them: the risk of vulnerabilities for which a patch has been developed but not applied. The obvious mitigation for this risk is to apply the patch! So this is the risk that is mitigated by the current CIP-007 R2.

The three other types of risk due to unpatched software vulnerabilities are:

1.      Risks due to vulnerabilities that have been identified in software or firmware, for which no patch will be released in the near term (e.g. the supplier has informed you they will not have a patch soon, for whatever reason. Or perhaps the supplier has gone out of business, or they have discontinued support for the product).

2.      Risks due to vulnerabilities that have been identified in open source or third-party software components included in a software package you have purchased, but for which no patch will be available soon.

3.      Risks due to vulnerabilities in custom software developed by or for your organization.

My new CIP-007 R2 would address all these risks, not just the risk of vulnerabilities for which a patch has been issued. For each of these types of risk, the entity would need to:

a)      Decide whether this is a risk that could ever have more than a low likelihood of being realized. In the case of number 1, the risk has to be assessed for each software supplier. The NERC entity might decide that Microsoft™ will never stop patching products without providing plenty of notice to customers; therefore they have low likelihood for this risk. In the case of number 3, if the organization doesn’t develop any software on its own, the likelihood of this risk being realized is (extremely) low.

b)     If the likelihood will always be low – for all suppliers – the entity would be justified in simply stating that this is the case and moving on to the next risk. In my methodology, a risk that always has low likelihood is one that has already been mitigated or else simply doesn’t apply to your environment (e.g. risks due to vendor remote access, if your organization doesn’t allow it under any circumstances).

c)      If the likelihood could ever be higher than low, the entity does need to take steps to mitigate the risk – but in the case of supplier-dependent risks like numbers 1 and 2 above, no mitigation is needed for a supplier that already presents a low likelihood, as in the case of Microsoft for risk number 1. A sketch of how this triage might look in a simple risk register follows below.
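Here’s that minimal sketch. The four risks are the ones above; the likelihood ratings are invented examples, and a “low” rating stands for “low for all suppliers”, the trigger for step b):

# risk_register.py - a sketch of the a/b/c triage above. The likelihood
# ratings are invented; a real assessment is a per-supplier judgment.
risks = [
    {"risk": "patch released but not applied", "likelihood": "high"},
    {"risk": "no patch coming from the supplier", "likelihood": "medium"},
    {"risk": "vulnerable component, no patch available", "likelihood": "medium"},
    {"risk": "vulnerability in custom software", "likelihood": "low"},
]

for r in risks:
    if r["likelihood"] == "low":
        # step b): document that likelihood is always low and move on
        r["disposition"] = "documented as low likelihood; no mitigation needed"
    else:
        # step c): mitigation required, assessed per supplier where applicable
        r["disposition"] = "identify and implement per-supplier mitigations"
    print(f"{r['risk']}: {r['disposition']}")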

Making the prescriptive CIP requirements risk-based was the second part of the SDT’s solution for allowing virtual devices to be covered by CIP. But, again unintentionally, the SDT also pointed to what’s needed for cloud-based systems to be covered. If all the CIP requirements were risk-based, then cloud providers would just have to show that they have good programs for supply chain risk management, software vulnerability risk management, user access risk management, etc. There would be no need to document actions performed on individual devices or with respect to individual employees to show they’re compliant with CIP. Certifications like FedRAMP might in some cases be accepted as sufficient evidence of a program.

To summarize this section, in 2018 the CIP Modifications Standards Drafting Team came up (at least in principle) with the entire solution required to allow BES Cyber Systems to be put in the cloud. If they had followed through with these changes and they had been approved by the NERC ballot body, the NERC Board and FERC, NERC entities would soon be able to do just that.

What happened?

However, the SDT’s 2018 proposal was abandoned when a lot of NERC entities said they didn’t want to have to implement the big changes that would be required in their current compliance programs. I was quite disappointed when I heard this had happened.

The SDT has now moved on to a much more conventional approach. Instead of getting rid of the hardware device concept altogether, they are expanding the meaning of “device” to include virtual devices. This may address virtualization, but it does nothing for BCS in the cloud. Cloud providers can no more comply with prescriptive requirements for virtual devices than they can comply with prescriptive requirements for physical devices – there is no way they can demonstrate compliance on any kind of “device”, since their business model is built on being able to continually move data and software code between devices and data centers. We’re back at square one, as far as BCS in the cloud goes.

So how do we move forward?

Of course, the CIP Modifications SDT was never given the mandate to address BCS in the cloud. This isn’t their responsibility, and it never will be. To finally address this problem, there needs to be a new Standards Authorization Request (SAR) and a new SDT to address the changes that are required to allow BCS to be put in the cloud. But I don’t want to see a SAR that just requires a new SDT to “solve the problem of BCS in the cloud” or something like that. If there isn’t buy-in from the NERC community up front about the right approach to take, the new SDT will just run into the same problems the CIP Mods SDT did.

I think there needs to be a series of national NERC meetings (virtual, of course) where there’s a discussion of a) what needs to be changed for BCS to be permitted in the cloud, and b) how to make those changes. Only when there’s general agreement on the way forward (perhaps confirmed by a ballot) should a SAR be drafted and a new SDT constituted.

Hopefully, the national meetings will lead those who are reluctant to change their existing CIP programs to understand the cost of this reluctance: If they continue to insist they can’t change, they (or any other NERC entity with Medium or High impact BCS) will never be able to use cloud-based BES Cyber Systems. Period.

Of course, this will require a much more fundamental revision of the CIP standards than even CIP version 5 was. Doing what I’m suggesting will require widespread support among NERC entities, and I see no sign of that now. Does that mean BCS will never be allowed in the cloud?

I actually believe it will happen, although I won’t say when because I don’t know. I think the advantages the cloud can provide for NERC entities are so great that they will ultimately outweigh the general resistance to change.

But I’ve been wrong before…

 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you wondering if you’ve forgotten something for the 10/1 deadline for CIP-013 compliance? This post describes three important tasks you need to make sure you address. I’ll be glad to discuss this with you as well – just email me and we’ll set up a time to talk.

Sunday, September 13, 2020

How deep do you need to go for CIP-013?

 If you're looking for my pandemic posts, go here.

Last Thursday, Fortress Information Security sponsored an excellent two-hour webinar titled “Exploring Opportunities to Secure the Grid’s Supply Chain”. You can download the slides here, but I really recommend you listen to the recording, since there was a lot of great discussion that went far beyond what was in the slides. You can access the recording here.

The webinar featured a very well-chosen group of speakers who I wouldn’t normally expect to present in the same webinar – yet it worked very well. They included speakers from Fortress (Tobias Whitney, formerly of NERC of course), Lew Folkerth of RF (sporting an enhanced beard, which looked great. It seems that sheltering in place has served Lew well), Jeff Sweet of AEP, Val Agnew from NATF, and three speakers from Israel. Two of these were Yosi Scheck of Israel Electric Corporation (which serves the entire country) and Aviram Atzaba of the Israel National Cyber Directorate.

All of the presentations were good, although the presentation by Aviram Atzaba really stood out for me because it addressed supply chain cyber risks explicitly, and discussed (at a high level) four supply chain cyber attacks that Israel Electric has experienced (and it sounds like there have been others as well). I would like to do a post soon on some of the interesting things I learned in the webinar, but first I need to listen to the recording, since I couldn’t write things down fast enough.

However, I do want to discuss a question that both Tobias and Lew addressed, although Lew deliberately went beyond the question to address a larger issue. Since this question is quite relevant to CIP-013, and since somebody recently told me that the compliance date for CIP-013 is coming up 😊 (is it November 3? No, that’s another important date), I want to discuss it now.

The question (and I didn’t write the words down, so this is just a recap of what I believe was the thought behind it) was “When a large supplier uses a dealer channel, they will usually not answer a questionnaire, although the dealer will. Does this mean we just need to assess the dealer, not the supplier itself?”

Tobias read the question and answered it by saying something to the effect of “You need to assess the risks that can have an impact on the Bulk Electric System, whether they originate with the supplier or the dealer.” Tobias’ answer was exactly correct, IMHO. However, since he didn’t have time to elaborate on that statement, I will do that for him here:

Let’s say we’re talking about Cisco™. They don’t sell directly to any NERC entity that I know of; they almost always use an intermediary, who at most just inventories Cisco products and reships those to end users, then invoices the end user (sometimes they don’t even touch the box – they have it drop shipped directly from Cisco to the end user). I call the dealer a vendor, as discussed in this post.

What risks apply to the supplier vs. the vendor? When the supplier and the vendor are the same organization (which of course is the case with all but the largest suppliers, like Cisco and Microsoft™), all of the risks apply to that organization. But when they are different, as in this case, there are very few risks that apply to just the vendor – in fact, I have only identified two that I consider significant, one being that a vendor will ship you a counterfeit product, as has happened in at least two cases with Cisco products (including one very recently).

However, while a counterfeit product would be annoying, it wouldn’t in itself pose a risk to the BES (although it would definitely pose a financial and operational risk to the utility that bought it). What would pose a BES risk would be a counterfeit that contained malware (which has not been the case with the Cisco knock-offs so far). So the risk is that a vendor will ship you a counterfeit product containing malware, which could damage the BES if installed.
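By the way, the standard mitigation for this risk – and the software half of CIP-013-1 R1.2.5, verifying software integrity and authenticity – is checking what the vendor shipped against what the supplier says they released. Here’s a minimal sketch of that check in Python; the file name and the expected hash are placeholders you’d replace with the supplier’s published values, taken from their web site or a signed advisory, never from the (possibly counterfeit) media itself:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 of a file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected value from the supplier's own publication channel (placeholder here).
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"

if sha256_of("firmware_image.bin") != EXPECTED:   # hypothetical file name
    raise SystemExit("Hash mismatch - do not install; contact the supplier")
print("Hash verified against the supplier's published value")
```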

On the other hand, there are a huge number of risks that apply to the product supplier. For example, there are the risks included in CIP-013-1 R1.2.1, R1.2.2, R1.2.4, and R1.2.5, as well as a host of other risks, such as the risks behind almost all of the 60 NATF criteria (the few criteria having to do with secure product shipment are the only ones that could possibly apply to a pure product vendor – i.e. one that doesn’t also provide services for BES Cyber Systems). If the product vendor also provides services, they would be both a service vendor and a product vendor in my way of seeing things, meaning the risks that apply to service vendors, as well as those that apply to product vendors, would apply to the same organization.

In other words, if you just send a questionnaire to the product vendor, you will only be assessing a small portion of the total risks that apply to the supplier and vendor. But if the supplier won’t answer your questionnaire, how do you assess them?

I’m glad you asked. There are a number of ways, including looking at various publicly available sources of information (Fortress offers a service that does this, which was mentioned by Jeff Sweet in his presentation) and reading documents on the supplier’s web site (in particular, Cisco has at least a couple of documents discussing their secure development and vulnerability management processes). It’s true that neither of these methods will yield the same information that a questionnaire would, but the point is that they’re better than not doing anything to assess the supplier.

After Tobias gave his answer (which was a little shorter than the one I just gave), he turned the question over to Lew. Lew said he wanted to table the question until the end of the webcast. When he came back to it at the end, he expanded it into the more general question that’s been asked by many people about CIP-013: We all know that CIP-013 applies to your organization’s immediate suppliers, but what about those suppliers’ suppliers? And even the suppliers’ suppliers’ suppliers? Where do you cut it off?

And like Tobias, Lew’s answer was very simple (although I’m just paraphrasing it): Your CIP-013 R1 supply chain cybersecurity risk management plan needs to describe how your organization will mitigate supply chain cybersecurity risks to the BES. Let me elaborate on what Lew said, since after all this is my blog and I can say what I want:

I think it’s perfectly acceptable to say that your R1 plan must describe how you will mitigate only important supply chain cybersecurity risks. This means you don’t need to mitigate all risks down to, say, the 99th level. In practice, if a risk applies to a supplier that’s four levels down from your immediate supplier, it’s fairly unlikely that the risk, if realized, will have an impact on the BES.

Let’s say that fourth-level supplier didn’t have strong access controls on its remote access system, leaving it open to compromise by the Russians – and DHS said in 2018 that the Russians penetrated over 200 vendors to the power industry, primarily through their remote access systems. While it might be possible that the Russians penetrated the supplier’s software development network and implanted a backdoor in a software module that was incorporated into a product that was itself incorporated into another supplier’s product that was itself incorporated into the software product you bought, it’s quite unlikely that the Russians will now use that backdoor to penetrate your BES Cyber System and attack the BES. Given this low likelihood, you would certainly be justified in ignoring this risk.
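If you want to put rough numbers behind that judgment, one way – purely illustrative, nothing in CIP-013 requires it – is to discount a risk’s likelihood at each supplier tier and drop anything that falls below a threshold your plan defines. The discount factor, threshold, and example likelihood/impact values below are all my assumptions:

```python
TIER_DISCOUNT = 0.3   # assumed: each tier away from you cuts likelihood to 30%
THRESHOLD = 0.05      # assumed: scores below this are documented as accepted

def risk_score(likelihood, impact, tier):
    """Score = likelihood x impact, discounted once per tier beyond tier 1."""
    return likelihood * (TIER_DISCOUNT ** (tier - 1)) * impact

# Example: weak remote-access controls (likelihood 0.4, BES impact 0.9)
for tier in (1, 2, 3, 4):
    score = risk_score(0.4, 0.9, tier)
    verdict = "mitigate" if score >= THRESHOLD else "accept (low likelihood)"
    print(f"tier {tier}: score {score:.3f} -> {verdict}")
```

With these made-up numbers, the risk gets mitigated at tiers one and two but documented as accepted at tiers three and four – which matches the intuition above.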

But what if your software supplier included a component from a second-level supplier that included the TCP/IP library from Treck that is subject to the Ripple20 vulnerabilities? That might be something that could affect your BCS and therefore the BES. So you probably do need to think about risks due to second-level suppliers, although first you have to know who they are, which is why getting a software bill of materials is a good first step.
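Once you have an SBOM, even a short script can flag a known-vulnerable component for you. Here’s a minimal sketch assuming a CycloneDX-style JSON SBOM; the file name, component names, and version cutoffs are illustrative placeholders, not authoritative data:

```python
import json

# Known-affected components and the first fixed version, e.g. from an
# ICS-CERT advisory. Both entries below are illustrative placeholders.
FIXED_IN = {
    "codemeter runtime": "7.10",   # per the Wibu-Systems advisory
    "treck tcp/ip": "6.0.1.67",    # Ripple20 (illustrative cutoff)
}

def vtuple(v):
    """'7.10' -> (7, 10): numeric tuple so 7.9 < 7.10 compares correctly."""
    return tuple(int(p) if p.isdigit() else 0 for p in v.split("."))

def flag_components(sbom_path):
    """Yield (name, version) for SBOM components older than the fixed version."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    for comp in sbom.get("components", []):   # CycloneDX top-level key
        name = comp.get("name", "").lower()
        version = comp.get("version", "")
        fixed = FIXED_IN.get(name)
        if fixed and version and vtuple(version) < vtuple(fixed):
            yield name, version

if __name__ == "__main__":
    for name, version in flag_components("sbom.json"):   # hypothetical file
        print(f"Follow up with the supplier: {name} {version} is below the fix")
```

The point isn’t the script – it’s that with an SBOM in hand, this check takes minutes instead of an email exchange with every supplier.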

However, your first line of defense against risks in the second, third, fourth, fifth, etc. levels of software suppliers is your supplier itself – i.e. the organization that wrote the software you’re using. What you need to be concerned about is their supply chain cybersecurity risk management plan. For example, does your supplier require their component suppliers to notify them of unpatched vulnerabilities in their software, and hopefully quickly develop patches for them? And if the component supplier can’t immediately patch a vulnerability in their product, will they tell your supplier about measures your supplier can apply to mitigate this vulnerability?

So my answer to the question Lew posed (and answered) is that a) you need to address risks that you believe have something more than a low likelihood of being realized, whether they pertain to your supplier or one of the supplier’s supplier’s supplier’s…suppliers…Yea verily, unto the hundredth generation. And b) your best defense against risks in those further generations is your supplier’s own policies regarding their suppliers.

 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you wondering if you’ve forgotten something for the 10/1 deadline for CIP-013 compliance? This post describes three important tasks you need to make sure you address. I’ll be glad to discuss this with you as well – just email me and we’ll set up a time to talk.

Tuesday, September 8, 2020

Three weeks left!



If you're looking for my pandemic posts, go here.

Just about three months ago, I wrote this post about what NERC entities still need to do to get ready for the October 1 compliance date for CIP-013-1. Now we’re almost exactly three weeks away from that date. I’m going to assume you’ve done most of what I pointed out in the July post (although if you haven’t, there’s still time to do it all. I won’t outline the reasons for my statement here – email me if you want to discuss this).

However, I’m also betting that you’re not 100% ready for the 10/1 date. You shouldn’t panic – you can definitely finish whatever you haven’t done. Here are three tasks that you might not have finished (or even started) yet, which you definitely need to do in the next three weeks:

Finalize your R1 plan
The most important task is finalizing your CIP-013-1 R1 supply chain cybersecurity risk management plan. While you will still be able to improve it any time after 10/1, you need to have a fairly complete plan now. Here are the topics that should be addressed in your plan:

·        How you identify supply chain cyber security risks to the BES. This includes risks arising from procurement of hardware or software components of BES Cyber Systems, procurement of services for BCS, installation of BCS components, use of services for BCS, and transitions between vendors.
·        How you will assess risks that arise from vendors or suppliers of BCS components or services – using questionnaires and/or other means.
·        How you will assess risks that arise from actions your own entity takes.
·        How you will mitigate those risks, including the following. Note that none of these has to be written at a low level – a high-level conceptual description is enough, as long as it’s comprehensive:

1.      Mitigations of Risks that apply to your entity.
2.      Mitigations that are applied through obtaining a Supplier’s or Vendor’s assent by adding or changing terms in their contract.
3.      Mitigations that are applied through obtaining a Supplier’s or Vendor’s assent through other means than contract language.
4.      Mitigations that are applied through Supplier/Vendor follow-up.
5.      Mitigations that are applied through Requests for Proposal.
6.      Mitigations that are identified during Procurement Risk Assessments and applied during Procurement of Products and Services, Installation of Products and Use of Services.
7.      Mitigations for the 8 Risks identified in CIP-013-1 R1.2.1 – R1.2.6.
8.      Mitigations for Risks arising from open source software.
9.      Mitigations applied to Emergency Procurements.
10.   Mitigations for Risks arising from vulnerabilities due to third party or open source components in a Supplier’s software or firmware Products.
11.   Mitigations for Risks arising from “Transitions between vendors”.
12.   Mitigations for Risks due to repurposed Products.
13.   Mitigations for Risks due to transactions with other utilities.
14.   Mitigations due to compliance with the NERC CIP-003-6 through CIP-011-3 Reliability Standards.

Remember, you can change your plan whenever you want after 10/1 (and the plan should say how it can be changed, presumably with CIP Senior Manager approval), so you don’t need to have it perfect on that day. But you do have to have something that addresses most of the areas shown above; you can’t, for example, have something that ignores R1 altogether.
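One low-tech way to keep yourself honest about coverage of the fourteen mitigation topics above is to track them as a simple checklist. Here’s a trivial sketch – the topic names paraphrase the list, and the statuses are made up for illustration:

```python
# Coverage tracker for the R1 plan topics listed above (statuses are examples).
PLAN_TOPICS = {
    "entity-level risks": True,
    "contract-language mitigations": True,
    "non-contract vendor assent": True,
    "supplier/vendor follow-up": False,        # not yet drafted
    "RFP mitigations": True,
    "procurement risk assessments": True,
    "CIP-013-1 R1.2.1 - R1.2.6 risks": True,
    "open source software risks": False,       # not yet drafted
    "emergency procurements": True,
    "third-party / open source components": False,
    "transitions between vendors": True,
    "repurposed products": True,
    "transactions with other utilities": True,
    "mitigations via CIP-003 - CIP-011 compliance": True,
}

missing = [topic for topic, covered in PLAN_TOPICS.items() if not covered]
print(f"{len(PLAN_TOPICS) - len(missing)}/{len(PLAN_TOPICS)} topics covered")
for topic in missing:
    print(f"still to draft: {topic}")
```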

P’s and P’s
As I have pointed out a number of times, even though you have close-to-complete flexibility in developing your R1 plan, when you get to R2 that plan becomes a straitjacket. You have to have a set of policies and procedures that implements your R1 plan – all of it; moreover, you need to determine how you will provide evidence that you are actually following those policies and procedures.

And you need all of these p’s and p’s in place by October 1. Obviously that’s for compliance purposes, but there’s another reason as well: if, in designing your p’s and p’s, you find a particular part of the plan that you’re not sure you can implement properly, take it out. You can always add it back later, if you realize it won’t be as hard to do as you thought. But if you leave that part in the plan without the p’s and p’s to implement it, you’re asking for a PNC for R2.

Along with designing the p’s and p’s, you need to decide how you will provide evidence of compliance with each policy and each procedure. And here’s the good news: “evidence” in CIP-013 doesn’t mean evidence that you have done the right thing in every particular instance and for every particular system in scope, as is the case in the prescriptive requirements that are part of CIP-003 through CIP-011. If your policy is that you will do X, you need to be able to show that you implemented the policy and provide some general evidence that it was followed – e.g. emails that show it was followed in a few specific instances.

The most important evidence
However, the most important evidence is what you will compile when you carry out item 6 in the list above: the procurement risk assessment. The NERC Evidence Request Tool v4.5 makes clear that CIP-013-1 evidence requests will be based solely on “procurements”. In Level 1, you will have to provide a list of every procurement during the audit period. In Level 2, the auditors will select a sample of these procurements and ask you to show how you carried out R1.1 and R1.2.

I think it’s very important not to leave it to whoever is in charge at your next audit to try to dig up all the evidence required to show this. You need to design a Procurement Risk Assessment (PRA) process – perhaps using spreadsheets – in which just carrying out the process (which you will do with every procurement) produces the evidence. By the way, since “procurement” and “vendor” aren’t defined by NERC, you will need to define these terms yourself, along with others.

Or almost. Your PRA will need to provide lists of mitigations to be carried out during procurement of the product or service, installation of the product and/or use of the service. That evidence will need to be gathered when those activities are finished or ongoing, which of course is after the PRA itself is finished.
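To make this concrete, here’s a minimal sketch of what a PRA record might look like if you kept it as a CSV log – one row per procurement, with a column that gets filled in later for the post-PRA evidence. The field names, and of course the example values, are my assumptions, not anything NERC prescribes:

```python
import csv
import os
from dataclasses import dataclass, asdict

@dataclass
class PRARecord:
    procurement_id: str          # per your own definition of "procurement"
    supplier: str
    vendor: str                  # may equal supplier for smaller companies
    product_or_service: str
    risks_assessed: str          # e.g. "R1.2.1;R1.2.4;counterfeit-with-malware"
    mitigations: str             # e.g. "contract clause 7; verify hash at install"
    post_pra_evidence: str = ""  # filled in after installation / use begins

def append_record(path, record):
    """Append one PRA record to a CSV that doubles as the Level 1 procurement list."""
    row = asdict(record)
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if new_file:
            writer.writeheader()
        writer.writerow(row)

append_record("pra_log.csv", PRARecord(       # hypothetical file and values
    procurement_id="2020-041",
    supplier="ExampleSoft",
    vendor="ExampleSoft Reseller LLC",
    product_or_service="EMS historian upgrade",
    risks_assessed="R1.2.1;R1.2.2;R1.2.5",
    mitigations="vulnerability notification clause; verify software hash",
))
```

Because one row gets written every time the process runs, the log itself becomes the Level 1 list of procurements, and each row points the auditors at your R1.1/R1.2 work for the Level 2 sample.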


This might seem like something that’s very complicated, but it really isn’t. If you’d like to discuss this, drop me an email and we can set up a time to do so.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.