Sunday, February 23, 2020

Good for Schweitzer!

An important part of supply chain security risk management is assessing vendors to identify “holes” in their security posture that need to be addressed. That’s what this post is about, but first I want to discuss how assessment – and especially the use of questionnaires – fits into the overall supply chain security methodology that I and my CIP-013 clients have developed over the past year.

For CIP-013 compliance and supply chain security in general, NERC entities need to mitigate two types of procurement risks: those that come through Vendors and Suppliers (almost always risks that are due to inadequate security policies or procedures on the Vendor/Supplier’s part) and those that come through the entity’s own actions or lack thereof (these are also due to inadequate policies or procedures, but this time those of the NERC entity itself).

This post is about the first type of procurement risk: Vendor and Supplier risks (note that I – as well as my clients – think Vendors and Suppliers should be treated differently, since the risks that apply are very different in the two cases. I’ll use ‘Suppliers’ for most of the rest of this post, although I’ll always mean both). Vendor/Supplier risks are the ones that almost everyone thinks about when they think about supply chain risks, and indeed, probably 70-80% of the most important procurement risks do come through Suppliers or Vendors.

I call these risks Vulnerabilities, although I capitalize the term to distinguish it from the normal cybersecurity term vulnerability - which means a flaw in software or firmware that could lead to compromise of a system (a BCS in this case). In my sense, a Vulnerability means any situation that enables a Threat to be realized – and I define a Threat as something that could damage the BES. For example, if a software company doesn’t perform background checks on the developers that it hires, this Vulnerability could enable a terrorist or other person with malicious intent to plant malware or a backdoor in software or firmware in a BCS.

A Vulnerability can often be stated in this way: If the Supplier doesn’t do X, then it’s possible that the NERC entity (that would be you, Dear Reader) will procure and install one or more hardware or software components of BES Cyber Systems that have a vulnerability (lower-case, of course), leading to some impact on the BES. To continue with the above example, if one of your BCS Suppliers doesn’t perform background checks on their developers, it’s possible that your organization will procure and install a software or hardware component of a BCS that contains a vulnerability or backdoor, that could be exploited by a bad guy (or woman, of course!) to damage the BES.
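The "If the Supplier doesn't do X" pattern maps naturally onto a small data structure. Here's a minimal sketch of that idea in Python; the class and field names are mine, not taken from the author's workbooks:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """Something that, if realized, could damage the BES."""
    description: str

@dataclass
class Vulnerability:
    """A situation that enables a Threat to be realized."""
    supplier_gap: str        # the "X" the Supplier fails to do
    enabled_threat: Threat   # what that gap makes possible

def risk_statement(v: Vulnerability) -> str:
    """Render the Vulnerability in the 'If the Supplier doesn't do X...' form."""
    return (f"If the Supplier doesn't address '{v.supplier_gap}', the entity may "
            f"procure and install a BCS component that enables: "
            f"{v.enabled_threat.description}")

# The background-check example from the text:
no_background_checks = Vulnerability(
    supplier_gap="performing background checks on developers",
    enabled_threat=Threat("malware or a backdoor planted in BCS software/firmware"),
)
print(risk_statement(no_background_checks))
```

The point of structuring it this way is that every Vulnerability carries its enabling condition and the Threat it enables as a pair, which is exactly what a risk register (or spreadsheet row) needs.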

The impact on the BES doesn’t need to be specified, and indeed there are close to an infinite number of impact types – what matters is simply whether or not there could be any BES impact at all. This is of course how the BES Cyber Asset definition works: If the compromise or loss of a Cyber Asset can impact the BES in any way, it’s a BCA. There’s no such thing as big or small BES impact.

Why does this matter? It’s no exaggeration to say that your most important steps to improving your supply chain cybersecurity (and achieving CIP-013 compliance, of course) are to a) “identify and assess” the most important Vulnerabilities that apply to Suppliers; b) determine, for each Supplier of BCS hardware or software components, which Vulnerabilities it has already mitigated and which it hasn’t; c) endeavor to get the Supplier to agree to mitigate whatever Vulnerabilities it hasn’t yet mitigated; and d) in cases where the Supplier can’t or won’t mitigate a Vulnerability, take steps on your own to mitigate it (although the steps you can take are usually not as effective as those the Supplier could take. But they’re better than nothing).
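Steps a) through d) can be sketched as a simple triage loop. This is an illustrative Python sketch of the decision flow only; the function and status names are assumptions of mine, not part of the author's methodology:

```python
MITIGATED, UNMITIGATED = "mitigated", "unmitigated"

def plan_mitigations(vulnerabilities, supplier_status):
    """Triage identified Vulnerabilities for one Supplier.

    vulnerabilities: list of Vulnerability names (step a: 'identify and assess')
    supplier_status: maps each name -> MITIGATED / UNMITIGATED (step b: assessment)
    Returns which Vulnerabilities need no action and which to raise with the
    Supplier (step c); any the Supplier refuses fall to the entity (step d).
    """
    plan = {"already_done": [], "ask_supplier": []}
    for v in vulnerabilities:
        if supplier_status.get(v) == MITIGATED:
            plan["already_done"].append(v)
        else:
            plan["ask_supplier"].append(v)
    return plan

status = {"background checks": MITIGATED, "secure development lifecycle": UNMITIGATED}
plan = plan_mitigations(["background checks", "secure development lifecycle"], status)
```

Here `plan["ask_supplier"]` would contain only "secure development lifecycle" — the single Vulnerability the Supplier hasn't yet mitigated and the one worth spending negotiation effort on.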

Of course, b) is the assessment step. There are three main ways to assess a Supplier: audits, questionnaires and certifications. Audits are definitely the most expensive of these options. While there can certainly be times when an audit is necessary (e.g. when a Supplier just can’t be trusted to answer a questionnaire truthfully. But really…if they’re so untrustworthy, why are they still your Supplier?), it shouldn’t be the default option.

There are also definite reasons why you might want to use certifications to determine to what degree the Supplier has mitigated particular risks. The biggest is when a large company like Cisco or Microsoft clearly won’t respond to questionnaires (although it wouldn’t be a bad idea to try. Work through your VAR for this, since they’re most likely to know who to go to at the Supplier – and they’re definitely the most motivated party to help you find that person or group). Then you should look at the Supplier’s certifications, and try to find answers there to the security questions you would otherwise ask in a questionnaire.

But certifications have their limitations, too. The main limitation is that IT-focused certifications like ISO 27001, or SOC 2 audit reports, aren’t likely to provide answers to questions specific to OT. NERC has said they’re working on a certification specific to OT risks to the BES, so that will certainly be of interest when it’s finalized. However, that’s unlikely to occur anytime soon.

So I think the best way to determine to what degree a Vendor or Supplier has mitigated each of the threats or vulnerabilities that you’ve identified as important is to send them a questionnaire. However, I’ve found people raise two objections when I say this:

  1. The Vendor may lie in their answers; or
  2. The Vendor won’t respond to the questionnaire at all.
For the first objection, it’s definitely true that Suppliers don’t always tell the truth, but Supplier relationships in this industry – especially in the OT environment – last for decades, if not centuries. Given that, it’s hard to believe a Supplier would put all that in jeopardy just to score a short-term point or two by misrepresenting the truth in their questionnaire answers. And if you think you can’t trust the Supplier’s answers on a particular questionnaire, I’m sure your contract reserves the right for you to audit them. Just about all such contracts have that clause (but again: if you’ve decided you really can’t trust the Supplier, why are you still buying their products at all?).

The second objection is a little harder to counter. I know that OT Suppliers definitely don’t like to answer questionnaires, especially if they think they’ll be forced to answer a large number of questions that are IT-focused (and any “information security” framework like ISO 27001/2 is inherently IT-focused), and have little if anything to do with risks to OT. Of course, it is OT risks, and especially risks to the Bulk Electric System, that are the concern of NERC entities subject to CIP-013.

One major OT software Supplier to the electric power industry put out a short white paper for their customers at the end of last year. Its purpose was to address two problems they’re having, related to CIP-013:

1. Customers are sending them questionnaires that are both lengthy and mostly irrelevant to OT risks – asking how they will protect the customers’ intellectual property, personal information on employees, trade secrets, etc. While these are great questions to ask a Supplier of IT products, they’re mostly irrelevant for Suppliers of OT products. So this Supplier is being asked to spend a lot of time answering a lot of questions, when their answers won’t help the customer at all in understanding their OT supply chain security risks, and specifically risks applicable to CIP-013.
2. Customers are asking them to sign addenda to contracts that are both lengthy and contain terms that are mostly irrelevant to OT risks – ordering them to take steps to protect the customers’ intellectual property, personal information on employees, trade secrets, etc. As with questionnaires, while these might be great terms to require of a Supplier of IT products, they’re mostly irrelevant for Suppliers of OT products. This is because your Suppliers should store or transmit little if any information on the configuration of your BCS (or on how to locate them logically or physically, which is the definition of BCSI), and that is the only information whose misuse might lead to a BES impact. So this Supplier’s attorneys are being asked to spend a lot of time negotiating contract terms that in fact won’t help the customer at all in mitigating their OT supply chain security risks, and specifically risks applicable to CIP-013. Plus the customers (the electric utilities) are putting a big burden on their own lawyers, since you can be sure the Supplier’s lawyers will want to negotiate or remove almost every security term you ask them to include in a contract. This probably means hours of negotiation over every single term.

In this Supplier’s well-written letter, they didn’t say they weren’t going to respond to questionnaires, but they made it clear that a) they are going to be reluctant to respond to them, especially if they’re lengthy (they specifically mention 100 questions, which I agree is too many); and b) they now have policies and procedures in place to comply with ISO 27001/2, and they’ll be audited and certified on those in the near future. In other words, “Before you send us a security questionnaire, please look through ISO 27001 Appendix A (and perhaps their audit report, although they didn’t mention that at all – so I don’t know whether or not they’ll make it available to customers) to see if your questions are answered there. And by the way, don’t send us any long questionnaire at all.”

This might be OK if a) the ISO 27001 certification will actually answer all, or even most, of the questions that NERC entities will need to answer for CIP-013 compliance purposes; b) this Supplier will actually help their customers find where in ISO 27001/2 their particular questions are answered (since Appendix A is a long document in itself, and ISO 27002 – which provides the real meat on Appendix A’s bones – is much longer, devoting an entire page to each control, rather than a sentence or two as in Appendix A); and c) the Supplier will provide the actual audit report, since it’s very possible that the Supplier might not be in compliance with every control, even if they have received the certification.

Of course, both b) and c) are up to the Supplier. However, I can legitimately take a stab at answering a). This is because I am in the process of developing a questionnaire with my clients, based on the set of 42 Supplier/Vendor Vulnerabilities that we have jointly identified, including those that are the basis for the required items listed in R1.2. Since this post is about questionnaires, not certifications (and since I hope to write a post on certifications in the near future), I can’t provide detail on this now. But I will say that almost none of the Vulnerabilities in my list is addressed directly in 27001 (and none of the items in R1.2 are fully addressed, either). So I feel quite safe in saying that, at least for the list of questions that my clients are likely to have, there are few that are directly answered by ISO 27001/2 (although I will reach out to this Supplier to see if they feel differently about this or anything else in this post. I’ll put their response – unedited! – in a future post, if they’ll allow that).

Item c) is actually very important. If this Supplier plans to just publish their overall score for the audit, that isn’t very helpful at all. In fact, any certification or service that just provides a single security score for a Supplier is close to useless for assessing OT Suppliers to the electric power industry. Why? Consider this: How often does your organization either 1) choose a completely new vendor on the OT side of the house or 2) conduct a thorough assessment of all of the competitive Suppliers to whoever is your current Supplier for a product or class of products?

One of my clients had a succinct answer to this: Once every five years at most. Even if it’s more frequent for you, my point is this: An overall security score for a Supplier is designed to aid a decision on whether or not you will buy from a particular Supplier, or perhaps to serve as an input to choosing among multiple suppliers of similar or identical products. It provides little or no help for the task that you perform much more frequently: procuring a product from an existing Supplier, from which you may or may not have previously procured the identical product.

And this is exactly the scenario that’s addressed by the CIP-013 section of the NERC Evidence Request Spreadsheet v. 3.0 (as well as the just-released v. 4.0, which seems completely unchanged, at first glance). For each procurement during the audit period, the NERC entity needs to comply with the wording of R1.1 and R1.2. In other words, you need to “identify and assess” the procurement and installation risks that apply to each procurement, as well as explicitly assess each of the risks addressed in R1.2. You can’t do this with an overall risk score for the Supplier; you need some sort of score for each risk (Vulnerability) that you identify in R1.1, as well as for the six items in R1.2 (and there are really eight risks in R1.2, since R1.2.5 and R1.2.6 address two risks each). This score will let you decide whether each particular risk (each Vulnerability, in my terminology) has already been mitigated, or whether you need to take additional steps during this procurement to mitigate it. This is the beating heart of CIP-013 compliance, since this is the entirety of the evidence that will be required up front by the auditors.
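The per-risk (rather than overall) scoring idea can be made concrete with a short sketch. This is an illustrative Python example of my own; the threshold, the 1–5 scale, and the risk labels are assumptions, not anything prescribed by CIP-013 or the Evidence Request Spreadsheet:

```python
def assess_procurement(vulnerability_scores, threshold=3):
    """Decide, per risk, whether this procurement needs additional mitigation.

    vulnerability_scores: per-risk scores, say 1 (unmitigated) .. 5 (fully
    mitigated by the Supplier). A single averaged score would hide exactly
    the information the auditor asks for: the status of EACH risk.
    """
    return {risk: ("no further action" if score >= threshold
                   else "mitigate during this procurement")
            for risk, score in vulnerability_scores.items()}

scores = {
    "R1.2.1 vendor incident notification": 5,
    "R1.2.4 software integrity verification": 2,
}
decisions = assess_procurement(scores)
```

With these example scores, the single overall average (3.5) would look comfortably "green", while the per-risk view correctly flags software integrity verification as needing mitigation in this procurement — which is the whole argument against one-number Supplier scores.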

The upshot of this is that, since this Supplier doesn’t want to commit at the moment to answering all questionnaires (of reasonable length) from customers, and since I don’t yet know what they will say about questions b) and c) immediately above, I have to give them an Incomplete grade. And I’d like to point out that, whatever answers they give to my questions, they should make sure those answers will satisfy what is required by the Evidence Request Spreadsheet.

Note on Sunday evening: I put up this post earlier in the day and sent the link to a security person I know at the Supplier in question. He got back to me fairly quickly, and we're now engaged in an email dialogue that will I hope nail down what they're actually prepared to do, which may very well meet what I think is required. Rather than make an already-long post even longer, I'm going to write about that in another post, hopefully Monday or Tuesday night.

You may be wondering what the title of this post has to do with the content (not that the titles I provide for my posts ever tell you exactly what the content is). Well, I heard a few weeks ago – in response to a question I’d sent – from a different Supplier, one that just about every (or perhaps every) electric utility in the US uses: Schweitzer Engineering Labs.

I asked George Masters of SEL, whose title is Lead Product Engineer, Secure Engineering – and a good friend of mine from the NERC CIPC Supply Chain Working Group – what SEL’s position on customer security questionnaires was. His answer was unequivocal:

SEL uses security processes which combine best practices from a variety of standards and from our own experience inventing, designing, and building products and systems, then tracking their performance across decades of service life. We will always respond to questionnaires from our customers, as we understand that they are working to implement processes to meet CIP requirements as well as their own unique requirements.

We are watching for certifications or similar assurance mechanisms to emerge that can be broadly accepted by our customers as evidence that their requirements are being satisfied. We have taken that approach in other domains. As an example, our Quality Management System is certified to the ISO 9001 standard, and our manufacturing processes comply with the IPC-A-610 Class 3 workmanship standard for products requiring high reliability, such as those used in life-support and aerospace systems.

There you have it. No qualifications, no ifs, ands or buts. This is another example of SEL’s commitment to supply chain security, which I experienced when I was moderator for the panel on that topic at last October’s GridSecCon. NERC had asked SEL to have someone join the panel, and I was expecting they would send George or one of his peers (which would have been fine with me). Instead, they sent Dave Whitehead, who was COO at the time and is now CEO.

Dave gave a good (five-minute, since that’s all that was allowed) presentation, and answered a couple questions very well. Plus he prepared a document for me to publish, when I requested afterwards that panelists do that. In the document, I suggested that the panelists repeat what they said during the panel (including the answers to the questions they received), and also go beyond that if they’d like. Dave prepared an excellent document summarizing their security and quality practices – in fact, on rereading it just now, I realized that this in itself answers many of the questions in my questionnaire, for SEL. You can find it here.

Wednesday, February 5, 2020

What, I can use my best judgment?

Since I consider CIP-013 compliance to be at heart the responsibility of the NERC entity, not mine, my consulting approach for helping a NERC entity come into compliance consists of a series of 1-3 weeklong workshops with cybersecurity, compliance, procurement and (sometimes) legal people. In these workshops, I go through what CIP-013 says (that’s an easy one: It says what’s in the requirements; nothing more, nothing less. Although the Evidence Request Spreadsheet for CIP-013 also provides some very good information on what audits will focus on), as well as the “crowdsourced” methodology and MS Excel™ workbooks that I and my clients have developed over the past year (the methodology and workbooks continue to develop, although the changes nowadays are more in the fine points, not fundamental concepts).

CIP-013 compliance, and certainly my methodology, involves some concepts that don’t come easily. I consider an initial workshop to be very successful if even two or three people get the ideas I’m talking about. And I find the people who are hardest to convince are the Walking Wounded – people who have been beaten into a submissive state by the prescriptive CIP requirements (most but fortunately not all of the existing requirements) for years (usually with PTSD from one or two bad audits). They have given up forever the idea that they can ever make sense of what CIP requires them to do and guide their actions by the criterion of what’s sensible. They pray that someone, somewhere will provide them the magic key that will unlock the true meaning of all the CIP requirements, so that they can implement their compliance programs in complete confidence that the auditors will love what they do.

And barring delivery of that magic key, they pray for an early death.

I bring this up because, in a conversation at one of the meetings during this week’s workshop at a medium-to-large-sized municipal utility, we got into a discussion of what would happen if, during a Procurement Risk Assessment, they decided that mitigating the remaining residual risk from a particular threat would just be too costly – so they decided to accept the risk. Would they have the book thrown at them at their next audit, with the utility bankrupted by the fines?

I said no (and by the way, I’m paraphrasing this discussion, since I don’t remember the exact details). In CIP-013 compliance, if mitigating a particular risk will require an unreasonable amount of cost or effort, and if the risk isn’t one that might likely involve loss of human life or limb if realized, you can certainly accept the risk if that is the reasonable thing to do. At that point, a woman from procurement sitting next to me asked something like “What, you mean we can use our best judgment?”

If you think about it, this is fairly sad. In most cybersecurity compliance regimes – HIPAA, PCI, the NIST frameworks, etc. – the organization is allowed to use their judgment to determine what’s a reasonable action, including accepting risks that can’t be mitigated at a reasonable cost. But people who have worked in organizations where CIP compliance is a big deal (even if they haven’t worked directly in compliance, as was the case with this woman) have just come to accept that they have no choice but to do whatever they think is required, regardless of what’s reasonable.

So for all of those people, I have good news: With CIP-013, you’re free, free! Now all that’s required is to rewrite all of the other CIP standards as risk-based ones like CIP-013, and you’ll be truly free. CIP compliance people of the world, unite! You have nothing to lose but your chains!

Thursday, January 30, 2020

What’s up with BCSI in the cloud?

Two weeks ago, the standards drafting team that’s addressing the issue of BES Cyber System Information (BCSI) in the cloud held a two-hour webinar, of which I listened to the first hour (the link to the recording and slides can be found by going here and dropping down the “2020 Webinars” list). I’ll admit I haven’t been following what this team is doing closely, so I was surprised by some of what they said.

I had planned to write a post on this in a few weeks. But lately, a number of different people – in very different contexts – have asked me about BCSI in the cloud. Since I haven’t directly addressed the topic of BCSI in the cloud, and why it isn’t allowed by the current CIP requirements – even though a lot of NERC entities are doing this anyway – here’s the way I understand it. Note that this isn’t a complete description of what is in the new draft CIP-011-3, just what I think is important to how the new draft addresses the basic problem of BCSI in the cloud.

Also note that this post is about BCSI in the cloud. BCS in the cloud is a completely different story. Hopefully I can make a full frontal assault on that issue soon, but for the moment keep in mind that if you put BCS in the cloud – e.g. outsourced SCADA for a Medium or High asset – you’re violating just about every requirement of CIP-005, -006, -007 and -010, as well as some others. So this remains completely forbidden, and unfortunately will remain so for some time.

  • The big problem with putting BCSI in the cloud is that there are three or four requirement parts in CIP-004 that an entity could never comply with, if they put BCSI in the cloud.
  • Let’s take the example of CIP-004-6 R5.3, “For termination actions, revoke the individual’s access to the designated storage locations for BES Cyber System Information, whether physical or electronic…, by the end of the next calendar day following the effective date of the termination action.” You might think this should be almost a no-brainer for a cloud provider. I doubt there’s any major cloud provider that doesn’t remove access to storage locations for any information (BCSI or otherwise) within minutes of an individual being terminated, not one day.
  • However, there are actually two big problems, which I wrote about in early 2017. The first is the Measures that apply to this requirement part: “An example of evidence may include, but is not limited to, workflow or signoff form verifying access removal to designated physical areas or cyber systems containing BES Cyber System Information associated with the terminations and dated within the next calendar day of the termination action.”
  • This means (in part) that the cloud provider will have to have evidence showing that physical access to systems storing BCSI was removed by the end of the next calendar day for every terminated employee or contractor who had physical access to those systems. So if, say, 10,000 people worked at a cloud data center during a three-year audit period, and the systems that store BCSI aren’t protected by some sort of enclosure with a dedicated card reader, then the cloud provider will have to provide that evidence for every one of those 10,000 people who was terminated during the audit period. It’s fairly safe to say that no cloud provider could or would provide this.
  • But it gets worse. The second problem is that the cloud provider has no way of even knowing – or certainly of documenting – what systems your BCSI was physically stored on during any given day, let alone during a three-year audit period; in fact, they probably aren’t even sure what data centers it was stored in. So even if they could provide the required evidence for particular systems, there’s no way to know which systems to provide it for.
  • You might ask “Then why couldn’t the cloud providers do what NERC entities have to do if they store BCS at a data center that they control? They need to show it’s within a Physical Security Perimeter, which typically means a rack with at least a door and its own card reader. That way, the card reader would provide all the compliance evidence that the entity needs.” Of course, if a cloud provider did that, they would completely break their business model.
  • If it did that, the cloud provider would become an application hosting organization that essentially runs applications for individual customers, saving the latter the muss and fuss of provisioning and running them themselves. This is a valid business model and is now pursued by lots of providers. But it isn’t the cloud – the cloud model only works if the provider is free to move data from system to system and even data center to data center regularly, and even break a particular data set apart as it does that. Asking a cloud provider to do this means you’re really asking them to set up a separate application hosting division (and it’s likely some cloud providers have such a division), and to price its services close to what they would charge for cloud services – meaning they would lose lots of money.
  • Folks, this just ain’t gonna happen. I was told that people at NERC approached two or three major cloud providers early last year and suggested that they could just comply with the NERC CIP requirements and solve not only the problem of BCSI in the cloud, but also the problem of BCS in the cloud. As a carrot, they probably suggested that any cloud provider that did this would be assured of a lot of business from electric utilities. I assume those NERC staff members (and there were probably some asset owners involved) did this as an April Fool’s joke; I certainly hope both they and the cloud providers had a good laugh about it. If all the BCSI and BCS in North America were housed at one cloud provider, this might equal the provider’s revenue from say one medium-sized consumer marketing company. Don’t fool yourself into thinking this would be some sort of prize for the provider; it would be a huge headache, with little if any profit to show for it.
  • Again, the problem isn’t the requirement parts themselves, but the Measures associated with them, which can’t be changed without a new standards drafting team being convened, any more than the requirement parts themselves can be changed without that step. And in fact, this is the primary reason why the current SDT exists.
  • When the question of BCSI in the cloud started being seriously discussed – mainly in the Compliance Enforcement Working Group, a subcommittee of the NERC CIPC – a couple years ago, it seemed to me then (and it seemed to me until the webinar two weeks ago, when I finally learned what the SDT was proposing) that there are only two ways to fix this problem. One is to allow a NERC entity to point to the fact that their BCSI in the cloud is encrypted, as evidence of compliance with requirement parts like CIP-004 R5.3. The problem with using encryption as evidence now is that it doesn’t get the entity off the hook for compliance with the three requirement parts in CIP-004. Even though it technically makes it close to impossible for anyone unauthorized to see the BCSI, the requirement parts are about restricting access in the first place. The fact that the BCSI is encrypted doesn’t change the question whether access to it has been granted or removed.
  • The other way to fix the problem is to allow the cloud provider’s certifications (FedRAMP, SOC 2 or perhaps others) to constitute the evidence. The only question I had – until two weeks ago – was which one of these ways the SDT would choose (and I thought I heard John Hansen, the SDT chairman, say at the December NERC CIPC meeting that they were leaning toward the encryption approach. However, I may have misunderstood what he said).
  • Fortunately, the SDT hasn’t put themselves in my straitjacket of how to address this problem, but has instead taken a much more comprehensive view of the problem of BCSI in the cloud than I was doing. Their draft solution to this problem can be found in the first draft of CIP-011-3, as well as the Technical Rationale for this; you can find these on the SDT’s web page on NERC’s site.
  • The drafting team’s efforts include a) modifying CIP-011 R1, b) creating a new CIP-011 R2 (and moving the current R2 to R3, although it has been revised as well), and c) moving the offending CIP-004 requirement parts into CIP-011, where they are addressed in the context of the new approach.
  • The current CIP-011-2 R1 requires the NERC entity to develop an information protection plan. However, it doesn’t require the entity to take any particular measures for cloud providers - that is, it allows entities to store BCSI any place they want, as long as their plan provides some sort of protection (of course it’s the CIP-004 requirement parts that effectively prohibit BCSI in the cloud, as just discussed).
  • The draft CIP-011-3 R1.1 reads “Process(es) to identify information that meets the definition of BES Cyber System Information and identify applicable BES Cyber System Information storage locations.” The second half – “identify applicable BES Cyber System Information storage locations” – is new. The SDT explains its significance in the Technical Rationale, where it says “This identification should be as follows: 1) Defined as a friendly name of the electronic or physical repository (thus protecting the actual storage location); and 2) The description of the storage location (e.g., physical or electronic, off-premises or on premises).”
  • In other words, a) You don’t have to show that BCSI is stored in a particular location at a cloud provider (which neither you nor the provider could ever do anyway) - you just have to give it a “friendly name” like perhaps the name of your cloud provider; and b) You have to describe the storage location as “physical or electronic, off-premises or on premises”, such as “electronic off-premises” for a cloud provider. So here’s one step toward making BCSI in the cloud “legal” in CIP.
  • The draft CIP-011-3 R1.2 reads “Method(s) to prevent unauthorized access to BES Cyber System Information by eliminating the ability to obtain and use BES Cyber System Information during storage, transit, use, and disposal.” This compares with “Procedure(s) for protecting and securely handling BES Cyber System Information, including storage, transit, and use”, in the current version, CIP-011-2 R1.2. The big change here is that, even if someone can gain access to BCSI stored in the cloud, they won’t be able to use it if it’s encrypted, so encrypting the BCSI (in transit to the cloud provider as well as at rest at the provider) will now be a legitimate method for protecting BCSI in the cloud (although I think the ‘and’ in “obtain and use” should be replaced with ‘or’).
  • So encryption is now allowed as evidence of compliance in CIP-011-3 R1. How about pointing to the cloud provider’s certifications? Is that also going to be allowed? Not to keep you in suspense - yes, it will be. The SDT did this in R1.4.
  • R1.4 is a risk-based requirement part, which requires the entity to conduct their own risk assessment to decide the best way to protect the BCSI they will store in the cloud, based on the risk that this poses. It reads “Process(es) to identify, assess, and mitigate risks in cases where vendors store Responsible Entity’s BES Cyber System Information.
    • 1.4.1 Perform initial risk assessments of vendors that store the Responsible Entity’s BES Cyber System Information; and
    • 1.4.2 At least once every 15 calendar months, perform risk assessments of vendors that store the Responsible Entity’s BES Cyber System Information; and
    • 1.4.3 Document the results of the risk assessments performed according to Parts 1.4.1 and 1.4.2 and the action plan to remediate or mitigate risk(s) identified in the assessment, including the planned date of completing the action plan and the execution status of any remediation or mitigation action items.”
  • In other words, the NERC entity can now perform a risk assessment of the cloud provider (the “vendor”) to determine whether its security is strong enough that any residual risk to BCSI stored with the provider is low. The entity might decide, based on an analysis of the BCSI it plans to store with the provider, as well as the certification(s) the provider holds, that the risk of disclosure of the BCSI is low enough that it can safely be stored there – without also encrypting the BCSI, although of course that wouldn’t be a bad additional measure.
  • However, I’m sure there will be a lot of caveats to this. For one, just saying a cloud provider has e.g. a FedRAMP certification isn’t enough. The NERC entity needs to first identify what the specific risks are, and then determine whether the certification held by the provider adequately addresses those specific risks. And I also think the entity should consider additional cloud risks probably not covered in FedRAMP now, such as those revealed in last year’s Capital One breach (which I discussed in this initial post and this follow-on), and the two even scarier risks I discussed in this post in December.
  • That said, I’m glad that the draft CIP-011-3 R1.4 doesn’t specify exactly which risks the entity must consider, any more than CIP-013-1 R1.1 does. These risks are too fluid to be baked into a requirement. As with the supply chain security risks in CIP-013, it would be great if some central body (NERC, the NERC Regions, NATF, etc.) provided and regularly updated a comprehensive list of cloud provider risks that entities should consider. However, it would also be great if there were world peace and every child got a pony; don’t look for any of these anytime soon.[i]
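The “at least once every 15 calendar months” cadence in Part 1.4.2 lends itself to a simple tracker. Here’s a rough sketch; the function names and the “same day of month, 15 months out, clamped to month end” interpretation are my own assumptions, not anything taken from the draft standard:

```python
from datetime import date

def add_calendar_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` ahead, clamped to month end."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    # Clamp the day (e.g., Jan 31 + 1 month -> Feb 28 or 29)
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30,
                31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

def next_assessment_due(last_assessment: date) -> date:
    """The next vendor risk assessment must occur within 15 calendar months."""
    return add_calendar_months(last_assessment, 15)

print(next_assessment_due(date(2020, 1, 31)))  # 2021-04-30
```

In practice you would of course track this per vendor, alongside the documentation Part 1.4.3 requires.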

To summarize, I thought originally that the BCSI SDT would have to choose between encryption and provider certification for the magic wand that will make it allowable to put BCSI in the cloud. However, I found out two weeks ago that the SDT – whose work I find very impressive – has figured out how to allow both options (and potentially others as well), through the magic of risk-based requirements. I want to say a little more about CIP-011-3 R1 and other risk-based requirements, but I’ll save that for a post in the near future. It’s coming to a computer or smartphone screen near you!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. My offer of a free webinar on CIP-013, specifically for your organization, remains open to NERC entities and vendors of hardware or software components for BES Cyber Systems. To discuss this, you can email me at the same address.

[i] I have stepped into the breach for my CIP-013 clients and developed – with those clients – lists of threats, vulnerabilities and mitigations for supply chain cyber risks to the BES. We’ve developed these from going through documents like NIST 800-161, NIST 800-53 and the NATF Criteria, as well as general random observations from news stories, etc. I’m regularly updating these – again with clients – and providing the updates to my clients, even after I’ve finished my project(s) with them.

Wednesday, January 15, 2020

The NATF Criteria, Part I

CIP-013 compliance is hard to figure out, I’ll freely admit. The main reason is quite clear: the standard reads more like an exercise in haiku than a mandatory standard with huge potential fines for violation. There are no more than five sentences in the entire standard, and there may be fewer than that.

In theory, this should be welcome news to people at NERC entities who are tasked with complying with CIP-013. After all, since a NERC entity can only be held liable for violating the strict wording of the requirements, and since the three requirements give very little information about what must be done to comply with them (my favorite is R2, which reads in its entirety “Each Responsible Entity shall implement its supply chain cyber security risk management plan(s) specified in Requirement R1”), CIP compliance professionals should be jumping for joy at the idea that they can finally decide for themselves the best way to mitigate BES cyber security risks.

In practice, of course, just the opposite is happening with many people, as I discussed in this post last year. People who are used to the very prescriptive NERC CIP requirements are desperately seeking something…something!...that they can hang their hats on as they develop their CIP-013 plans. Something that tells them what should be in a good plan, and which the auditors are likely to be in agreement with come audit time.

This is why I and many others were happy when the North American Transmission Forum (NATF) put out its spreadsheet “Cyber Security Supply Chain Criteria for Suppliers” last summer. While this isn’t official NERC guidance, it does provide a very good set of questions that a NERC entity can ask of its suppliers (and like everything else in CIP-013, it could be useful for any electric utility anywhere, not just those in North America that happen to be subject to CIP-013) – which is another way of saying that the document identifies a number of risks that can be found at many suppliers.

Since CIP-013 R1.1 requires the entity to “identify and assess” supply chain cyber security risks to the BES, I’m sure some people consider the Criteria to be a complete listing of all the risks that need to be mitigated in R1.1. However, those people are mistaken. There are lots of risks that IMO need to be considered for inclusion in any CIP-013 plan, including some that NATF never intended to address, and some they may have intended to address, but missed.

So I consider the Criteria to be one among a number of documents (NIST 800-161, NIST 800-53, the DoE Cybersecurity Language for Energy Delivery Systems, ISO 27001, NISTIR 7622 and more) that should be scoured for threats, vulnerabilities and mitigations that are applicable to ICS and to CIP-013 in particular. Over the past year of working on CIP-013 with clients, I and my clients have compiled spreadsheets of these - as well as their interrelations; I just recently added the NATF Criteria to them (some were already in my documents, although many Criteria weren’t). I and my clients are constantly adding to, removing, and refining the entries in the spreadsheets (which I share among all of my clients, since there’s nothing in them that’s proprietary to one client).
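One lightweight way to keep such a cross-referenced list is a small risk register keyed by risk, with the source documents and candidate mitigations attached to each entry. A minimal sketch (the risk wording and document tags below are illustrative, not entries from my actual spreadsheets):

```python
# Each entry ties a supply chain risk to the framework documents that
# identify it and to one or more candidate mitigations.
risk_register = [
    {"risk": "Vendor remote access lacks multi-factor authentication",
     "sources": ["NATF Criteria", "NIST 800-53"],
     "mitigations": ["Require MFA contractually",
                     "Ask about MFA in the vendor assessment"]},
    {"risk": "No background checks on software developers",
     "sources": ["NIST 800-161"],
     "mitigations": ["Ask about personnel screening in the questionnaire"]},
]

def risks_citing(document: str):
    """Return the risks that a given framework document identifies."""
    return [e["risk"] for e in risk_register if document in e["sources"]]

print(risks_citing("NATF Criteria"))
```

The point of the structure is the many-to-many mapping: one risk can appear in several documents, and one document never covers every risk, which is exactly the argument of this post.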

What do the Criteria do, and what don’t they do?

There is one big trap that the NATF Criteria avoid, for the most part. Like all of the CIP standards, CIP-013 applies to control systems, not systems that are used to store and/or process information. The biggest problem with almost all frameworks of supply chain cybersecurity risks (or of cybersecurity risks in general) is that they focus mostly (or even entirely) on information systems. This means they say a lot about threats to confidentiality (personal data privacy, web site security, lack of encryption of data at rest, protection of intellectual property, etc.), and very little about threats to availability, which is of course the number one concern with control systems. The NATF Criteria, on the other hand, were written by people who understand the paramount importance of availability, so almost all of the Criteria identify risks that should definitely be considered by anyone developing a CIP-013 plan.

However, the Criteria don’t in any way cover the entire set of risks that should be considered – or even a representative sample of all the different types of risks. I don’t think the people who drafted the Criteria ever thought these would be considered to be the entire set, but I know some entities are considering them to be just that. Here’s what the Criteria don’t include:

First and perhaps most importantly, they don’t address any risks that apply to the entity itself. For example, FERC talked a lot in Order 829 about installation risks. FERC issued the order about seven months after the first Ukraine attacks were announced, and they focused on the attacks in the Order – pointing out that if the Ukrainian utilities hadn’t installed their HMIs on the IT network, it would have been much harder for the attacks to have succeeded. FERC wanted the new standard to ensure that the risk assessment that utilities do when starting a procurement always considers risks of insecure installation. And R1.1 specifically includes that word, to indicate installation is one of the areas of risk that must be included in the entity’s supply chain cyber security risk management plan.

And guess what? Installation isn’t a supplier or vendor risk; yet one rarely hears anyone in the NERC community (except for me, since I enjoy being the skunk at the garden party, in case you never noticed) talking about anything but vendor risks in CIP-013 – and I’m sure many if not most CIP compliance professionals would be surprised if you told them that they need to consider risks introduced by their own organization, not just by vendors.

I believe that most NERC entities do installations themselves, but even if they do sometimes have a supplier or vendor do that work, it’s always the utility’s responsibility to make sure it’s done properly, not the vendor’s. I really doubt any NERC entity – or any responsible organization, period – would just wave toward their data center or substation and tell the vendor “Just install the stuff there and don’t bother me with any of the details. I’ll check in with you in a few hours to see how you’re doing.”

And there are a lot of other risks that apply to your organization, not your vendors. For example, does your supply chain department have a policy of always buying products at the lowest possible price, so that when Mr. Kim’s Discount Routers in Pyongyang, North Korea sends them an email offering knock-off Cisco routers at incredibly low prices, they don’t hesitate a minute before placing the order? Probably not, but are there specific guidelines for when cyber risk needs to be considered over price and when it doesn’t (if discount paper clips were being offered, this deal might be worth considering)? This is another example of a risk that applies to your organization, not to your vendor.

But even when we get to true supplier/vendor risks, there are a number of important risks that aren’t listed in the Criteria at all. These include:

  1. Risks due to use of open source software;
  2. Risks due to third party software that is embedded in the software package that the utility buys (I read recently that 80% of the code in commercial software was really written by third parties. This sounds pretty high, but even if it’s 50%, that’s still a serious issue. What’s even worse is that the developer may not even know what parts of their code are due to third parties – and they almost certainly don’t know which parts of those third parties’ code are due to fourth parties, etc. The Heartbleed vulnerability in OpenSSL was a vivid example of the problem: Many software suppliers had no idea whether their code was vulnerable or not, since so much of it came from third parties);
  3. Lack of multi-factor authentication for the vendor’s own remote access system – which, according to DHS’ briefings in the summer of 2018, allowed the Russians to penetrate over 200 vendors to the power industry;
  4. Lack of adequate security in a software supplier’s development environment, or in a hardware supplier’s manufacturing environment – including adequate separation of development and IT networks, and lack of multi-factor authentication for access to the development environment; and
  5. Inadequate training and other measures to mitigate the risk of ransomware, and measures to prevent any ransomware attack from being automatically spread to your network (including isolation of machines used for remote access to your network).
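Risk #2 above is exactly the problem that a software bill of materials (SBOM) is meant to address: if you know which third-party components are in a product, you can check them against known-bad versions. A toy sketch of that check follows; the component inventory and the vulnerable-version list are made up for illustration (a real check would query a vulnerability database such as the NVD):

```python
# Hypothetical inventory of third-party components in a purchased product.
sbom = {"openssl": "1.0.1f", "zlib": "1.2.11", "libxml2": "2.9.10"}

# Illustrative known-vulnerable versions (Heartbleed affected OpenSSL
# 1.0.1 through 1.0.1f; this list is for demonstration only).
known_vulnerable = {"openssl": {"1.0.1e", "1.0.1f"}}

def vulnerable_components(inventory, vuln_db):
    """Return (component, version) pairs that match a known-bad version."""
    return [(name, ver) for name, ver in inventory.items()
            if ver in vuln_db.get(name, set())]

print(vulnerable_components(sbom, known_vulnerable))  # [('openssl', '1.0.1f')]
```

The catch, as the Heartbleed episode showed, is that many suppliers can’t produce this inventory in the first place – which is why the question is worth asking them.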

This is a lot. Do you have to mitigate all of the risks you identify? No. CIP-013 is the first NERC standard (and certainly the first CIP standard, with CIP-014 being a possible exception) in which the NERC entity is explicitly allowed to accept risks. And keep in mind that you probably have the majority of these risks – both those that apply to your vendors and those that apply to you – already mitigated by various policies and procedures that either you or your vendors have implemented.

But you need to try to consider all of the important supply chain risks, so that you can concentrate your resources on mitigating those risks, rather than unimportant ones. Remember, you’re not required to spend any particular amount of resources complying with CIP-013. But you should always aim to mitigate the most possible risk, given the resources you have available.
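“Mitigate the most possible risk, given the resources you have” is, at bottom, a prioritization problem. One simple way to sketch it: score each risk, estimate the cost of mitigating it, and fund mitigations in order of risk reduced per unit of cost until the budget runs out. The names, scores, and costs below are invented for illustration; real scoring involves far more judgment than arithmetic:

```python
# (risk name, risk score reduced if mitigated, mitigation cost)
risks = [
    ("Vendor remote access without MFA", 9, 2),
    ("Unvetted open source components",  6, 3),
    ("Insecure installation procedures", 8, 4),
    ("Discount-router procurement",      3, 1),
]

def prioritize(risks, budget):
    """Greedily fund mitigations with the best risk-reduced-per-cost ratio."""
    funded = []
    for name, score, cost in sorted(risks, key=lambda r: r[1] / r[2],
                                    reverse=True):
        if cost <= budget:
            funded.append(name)
            budget -= cost
    return funded

print(prioritize(risks, budget=6))
```

A greedy pass like this isn’t optimal in every case, but it captures the idea: spend where each dollar buys the most risk reduction, and consciously accept what’s left over.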


Friday, January 10, 2020

The scariest supply chain attack yet

I didn’t mean for this to happen, but it turns out that this is my third post this week about cyberattacks that were revealed to be supply chain ones. Sunday’s post described a likely attack during the 2016 election, while Monday’s was about an event at the London Stock Exchange last August that’s now suspected to be a cyberattack that – of course – came through the supply chain.

My Supply Chain Attack of the Day (I hope I don’t start doing these every day!) was called to my attention by Kevin Perry, who now that he is (semi-)retired seems to have a lot more time to delve into news articles that I either don’t see in the first place or don’t bother to read because they don’t seem interesting.

And in my defense, this article at first glance certainly seems to be one for the dustbin (or the bit bucket, to be more accurate). It describes a ransomware attack on a school district in Michigan. I’m sure that, if I even saw this article in the first place, I would never have read it, since a) it’s about a ransomware attack, and we all know that the only question every day is how many new ransomware attacks will be reported that day; and b) it’s on a school district, when it seems almost all ransomware attacks nowadays are on school districts, municipalities, mosquito abatement districts, etc.

When Kevin sent it to me, he pointed out that it was a supply chain attack, yet I didn’t even pick that up in my first reading. It was only when I went back over the article that I found this sentence in the second paragraph: “The malware…was found to have entered systems through a network connection with the district’s heating and cooling service provider.” That’s the only word about the supply chain connection.

And even then I didn’t see this as a big deal at first. After all, HVAC vendors seem to be a tried-and-trusted vector for supply chain attacks, with the most notable example being the Target attack of 2013. But then it struck me that this was the first supply chain ransomware attack that I’d heard of. And then I started thinking of what this means.

Let’s say your organization, having seen the devastation that ransomware can unleash, has spent lots of time and money in doing all the right things – especially awareness training – to prevent ransomware attacks. But then some yahoo at one of your vendors clicks on a link promising untold riches and BAM! – the dreaded screen appears, telling him he needs to pay $10,000 (or whatever) in Bitcoin to a particular address (and even providing a number to call for support, since the ransomware industry seems to lead all industries in providing excellent customer support. I wouldn’t be surprised if they’ve received some awards for this).

However, the worst part is this screen starts popping up on other computers in the vendor’s office that are networked with the yahoo’s computer, including a computer of someone who is at the time remotely connected to your network (alas, without an Intermediate System on your side to prevent infection!). And the next thing you know, computers on your own network start to be infected, despite all of your best efforts to prevent this from happening.

Hopefully, for your organization this won’t turn into one of those huge ransomware disasters that befell the cities of Baltimore and Atlanta. It turns out the school district in Michigan had the right procedures in place – and had tested them – so that the damage was remarkably limited (and they didn’t pay any ransom).  But the point remains: Ransomware is without much doubt the most serious cyber threat today. The idea that, despite all of your organization’s diligence in trying to keep it out, it could arrive directly from a vendor and cause just as much damage as if the initial breach had happened on your network, is something I find pretty scary.

So since this blog is for electric utilities, not K-12 school districts, what are the lessons to be learned? First, an Intermediate System like the one required by CIP-005 R2 would have blocked the ransomware, so it couldn’t have directly penetrated an ESP (of course, this applies only to Medium and High impact BES assets; Lows don’t have ESPs and therefore aren’t required to have Intermediate Systems to protect interactive remote access). But if your utility allowed vendor access to a system on your IT network and didn’t have an IS in place there, you might have had an IT network infection – and obviously, that’s not very pretty. So the first lesson is that you should strongly consider protecting remote access to your IT network to the same degree as to your OT network.

But there are a couple other lessons that are much more concerning:

First, as you probably know, an Intermediate System is just for Interactive Remote Access, not for machine-to-machine access. Let’s say a vendor’s system is connected into your OT network (which might be an ESP, or might not – but which is critical in either case) at the same time that the ransomware spreads on that vendor’s network. Guess what? The vendor’s system may itself get infected and spread the ransomware to your OT network. And then there might be a huge problem. I have never heard of a ransomware attack on an electric utility’s OT network, but it wouldn’t be pretty at all if it happened. Even worse, I have no idea how you could prevent this, other than banning all machine-to-machine access to your OT networks.

The second lesson is also quite scary, all the more so because it seems it may already have happened, and at a major utility. In a recent year, a major utility reported a malware attack on its IT network which required a big effort to eradicate from the network, but which was ultimately successful. The utility stressed that there hadn’t been any impact on electric operations.

However, a friend of mine who used to work for a small utility in the big utility’s control area recently told me this statement wasn’t entirely true. It’s true that the infection – which sounds like it was ransomware, not ordinary malware – didn’t actually spread to any operational networks; this includes the control centers, which are often managed by IT, since the devices in a control center are IT ones, not OT ones (i.e. they’re Intel architecture and Windows or Linux O/S, for the most part).

But the IT network was thoroughly infected, and the IT people fighting the infection decided that every single system (over 10,000) needed to be wiped clean and rebuilt from the previous day’s backup (of course, there was no question that good backups were available). But then they went further: Even though there was no evidence the Control Centers had been infected, IT decided it would be far too risky not to wipe and rebuild all of the systems in the Control Centers. If just one of them turned out to be infected, the ransomware could have spread back into the IT network, once it was brought back up.

The upshot was every Control Center server and workstation had to be brought down and rebuilt from backup. This means for a period of up to 12 hours, some (and during a part of that time, all) of the Control Center’s operations had to be conducted the old-fashioned way: by phone. Of course, this is something Control Centers rehearse all the time, and there doesn’t seem to have been any direct power system impact.

You might wonder (as did I) whether the utility identified this to be a Reportable Cyber Security Incident and reported it to the E-ISAC. I’d say that’s one for the lawyers. Monitoring and Control are BES reliability functions, so if those were really lost, then it should have been reported. But it sounds like the Control Center staff were able to keep things running – and there’s no reason at all to believe that the lights went off anywhere because of this. So in the end, I’m not particularly concerned whether this was reported or not.

But here’s the lesson: Often, people in the industry (especially if they’re involved with cybersecurity and NERC CIP) tend to think that the wall between IT and OT is impenetrable, at least when the OT network is a Medium or High impact BES asset and subject to all of the CIP requirements. That is, an infection on the IT network will literally never get through to the OT network, and OT doesn’t have to concern itself too much with what goes on on the other side of the wall. No matter what happens over there, we’ll be just fine here, and the ratepayers will never see any problem.

Well, guess what? The OT network in this case (the Control Centers) was never penetrated, yet it ended up being effectively down for many hours! The results would probably have been much the same if the ransomware had actually penetrated the Control Centers. The lesson is that OT needs to pay attention to what’s going on on the IT network, because a disaster over there will eventually come over here. And someday, when the CIP standards are all written as risk-based ones, I think they will need to address IT networks (not individual IT systems) in some way, to cover exactly this sort of situation.

As the great poet John Donne wrote, “never send to know for whom the bell tolls; it tolls for thee.”


Monday, January 6, 2020

Another day, another supply chain attack?

This morning, the Wall Street Journal published an article – which fortunately seems to be freely available here – saying that UK government agencies are investigating whether a trading disruption at the London Stock Exchange in August may have been caused by a cyberattack. The disruption was indisputably caused by a problem in the configuration of the exchange’s software, perhaps after an upgrade had been applied. The problem delayed the opening of the exchange by an hour and a half, and was the worst outage the exchange has suffered in eight years.

Of course, many people might wonder why this is being called a cyberattack, since it was related to a software upgrade. Upgrades go bad all the time, and nobody considers them cyberattacks (indeed, the exchange’s initial report on the incident made no mention of an attack). But there is evidently some evidence that the code may have been tampered with while it was being developed – the article says specifically that the U.K.’s Government Communications Headquarters (which monitors critical national infrastructure) is examining time stamps in the code, presumably because there’s some anomaly with them.

The WSJ isn’t saying this is a cyberattack, and I’m certainly not saying so, either. But this is another good indicator that the risk that a bad actor will penetrate a software developer and plant malware in the code, as happened with Delta Airlines in 2018 and Juniper Networks in 2015 (an attack that was enabled by a flawed encryption algorithm, perhaps implemented due to pressure from the NSA, just to thicken the plot a little more), isn't negligible.

As you develop your supply chain cyber security risk management plan for CIP-013 compliance (or if you develop it just because it’s a good idea, if you don’t have to comply with CIP-013), keep in mind that this is a risk you should consider for mitigation. Of course, it would be vast overkill to require a code review of all software you acquire, as a way of mitigating this risk. But it’s certainly not out of the question to include one or two questions about the security of a supplier’s software (or firmware) development lifecycle, when you assess them using a questionnaire or other means. If you find out their development lifecycle leaves something to be desired as far as cybersecurity goes, you can make the decision whether to stop buying from them, or you can try to persuade them - through means such as contract language or meetings with their management - to improve in this regard.
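If you do add questions about the supplier’s development lifecycle to your questionnaire, even a crude weighted score helps you compare answers across suppliers consistently. A minimal sketch – the questions and weights below are invented for illustration, not taken from any published questionnaire:

```python
# Weight reflects how much each secure-development practice matters to us.
questions = {
    "Separate development and IT networks?":     3,
    "MFA required for access to build systems?": 3,
    "Code reviewed before release?":             2,
}

def score_vendor(answers):
    """Score yes/no answers as a fraction of the total possible weight."""
    total = sum(questions.values())
    earned = sum(w for q, w in questions.items() if answers.get(q))
    return earned / total

print(score_vendor({"Separate development and IT networks?": True,
                    "Code reviewed before release?": True}))  # 0.625
```

A low score doesn’t by itself mean “stop buying from them” – as the post says, it means you have a documented basis for pushing the supplier to improve, through contract language or meetings with their management.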


Sunday, January 5, 2020

Take it from Vlad: “For best results, always hack through the supply chain!”

Last week, Kevin Perry, former Chief CIP Auditor of SPP Regional Entity, sent me the link to this article from Politico, which pointed out that Russia probably came a lot closer to directly hacking the 2016 election than most of us thought – in fact, it seems they might have directly affected the results (perhaps not the outcome itself, although there’s no way to know that) in at least one locality - Durham, NC.

This is of course quite scary, since if people don’t trust elections to provide accurate counts, all sorts of havoc could ensue (and that is, of course, exactly the purpose of the Russian election hacks!). However, Kevin was most interested in the fact that this successful or near-successful attack came through the supply chain – specifically a vendor in Florida, VR Systems, that provides software for local governments nationwide.

The article (and a predecessor article linked in it, which fills in details not in the main article) says that VR Systems was probably compromised through spear phishing emails, but that in itself was just half of the attack. It seems that VR was using remote access to troubleshoot software problems (including the problems that appeared in Durham a few days before the election), and it’s possible that the hackers, who were already present in VR’s network, used that connection to get into Durham’s systems, and perhaps those of other governments as well.

Of course, having direct remote access into critical election systems is the equivalent of having direct access (i.e. not through an Intermediate System) into an Electronic Security Perimeter – it’s something that should never have been done in the first place, and VR wasn’t supposed to be doing it now (although there aren’t any regulations governing election system cybersecurity, of course).

However, the story here is the vector of the attack. A lot of people talk about hacks on the power grid as being direct assaults on the firewalls of major utilities. Of course, these assaults are occurring every second. But they’re not getting through, and the bad guys – especially the Russians (or am I wrong? Are they now good guys and I’m the bad guy? I can never be too sure about these things…) – have figured out that they’re wasting their time with any further assaults on the front gate. Instead of trying to take the castle by storm from the front, it’s much better to go around to the rear in the dark of night, and break in the door that’s there for the tradesmen to use.

In a conversation with me, someone recently pointed to the CRISP system as providing great security for utilities. It certainly provides great security against frontal assaults, just as the Maginot Line provided impenetrable security against German frontal assaults during World War II. Of course, the French had nothing protecting the border with Belgium and the Ardennes Forest, and that’s how the Germans came in. In the same way, anyone who thinks CRISP is all we need to protect the North American power grid from cyber attacks will be pretty surprised when the Russians break through on the supply chain front (and it seems they already have, if the FBI, CIA and NSA are to be believed. Of course, what do they know about anything?).

And look at other Russian attacks, including:

  1. The NotPetya attack, the most devastating cyber attack ever, which started off as a supply chain attack on Ukrainian companies (in fact a huge percentage of them) that used a certain supplier of tax software.
  2. The DHS briefings in July 2018, at which Jonathan Homer said that some number (anywhere between three and two hundred, depending on who you talk to. Of course, if it were just one, that alone would be huge) of US utilities had been penetrated at the control system level, and the Russians could have caused outages if they wanted to – and presumably they planted malware so they could come back later if they decided it was time for an outage. These attacks came completely through vendors, who had been penetrated through their (i.e. the vendors’) remote access systems, as well as phishing emails.
  3. The Wall Street Journal article of January 2019, describing in great detail – and naming names – how the Russians had used phishing emails to gain footholds in vendors, and thence to penetrate electric utilities. While the reporters didn’t themselves cite instances of penetration of utility control networks, the article quoted Vikram Thakur of Symantec as saying at least eight utility control centers had been penetrated.

While the most recent WSJ article – by the same reporters, Rebecca Smith and Rob Barry – describes attacks that may have occurred in some cases through phishing emails sent directly to utilities, it’s certain that supply chain attacks are still going on. And the new Politico article confirms that the Russians like supply chain attacks for election infrastructure as well. Why change tactics when what you’re doing now is working?

So listen to your good friend Uncle Vladimir: When you really want to get the hack done and cause damage, there’s no better vector than the supply chain! He’s a man who should know…
