Saturday, August 31, 2019

The cost of regulatory uncertainty


This isn’t news to most electric power industry participants, but I’ll say it anyway: Uncertainty about whether an entity will be cited for NERC CIP violations if it puts BES Cyber System Information (BCSI) in the cloud imposes a big cost.

I received a good example of this when Mike Prescher of Black & Veatch, whom I know from the Supply Chain Working Group, wrote in about a problem they have. As you probably know, B&V executes a lot of large projects for utilities and other industries. To manage these projects, they have a number of tools that they’ve built in the cloud. For example, on telecom modernization projects – which always include a big enhancement of the customer’s telecom security – a project that would otherwise take weeks can take days, and one that would take months can take weeks, when they manage it with these tools.

But some of these projects are for the power industry, and I’m sure you can anticipate what I’ll say next: In those cases, they don’t currently use those tools because they’re worried about their clients being cited for CIP violations. Mike asked me to clarify what their options were, and I replied (in line with this post):

  1. There is nothing in the current CIP requirements that prohibits keeping BCSI in the cloud. The Information Protection requirement, CIP-011 R1, requires a program for protecting BCSI in “storage, transit and use”. As far as this Requirement is concerned, it doesn’t matter where the BCSI is, as long as it’s protected.
  2. The problem comes with four requirement parts in CIP-004, which govern controls on people who access electronic or physical locations of BCSI. And even then, the problem isn’t so much with the requirement parts themselves, but with the evidence required. There is simply no way a cloud provider could produce that evidence (which, as with most CIP requirements, needs to include documentation that the requirement part was followed in every instance, for every person) without abandoning the cloud business model altogether and becoming something like an outsourced data center, where electric utilities store OT servers for convenience but maintain full management of them.
  3. So in order for BCSI to be officially allowed in the cloud, there will need to be changes to some of the Requirements, or Measures, or both in CIP-004. NERC is in the process of putting together a new drafting team to make whatever modifications are required, but of course those are now years away from coming into effect. So the problem remains for the foreseeable future: Since the NERC entity can’t provide acceptable evidence of CIP-004 compliance when BCSI is stored in the cloud, any entity that has BCSI in the cloud will be open to CIP-004 violations. I told Mike that any entity that is looking for regulatory certainty should keep their BCSI out of the cloud.

There are two good reasons why more than a few NERC entities are now storing BCSI in the cloud. First, until recently NERC was planning on moving a huge trove of CIP compliance data (mostly BCSI, of course) to the cloud, in their Align project. That project’s on hold now for other reasons (it seems the company that develops the GRC tool they were going to use for it was bought by a Chinese entity – what could possibly go wrong with that?), but if they had gone ahead with the project, it’s hard to see how every NERC entity wouldn’t have felt completely free to move all of their BCSI to the cloud. How could they ever have been cited for a violation when NERC itself was the biggest violator? But even with that project on hold, it will be very difficult for NERC to ever come down hard on anyone storing BCSI in the cloud in the future, when they were perfectly happy to do it themselves.

Second, there’s the example of virtualization. It’s no more “legal” to utilize VMs, VLANs, or storage arrays within an ESP than it is to store BCSI in the cloud, yet I doubt there’s any NERC entity today with High or Medium impact BES Cyber Assets that would hesitate to use virtualization because of fears of getting cited for non-compliance with CIP – in fact, all of the NERC Regions talk freely about how to do virtualization properly in an ESP, and NERC itself has put out at least one document that discusses that subject. NERC entities have been virtualizing in ESPs for a long time; I know at least one entity that passed a CIP audit (probably the “first 13” spot check) in 2010 for their virtualized Control Center.

Does this mean that NERC entities need to just be patient and wait ten years before BCSI in the cloud is widely accepted by NERC auditors and entities? I certainly hope it’s not that long, but it will probably be 2-3 years before doing this is officially “legal”. Until then, NERC entities have the choice of crossing their fingers and putting BCSI in the cloud, or avoiding any uncertainty and not doing that. But as Mike points out, there are lots of costs to not putting BCSI in the cloud. And there will inevitably be more costs as time goes on.

Will we ever reach the point at which NERC entities snap and demand of NERC and FERC that they fix this problem? I doubt it, simply because it’s too easy to just allow your BCSI to be put in the cloud, especially when you know other entities in your Region are doing it (and I’m sure this is being done in all Regions – in fact, I attended a forum on using a certain cloud-based workflow tool for CIP compliance last week. There were about ten entities there, from three Regions. There are a good number of other entities using the same tool, in these and other Regions). If nothing is done to change CIP-004, I think BCSI-in-the-cloud will become like virtualization – close to ubiquitous, but technically still not allowed by CIP.

At the beginning of my first post in this series on the cloud, I pointed out that there are two cloud questions for CIP: putting BCSI in the cloud and putting BES Cyber Systems themselves in the cloud (e.g. outsourced SCADA). I ended up doing five posts on the first question; I’m concluding with the statement that the BCSI problem is essentially solved, since NERC entities are doing it now, and they’re passing audits – even if it may be years before this is totally “legal”.

My next post (I doubt it will be the last) in this series will be on the second question. That question is much harder, and the outlook for it is much darker. Currently, there’s no way to put BCS in the cloud and be anywhere close to 100%, or even 50%, compliant with CIP. And I don’t see that changing until the CIP standards are almost entirely rewritten.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. And Tom continues to offer a free two-hour webinar on CIP-013 to your organization; the content is now substantially updated based on Tom’s nine months of experience working with NERC entities to design and implement their CIP-013 programs. To discuss this, you can email me at the same address.

Tuesday, August 27, 2019

In case you wondered whether CIP-013 is just a compliance exercise…



A couple weeks ago, Tom Hofstetter of NERC sent around a link to the Supply Chain Working Group of the NERC CIPC. The link was to a story about a lawsuit filed by Delta Airlines against [24]7.ai, Inc., the supplier of the chatbot software on Delta’s (and other companies’) web sites. Delta blames that software for a 2018 breach (which Delta reported immediately) that resulted in the loss of the credit card information of 800,000 to 850,000 Delta customers. At around the same time, Sears had reported a breach of the same software on their web site.

The 2018 stories on this breach focused on the consequences of the breach, but didn’t say anything about how it came about. That’s understandable, since nothing had been publicly revealed about that. But Delta’s lawsuit makes it clear that the cause of the breach was a classic supply chain attack: The software supplier’s development process was compromised, and a back door was inserted. The lawsuit also makes the case that it was the vendor’s poor security practices (which belied the assertions it had made to Delta in its proposal) that led to the breach.

I’ll bet a lot of the people who read this post are currently working (or will be soon) on their CIP-013 supply chain cyber security risk management plan. While this breach obviously doesn’t involve the BES or an electric utility, it still furnishes a good illustration of how your plan can help you mitigate real supply chain risk.

CIP-013 R1 has two parts. R1.1 requires the entity to develop a plan to “identify and assess” supply chain cyber risks (unfortunately, the drafting team left off the word “mitigate”, but that is also clearly required. CIP-013 makes no sense – and FERC would never have ordered it in the first place – if all it required was just identifying and assessing risks, but not doing anything about them). R1.2 lists six objectives the entity has to meet in their plan. These are mitigations for six (actually eight) specific risks. While the NERC entity can choose whether to mitigate or accept the risks it identifies in R1.1, in R1.2 there is no choice – the entity has to mitigate all of these risks (although they’re given lots of leeway for deciding how to do that).

Since none of the parts of R1.2 deals with software development, this is a risk that should be considered under R1.1. Again, R1.1 requires you to “identify, assess (and mitigate)” supply chain cyber risks. There’s only one problem with this wording: There are probably a huge (if not infinite) number of such risks. Do you need to identify, assess and mitigate all of them?

If you happen to have an infinite budget for supply chain cyber risk management at your disposal, I do recommend that you try to mitigate all of these risks, although given the relatively short human lifespan, you may want to make provision for a successor, a successor to that successor, etc. However, there’s no real problem with doing that, since you do after all have an infinite budget.

But let’s say you live in the real world and you don’t have an infinite budget – in fact, it’s probably a lot less than you wish you had. Given that it’s unlikely you’ll be able to get more (at least for a year or so), you should try to make sure that you spend your resources – dollars and time – as effectively as possible. And what does “effectively” mean in this case? It means you mitigate the most risk possible, given your available resources. How do you do that? The three steps are identify, assess and mitigate.

Identifying Risks
First, you identify risks that you believe are significant. And what’s a significant risk? One that you think has some chance of actually being realized. Consider the risk that a hurricane will flood a Medium impact substation, and in the process destroy a BES Cyber System that has been in service for years and for which the supplier has gone out of business – leaving you in a big pickle for finding replacements.

Is this a significant risk? If the substation is in the middle of the desert in Arizona, probably not. But if it’s near the South Carolina coast, it probably is significant. So a utility in Arizona won’t even consider this risk, while one in South Carolina probably will. Since both utilities – in this illustration – have to comply with CIP-013, they will both look for risks they consider significant and put those on their list. It would be good if CIP-013 itself provided a list of significant supply chain security risks to the BES, but it doesn’t (mainly due to the fact that FERC gave NERC just one year to develop the standard, get it approved, and send it to them). However, NATF last week released a set of “Criteria” for BCS vendors, which I think at least provides a good starting point for vendor risks (you can get it by going here). Note that vendor risks aren’t the complete universe of supply chain risks; you also need to identify risks that come through your own organization (e.g. the risk that you would buy a counterfeit BCS component that would contain a vulnerability that could be exploited), as well as from external sources such as the hurricane just discussed.

Assessing Risks
Once you have a list of significant risks, are you finished? Should you then roll up your sleeves and start mitigating each one? I don’t recommend it. It’s likely you’ll have a big list, and mitigating all of these risks (for instance, the NATF document alone lists 68 “supplier criteria”) won’t be easy, especially with the limited time and budget you have available for supply chain cyber risk mitigation.

You now need to assess these risks by assigning each one a risk score, based on its likelihood and impact. And once you’ve done that, you need to list them all in a spreadsheet, ranked from highest to lowest risk score. Then you should choose the highest risks to mitigate. Where do you draw the line? You estimate your resources and then choose the risks that you have the resources to mitigate. But it should always be those that pose the highest risk for your entity’s BES assets.

By doing that, you’ll be assured of mitigating the most possible risk, given your available resources. You could always choose some lower risks to mitigate and ignore some higher risks, but this will only assure that you’re not getting the highest return for your risk mitigation investment. For example, the utility in Arizona could spend a lot of money protecting their substations against flooding during hurricanes – but wouldn’t it be better if they chose some risks that are much more likely to be a problem to them?
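To make the assess-and-rank step concrete, here is a minimal Python sketch of what the “spreadsheet” described above might look like as a script: each identified risk gets a likelihood and impact score, the product is the risk score, the list is ranked, and risks are selected for mitigation until a (hypothetical) resource budget runs out. The risk names, scores, and effort numbers are made-up placeholders, not a recommended list.

```python
# Rank identified supply chain risks by likelihood x impact, then select the
# highest-scoring ones that fit within the mitigation resources available.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int           # 1 (very unlikely) .. 5 (very likely)
    impact: int               # 1 (negligible) .. 5 (severe)
    mitigation_effort: float  # rough cost in whatever unit you budget (e.g. $k or staff-days)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("Compromise of a supplier's software development process", 2, 5, 40),
    Risk("Counterfeit BCS component containing an exploitable vulnerability", 2, 4, 25),
    Risk("Hurricane flooding a Medium impact substation (coastal entity)", 3, 4, 60),
    Risk("Hurricane flooding a Medium impact substation (desert entity)", 1, 4, 60),
]

budget = 100.0  # total mitigation resources available this cycle (hypothetical)

# The "spreadsheet": ranked from highest to lowest risk score.
ranked = sorted(risks, key=lambda r: r.score, reverse=True)

selected, remaining = [], budget
for r in ranked:
    if r.mitigation_effort <= remaining:
        selected.append(r)
        remaining -= r.mitigation_effort

for r in ranked:
    decision = "MITIGATE" if r in selected else "accept/defer"
    print(f"{r.score:>2}  {decision:<12}  {r.name}")
```

A spreadsheet does the same job, of course; the point is only that the ranking and the budget cutoff are made explicit, so you can show why you mitigated what you did.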

So how does the Delta breach fit in with this schema? Frankly, maybe six months ago I would have assigned a fairly low likelihood – and therefore a low risk score - to the risk that someone could penetrate a supplier’s software development process and insert a backdoor in the product (and I would have done this, knowing full well about the Juniper breach of 2015); and I would probably not have objected to a client telling me they didn’t think this was a risk worth mitigating. I might have even accepted the argument that surely, after the Juniper breach, software developers had tightened up their development environment controls, so that it would be hard for an attack like that to succeed again.

Obviously, the Delta breach shows that this particular type of supply chain attack is alive and well. So this may be a risk that many NERC entities will decide they should mitigate.

Mitigation
And how do you mitigate a risk like this? You obviously can’t force a supplier to have good security, but you can certainly ask them to commit to it – whether in an RFP response, contract language, a letter, an email, or even a verbal conversation. But that’s never enough. Even though Delta’s supplier committed in their proposal to having good security practices (in fact, they said they had them in place already), they just didn’t do it. You will need to regularly assess the supplier, usually with a questionnaire, but – if you have reason to suspect they won’t tell you the truth – you might have to do an audit (your contract should always permit you to do this).

But sometimes even that isn’t enough. In Delta’s case, the supplier had given them further assurance of their security (in a GDPR compliance attestation) even after the contract had been signed – at the same time that they were developing the software for Delta in an insecure environment! This is why you need backup mitigations. One would be to do vulnerability assessments on any software or hardware (firmware) you purchase, before you install it – if you think the risk justifies doing this (and I know at least one utility that does this for anything they purchase). Another would be to require – in the contract or RFP – that the supplier do a vulnerability assessment themselves, before they ship any product to you. Obviously, Delta’s supplier didn’t know about the backdoor.
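A full vulnerability assessment of purchased software is tool- and product-specific, so I won’t pretend to sketch one here. But one small, scriptable check that complements it is verifying a downloaded installer or firmware image against a checksum the vendor published through a separate channel, before anyone installs it. This is only a sketch; the file name and hash below are hypothetical placeholders.

```python
# Verify a downloaded file against the vendor-published SHA-256 checksum
# before installation. This is a basic integrity check, not a substitute
# for a vulnerability assessment.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

vendor_published_hash = "replace-with-the-hash-the-vendor-published"  # obtained out-of-band
downloaded_file = "relay_firmware_v2.3.1.bin"                         # hypothetical file name

if sha256_of(downloaded_file) != vendor_published_hash:
    raise SystemExit("Checksum mismatch - do not install; contact the vendor.")
print("Checksum matches the vendor-published value.")
```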

Vulnerabilities
For any risk to be realized (although I prefer the word ‘threat’ here), there must be at least one vulnerability that enables it. This means that, in order to mitigate the risk, all of the vulnerabilities need to be mitigated, since even one open vulnerability can allow the risk to be realized. Think of the risk of a thief breaking into your house; your doors and windows are the potential vulnerabilities that could allow that to happen. If all of your doors and windows are very well secured except for one door that’s wide open, it’s likely you will be robbed. You have to mitigate every vulnerability in order to mitigate the risk itself.

In the lawsuit, Delta provided a list of vulnerabilities that the supplier hadn’t mitigated, which could have allowed the breach to happen: “allowing numerous employees to utilize the same login credentials; did not limit access to the source code running the [24/7] chat function to only those individuals who had a clear need to access that code; did not require the use of passwords that met PCI DSS…standards; did not have sufficient automatic expiration dates for login credentials and passwords…; and did not require users to pass multi-factor authentication prior to being granted access to sensitive source code.” In other words, these were all open doors that made it much more likely the supplier’s development process would be breached. These are all items that should be addressed with the supplier, through contract language, letter, phone call, email, carrier pigeon, etc. While you should have a general requirement for a secure software development lifecycle, you should also specifically require controls like the ones Delta lists.

Supplier Incident Response
Perhaps the most outrageous part of the supplier’s behavior was that it took them five months to notify Delta of the breach, and even then it was only through sending LinkedIn messages to a few Delta employee contacts. In fact, it seems that, as of the lawsuit’s filing, the supplier still hadn’t given formal notice to Delta. It’s a good idea to require suppliers to create a Supplier (Vendor) Incident Response Plan, which will detail exactly how the supplier will handle an incident like this. Of course, you should have the right to review that plan and suggest changes you think are necessary.

The NERC CIPC Supply Chain Working Group is putting the final touches on a white paper of guidelines on vendor incident response plans. I hope it will be posted to the NERC web site soon, along with five other papers that have been developed and just need final approval.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. And Tom continues to offer a free two-hour webinar on CIP-013 to your organization; the content is now substantially updated based on Tom’s nine months of experience working with NERC entities to design and implement their CIP-013 programs. To discuss this, you can email Tom at the same address.

Wednesday, August 21, 2019

Is this the cloud’s Tylenol™ moment?



My last post was prompted by a Wall Street Journal article on the Capital One breach that pointed out that Paige Thompson – according to prosecutors – stole “multiple terabytes” of data from 30 companies. It also pointed out that Ms. Thompson “displayed a high level of technical knowledge of the inner workings of Amazon’s cloud”. And I had already pointed out (based on the first WSJ article on her, which was the main inspiration for this previous post) that she stated in an online forum that lots of Amazon customers make mistakes in configuring firewalls, and specifically pointed to misunderstanding of Amazon’s “metadata” service as a root cause of most of those mistakes.

My last post ended on a pretty pessimistic note, suggesting that probably the only way that Amazon could completely prevent technical staff who leave on bad terms from taking out their frustration on Amazon’s customers (and Capital One is now the poster child for this) is to kill them all (i.e. “terminate with extreme prejudice,” as the CIA agent told Martin Sheen at the beginning of Apocalypse Now). As we all know, this is frowned on as a poor practice in most business schools, and it could also land Amazon in some legal hot water – so it really isn’t a good option. I came to the conclusion that there is no other good option based on the following considerations:

  1. This isn’t an insider problem, of course. Ms. Thompson was no longer an insider when she perpetrated her attacks, having been fired by Amazon in 2016. I’m sure her credentials had been thoroughly revoked, etc. It wasn’t for lack of following normal best practices that Amazon allowed this to happen.
  2. In attacking these 30 Amazon customers (and I’ll admit that the more recent WSJ article didn’t say these were all AWS customers, probably because the prosecutors didn’t say so, either), she exploited the inside knowledge she gained while working at Amazon (that was definitely true for the Capital One breach, as well as at least the two others mentioned in the first WSJ article).
  3. You might be tempted to say “Well, IT staff are fired from companies all the time, and they often take with them insider knowledge of those companies’ security controls or lack thereof. Some of them probably do use that knowledge to hack into their former employers, but this doesn’t seem to be an unmanageable problem. We’ve been living with it for years.”
  4. That’s true, but what’s different when you’re talking about someone who was fired from a cloud provider is that the knowledge they have can be used to attack many customers of that provider, since they all have to configure their defenses in the provider’s environment, and they may not understand the environment well enough to do that safely. If Paige Thompson had worked for Capital One and been fired, whatever knowledge she had about C1’s defenses would be highly unlikely to be useful in attacking any other reasonably well-defended company, let alone 30 of them. That’s what makes the fact that Amazon is a cloud provider so significant.
  5. And even if Ms. Thompson gets thrown into the deepest dungeon in Seattle for life plus 50 years as a deterrent to other cloud ex-insiders who would like to follow in her footsteps, that doesn’t stop those ex-insiders from monetizing their insider knowledge by selling it. I’m sure the various hacking groups (some government-sponsored) in places like Iran, North Korea, Russia and China would pay good money for that knowledge (and the price they would pay has now gone way up, given that Ms. Thompson has shown how it could be applied to reap big rewards – although she didn’t choose to go that route).
In other words, this is a potential problem on a scale not seen before in the cybersecurity world. Compared to this, ransomware hackers who make maybe a few thousand (or a few hundred thousand if they’re lucky) on each score are playing penny-ante poker. Ms. Thompson seems to have been uninterested in monetizing the data she had stolen, so it’s possible the impact of her exploits may be fairly limited. However, others who come after her will be much more interested in the filthy lucre side of things, and therefore much harder to detect (presumably they won’t brag about what they’re doing on online forums) and stop. How can this problem at least be controlled, if not solved?

The reason there were at least 30 Amazon customers who were such easy prey for Ms. Thompson (and definitely many more whom Paige just never had the time to get around to hacking, there being only so many hours in the day for a busy hacker) isn’t, of course, that their security staff members were without exception complete idiots. It’s that the Amazon network environment is clearly far less understandable than it should be, for people with reasonably good security chops who are assigned the task of configuring their organization’s firewalls in the Amazon cloud. And as I pointed out in my last post, Amazon – after first putting the entire blame for the Capital One breach on C1 – now seems to be admitting that they have to educate their customers better, as well as make changes to their “cloud subsystems” that will make these breaches less likely to happen.

Which brings me to Tylenol. I’ll forgive those of you who weren’t avidly reading the papers (and especially in the Chicago area) in 1982 if you don’t know much about this crime, in which someone spiked a number of bottles of Tylenol capsules in stores in the Chicago area with cyanide, killing seven people, but I can assure you it was a very big deal back then. Of course, the perpetrator of these murders was a piker compared to present-day mass murderers, but hey – he just didn’t have the tools that guys (and they’re all guys. I have yet to hear of a female mass murderer in the US, although there have been a number of female suicide bombers in other countries) have nowadays.

However, the bigger story wasn’t the crime itself, but Johnson & Johnson’s reaction to it. Even though it was clear that there was nothing they’d done wrong, other than not anticipating that someone would do such a thing, J&J responded in a way that has made it a textbook example of how to respond to a security threat that could potentially send your whole business down the tubes. They halted all production and advertising of Tylenol and recalled all 31 million bottles that were in circulation. Then they announced through the media and over loudspeakers (in Chicago) that nobody should consume any Tylenol products they had in their possession, and that they would replace any capsules with solid Tylenol pills. J&J didn’t start selling the product again until they had developed tamper-resistant bottles like the ones we’re all familiar with today.

In other words, J&J decided to treat this as the maximum problem that it could have been, even though another manufacturer might have moved heaven and earth to show this was just a local problem (only eight bottles were ever discovered that had been tampered with, all in the Chicago area. Several other people were killed in copycat attacks). Those manufacturers would probably have told people outside of Chicago that the products they currently had were safe (as they indeed were, with 99.9% certainty), and they would have probably moved with much less alacrity to provide tamper-proof bottles.

J&J realized that, while the alternative approach might have worked, and would certainly have been far less costly, there would probably always have been a cloud (no pun intended – OK, not intended until half a minute ago, anyway) hanging over Tylenol. By taking this action, they ensured that Tylenol remained the huge seller that it is today. In fact, they probably ultimately increased their sales, because there was widespread public approval of J&J and admiration for their actions.

OK, by now you’re saying “Yeah, we get it. Amazon needs to act like J&J. But they can’t put tamper-proof packaging on the cloud, so what can they do to emulate what J&J did?”

Here’s what I think Amazon should do. This is really far less radical than what J&J did, but it might possibly be enough to decisively address this problem:

  1. Even though I’m sure Amazon has always offered an option for customers to pay them to handle security, they obviously didn’t warn customers sufficiently of the dangers they might face if they handled security for themselves. They now need to come clean and admit that. What Amazon did was something like what Boeing did – although on a very different scale, of course – when Boeing assumed, in designing the MCAS system for the 737 MAX, that pilots would always know exactly what they had been trained to do in every emergency, even when a bunch of different emergency lights and sirens were going off at the same time. Amazon has the chance to keep this problem contained to Capital One, and that should be their goal.
  2. They should offer free security consulting to every one of their customers – going over the configuration of their firewalls and other security measures, to make sure they are properly configured.
  3. They should provide training – for free, of course – to all of their customers, covering the intricacies of their infrastructure (or “cloud subsystems”), and especially the metadata system, insofar as those can affect the customers’ security measures.
  4. Most importantly, they need to re-engineer those infrastructure systems with the objective that, as much as possible, securing customer cloud networks won’t be very much different from securing networks installed at their customers’ own data centers. And any differences should be thoroughly documented, rather than being left to the Paige Thompsons of the world to smirk about online, then use to exploit the very customers they once were charged with protecting.
Of course, this isn’t a prescription for Boeing. A course of action something like this won’t fix Boeing’s problems. I can’t begin to prescribe what they should do, although I know that what they’re doing now won’t be enough. Their problems are just beginning, because they continue to follow the “too little, too late” playbook (for example, where’s the chairman’s resignation? How could somebody possibly have presided over such a debacle and remain at the helm?). Amazon could possibly end their problems in the near future, or at least get them to the point where they’re manageable, as J&J’s problems ultimately were. But they need to do a lot more than they’ve done so far.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Thursday, August 15, 2019

The Cloud’s Achilles heel(s)


I’ve come to believe the Wall Street Journal has the best coverage of cyber security issues of any general news publication. Exhibit A is today’s follow-up, by Robert McMillan, to their article last Monday on the Capital One breach. Here are six important statements from the article:

  1. In a court filing on Tuesday, prosecutors said Paige Thompson had stolen “multiple terabytes” of data from more than 30 companies, educational institutions, and others.
  2. Ms. Thompson had “expressed frustration over her 2016 dismissal from Amazon…” in online discussion forums.
  3. Amazon had previously said that “…none of its services were the underlying cause of the break-in.”
  4. However, on Wednesday a company spokesman said the company is now “running checks and alerting customers if they have the kind of firewall mis-configuration that Ms. Thompson allegedly exploited.” Furthermore, “Amazon is also considering additional changes it can make to its cloud subsystems that will better protect its customers”, according to a letter dated Wednesday.
  5. Winning my 2019 Clueless in Seattle award hands down, an Amazon spokesman said “Other than Capital One, we haven’t yet heard from customers about a significant loss.” This is roughly the equivalent of the White Star Line, in their annual report for 1912, saying “Other than the loss of our ship the Titanic, we had a very good year.”
  6. The most significant statement in the article – from my point of view, anyway - is “Security experts who have viewed her posts said Ms. Thompson displayed a high level of technical knowledge of the inner workings of Amazon’s cloud.”

Here are my takeaways from the article:

a) It seems very possible that all of the 30+ organizations that Ms. Thompson may have breached were Amazon customers. It wasn’t that she was pinging random IP addresses and just happened to find C1 and these other Amazon customers, who had poorly-configured firewalls. She was almost certainly just attacking Amazon customers.
b) Ms. Thompson had been terminated by Amazon in 2016, and obviously felt she was treated unfairly. So there’s the motive (she doesn’t seem to have been trying to make money off all of the data she stole, although that’s not certain yet, of course).
c) The sentiment last week – both Amazon’s and, I believe, the online community’s in general – was that this breach was mainly a result of Capital One’s poor management of their online firewalls. However, that position is a little hard to maintain, now that there are at least 30 other victims. Sure, they all probably had made mistakes on their firewalls. But if any system is so difficult to figure out that 30 companies don’t get it right (plus God knows how many other Amazon customers who weren’t lucky enough to be on Ms. Thompson’s target list), it seems to me (rude unlettered blogger that I am) that Amazon might look for a common cause for all of these errors, beyond pure stupidity on the customers’ part. After all, there’s another large American company that’s in a bit of a pickle nowadays because they have a system that, while it probably works as specified, requires airline pilots to figure out a previously unknown (to them) problem in the 2-3 minutes they have left before they and their passengers all die – not exactly a great design, IMHO. So far, nobody has been killed by an Amazon breach – that’s one thing they can be happy about in Seattle.
d) Fortunately, it does sound like Amazon is finally acknowledging that they have to share the blame for these problems, as well as figure out a solution – better training for customers, hopefully a redesign of the systems to make them more understandable, etc.

But I almost forgot – this blog is about securing the Bulk Electric System, as well as information about the systems that run it. What does the above mean for that?

I think today’s WSJ article shows there’s a big Achilles heel (in fact, two of them) to cloud security. The first is the one we already knew about last week: that making security totally the customer’s responsibility is inviting trouble. The CSPs have to do a much better job of educating customers on the risks they face and how to mitigate them, as well as of providing them tools that are understandable. And the CSPs probably do need to look over their customers’ shoulders as they configure their firewalls, even if their lawyers will probably get apoplectic about the liability this might cause (memo to Amazon’s lawyers: It’s a little late to be worrying about liability in this matter. If you think Amazon is going to get off scot-free in court when the lawsuits from Ms. Thompson’s breaches are all settled, you’re grievously mistaken).

But the big Achilles heel is the fact that, while the CSPs are probably doing an excellent job of protecting against insider threats, they’re not doing any job at all (at least one of them isn’t) of protecting their customers against a disgruntled employee who was fired three years ago and has lots of knowledge of how internal security works at the CSP – as well as of how that security is typically misconfigured by customers. How do they protect against this threat? I suggested last week that maybe there’s a machine that can suck from an employee’s brain all of the knowledge they’ve accumulated about the CSP’s security, once they’ve left the company. At the moment, that’s about the only practical solution I can think of – but I’m sure it’s not ready yet (perhaps it’s in beta). Or maybe they can incarcerate any employee they fire on a desert island somewhere for say five years – at which point any knowledge they may still have would probably be useless. Or maybe they can bring new meaning to the term “employee termination”.

So many possibilities.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Friday, August 9, 2019

An ex-auditor provides a model for encrypting BCSI in the cloud


After my first post on BES Cyber System Information (BCSI) in the cloud on Monday, Kevin Perry, formerly Chief CIP Auditor for SPP RE, emailed me:

“I believe the following model goes a long way to protect BCSI in the cloud:

  1. The cloud service provider furnishes the servers and network infrastructure necessary for the NERC entity to store and manage its BCSI. 
  2. The entity manages security for its own network and servers.
  3. The entity owns and manages the data itself. BCSI should be protected by encrypting it.
  4. The entity needs to control the encryption keys, as well as who has the ability to decrypt the data. 
  5. BCSI access is limited to the entity’s staff who are authorized to access it.
  6. The encryption tools should automatically encrypt the BCSI any time it’s copied or moved to the CSP. If this doesn’t happen, the staff moving the data to the cloud must ensure the data is encrypted prior to the move.
  7. The CSP’s staff has absolutely no access to or control over the encryption keys and the encryption/decryption process.”


I agree with Kevin that this is a good model.
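For what it’s worth, here is a minimal Python sketch of items 3 through 7 of Kevin’s model: the entity generates and holds the encryption key, encrypts the BCSI before it ever moves to the CSP, and decrypts only locally. It uses the third-party “cryptography” package’s Fernet construction purely as an example; a real implementation would also need key rotation, strict access control on the key store, and protection of the data in transit. The file names below are hypothetical.

```python
# Client-side encryption of BCSI, with the key held only by the Responsible Entity.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

# Key is generated and held by the entity, never by the CSP (item 7).
key = Fernet.generate_key()          # in practice, store this in the entity's own key vault
cipher = Fernet(key)

# Encrypt the BCSI before it is copied or moved to the CSP (item 6).
with open("substation_network_diagram.pdf", "rb") as f:     # hypothetical BCSI file
    bcsi_plaintext = f.read()
ciphertext = cipher.encrypt(bcsi_plaintext)

# Only the ciphertext is what gets uploaded to the cloud provider.
with open("substation_network_diagram.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Authorized entity staff decrypt locally when they need the data (item 5).
recovered = cipher.decrypt(ciphertext)
assert recovered == bcsi_plaintext
```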


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.

Thursday, August 8, 2019

How can CIP be amended to allow BCSI in the cloud?



After Monday’s post appeared, I had extended email conversations with three people who are very involved with NERC CIP at their organizations (one is a retired CIP auditor – but he’s still very involved with CIP!). The conversations all centered around the recommendations that I provided for changes to the current CIP standards to accommodate BES Cyber System Information (BCSI) in the cloud (I made it clear at the start of Monday’s post that the question of BES Cyber Systems themselves being allowed to be put in the cloud – e.g. outsourced SCADA – is very different).

On Monday, I stated that I thought the best way to make sure all important cloud risks are addressed by CIP was for the drafting team to do three things:

1. Develop a risk-based standard or requirement like CIP-013 that requires the entity to develop a plan to identify, assess and mitigate[i] cloud risks.
2. In the standard, provide a list of areas of risk that need to be addressed in the plan, while leaving it up to the entity to identify the particular threats in each area (a term I prefer to “risks” in this context, although risks will work almost as well) that apply to them (this is something that wasn’t done for CIP-013. This makes it flawed, but it’s still the closest thing to a model for all other CIP standards that I know of).
3. For the four existing CIP requirement parts where this is needed, develop language for the Measures that would allow entities to cite a cloud provider’s certification under FedRAMP (and possibly other certifications) as evidence that they have complied.

I believed previously that FedRAMP would most likely address any possible threat that applies to data stored in the cloud, but – and this was the main point of my post – the Capital One breach points to a threat that clearly isn’t covered by FedRAMP. Since I never completely stated that threat, here it is now (or at least, what I understand of it), broken down into logical steps:

a) Each organization’s level of security within the cloud service provider’s network is usually their responsibility, not the CSP’s.
b) That level of security usually depends on the organization understanding the differences between – say – configuring a firewall in the cloud and configuring a firewall on a separate standalone network, like their own.
c) It seems many organizations don’t completely understand those differences. According to the Wall Street Journal, understanding Amazon’s “metadata” service is particularly important.
d) Paige Thompson, the Capital One hacker, mentioned in online forums that a lot of Amazon clients don’t understand this well, so they have misconfigurations.
e) After she left Amazon, she exploited this knowledge to hack into C1 and – by her admission – at least two other firms.

So the threat is that a current or former CSP employee will use his or her knowledge of likely customer misconfigurations to penetrate a customer’s systems at the CSP and obtain BCSI that way.

What are the mitigations for this threat? On the utility’s part, the most important mitigation is to make sure they understand all the nuances of the CSP’s network that could affect their security configuration, and if that is just too difficult for them, they should pay the CSP itself (I assume this is always an option) to configure their firewalls – and other cloud-based security devices – for them.
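As one concrete example of what “understanding the nuances” can look like in practice, here is a minimal sketch (using boto3, and assuming AWS credentials are already configured) that flags security group rules – AWS’s firewall construct – that are open to the entire internet. It’s only a basic self-audit; it does nothing about the metadata-service issue discussed above, and it isn’t a substitute for the CSP explaining its environment properly.

```python
# Flag AWS security group rules that allow inbound traffic from anywhere (0.0.0.0/0).
# Assumes AWS credentials and region are already configured for boto3.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(
                    f"Security group {sg['GroupId']} ({sg.get('GroupName', '')}) "
                    f"allows inbound {rule.get('IpProtocol')} "
                    f"ports {rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')} from anywhere"
                )
```

A periodic check like this won’t catch every mistake, but it at least forces the entity to look at its own cloud firewall configuration rather than assuming the CSP has it covered.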

But the CSP also has a mitigation they need to implement: They need to do a much better job of explaining these nuances to their customers, since it seems that Amazon (at least) hasn’t done this well enough. However, if C1 is really the only AWS customer who has made these mistakes, then I’d put all of the blame for the breach on them. But until I know whether or not C1 was unique, I have to assign equal blame to Amazon and C1.

This is where things stood for me on Monday regarding required changes to CIP to accommodate BCSI in the cloud. However, on Tuesday (as I’ve mentioned), three individuals told me several things that caused me to change my position (since one of the three – the ex-auditor – focused on a different issue than the one in this post, I’ll put his short comments up tomorrow night).

The most important of these was pointed out to me by a longtime friend who is in charge of CIP compliance for a mid-sized NERC entity, and who has some knowledge of discussions on the BCSI-cloud question in the NERC community. He said that the main thrust of the discussion within NERC circles appears, for the time being, to have shifted away from an administrative approach of allowing or accepting FedRAMP as the primary evidence of the vendor’s compliance (which of course had been my assumption; when I last participated in those discussions, that was definitely the idea) to a revised Standard language approach focused on making encryption of BCSI in the cloud (and perhaps in transit to and from the cloud) the main means of compliance with CIP-004 R4.1.3, CIP-004 R4.4, CIP-004 R5.3 and CIP-011 R2 (both parts). These are the requirements that cause all of the problems for BCSI in the current CIP standards. This could be done by changing the wording of these Requirement parts (as well as their Measures), so that encryption of the data is an option for compliance. In fact, my friend thinks that this is now being talked about as the only means of compliance – that is, the only thing that would be needed to show compliance is encryption; FedRAMP wouldn’t necessarily come into the picture at all.

To be honest, I find this idea pretty sub-optimal from a security point of view. Sure, encryption will solve almost every cloud risk I can think of (it definitely covers confidentiality and integrity. Availability would presumably be covered by having duplicate BCSI repositories elsewhere, either at the entity itself or at a different cloud provider). But I’m a firm believer in defense in depth, and putting all of your security eggs in the encryption basket seems like depending on the Maginot Line to protect you. Yes, it will protect you fine until something changes about your assumptions (the keys aren’t protected properly and are stolen, quantum computers become available that can make mincemeat of any conventional encryption, or something else), and then it doesn’t protect you at all.

But what if we do both? Along with the encryption requirement, what if the SDT also changed the Measures section for each of those four requirement parts, so that an entity could use their CSP’s FedRAMP certification (or encryption) to show they are compliant with that requirement part? That would help, but it’s not going to cover all cloud risks.

Here, I want to add something another friend of mine – Brent Sessions of the Western Area Power Administration – pointed out to me yesterday: There’s another whole area of threats that I’ve been ignoring regarding BCSI in the cloud, which should have been clear as day after the C1 breach. These are threats due to the fact that the cloud customer usually is responsible for their own security and – believe it or not – even customers can make mistakes.

As I said, I consider Capital One to be half responsible for their breach, because they hadn’t properly configured their firewall or firewalls – which were virtualized on the AWS network; the fact that AWS probably didn’t provide C1 with enough of a tutorial on the metadata service doesn’t mean C1 is off the hook morally. At the least, they could have pressed AWS for more information until they did understand the service. Of course, C1 has taken responsibility on their part, but if lawsuits against C1 start flying around later on, I wouldn’t rule out the possibility that C1 might invite (ever so nicely, of course) AWS to share the defendant’s table with them.

I believe there need to be three major elements in a new requirement part, which should be part of CIP-011 R1 (this is of course the current CIP requirement for information protection, and it’s risk-based to boot): 1) encryption of BCSI (including key protection and encryption of BCSI in transit to or from the cloud), 2) certification of the CSP, and 3) mitigating risks caused by the fact that the entity is most likely responsible for its own cloud security, while at the same time it’s operating in a new environment that’s in many ways very different from that of standalone networks and standalone firewalls.

I suggest that the drafting team draft a new CIP-011 requirement part R1.3, which would contain wording something like this:

“Measures to protect BCSI stored with a Cloud Service, including:
  • Methods to encrypt data in storage
  • Methods to encrypt data in transit between the Responsible Entity and the CSP (I went back and forth with my friends on the question of data in transit. We all ended up agreeing that a) data in transit definitely poses a risk to using cloud services to store BCSI, and b) if the entity doesn’t want to mitigate that risk, then they should keep all of their BCSI inhouse).
  • Methods for protecting the encryption keys, so that only the Responsible Entity's full-time employees or contractors who have BCS access have access to them
  • Assurance of the quality of the CSP’s security practices, evidenced through certification under FedRAMP (or other certs like SOC II, ISO 27001/2, etc. Of course, there's a lot more to it than just having a "FedRAMP certificate", if one even exists. There would need to be an explanation of what actually constitutes acceptable certification) - or through another acceptable certification process (this would cover the case where the entity – especially a government one - has no choice except to use a private cloud operated by themselves or another organization. This cloud might possibly have some other certification. The entity would need to show evidence that this was of similar quality to FedRAMP).
  • Methods to secure the physical or virtual security devices that protect the Responsible Entity's data in the CSP's cloud environment, if those measures are by contract the responsibility of the RE, not the CSP.” (and the SDT should do a risk analysis, perhaps with one or two security experts who really understand the cloud environment. In fact, maybe Paige Thompson would be available! :)  )

Besides this, there would need to be changes to the Measures sections (and possibly the Requirements themselves) of CIP-004 R4.1.3, CIP-004 R4.4, CIP-004 R5.3 and CIP-011 R2 (both parts) that would allow encryption, CSP certification[ii], or both to be used as evidence for compliance.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.


[i] Although I hope the SDT doesn’t make the same mistake the CIP-013 drafting team made, namely leaving the word “mitigate” out of R1.1, meaning the entity could legally call it a day if they just identified and assessed risks, but didn’t do anything about them! I actually had one person – from a software vendor to the power industry, although one that I doubt actually sells software for BCS so CIP-013 doesn’t apply to their customers – try to tell me this omission was intentional, so the entity really doesn’t need to mitigate supply chain risks once they’ve identified and assessed them. I pointed out to him that none of the guidance or anything else that’s been written on CIP-013 makes sense if he’s right. I don’t know if he still takes that point of view or not.

[ii] I learned today that FedRAMP has different levels of certification, and the medium level includes encryption. Since Capital One shows that at least some AWS customers don’t have encryption, this means that FedRAMP is as much a marketing concept as it is a certification. In particular, if the SDT decides that both encryption and FedRAMP certification will be required, NERC entities will need to make sure their CSP is FedRAMP certified and offers services at the medium level. Of course, fries are extra.

Monday, August 5, 2019

What does Capital One mean for NERC CIP and the cloud?



I’ve been intending to write two posts on CIP and the cloud, since that topic is much in the air nowadays. I want to do two separate posts because there are two very different questions: 1) what it will take to officially allow BES Cyber System Information (BCSI) to be put in the cloud, and 2) what it will take to allow NERC entities to put BES Cyber Systems themselves in the cloud. The Capital One incident announced last week provides a perfect excuse to write the first of these posts.

Here’s where we stand now on BCSI in the cloud:
  1. There is nothing in the current NERC CIP requirements themselves that prevents putting BCSI in the cloud for High, Medium and Low impact BCS; in fact a number of entities are doing that now, including many using a cloud-based configuration management service.
  2. I wrote a series of posts (starting with this one in early 2017) that described what needed to be done to comply. These were based in large part on input from an auditor, who I can finally reveal is Kevin Perry, former Chief CIP Auditor of SPP-RE, now retired (in fact, he was quoted in many of my posts, especially during the difficult years of the CIP v5 rollout). Kevin wants me to point out that “the comments (in the post) are a couple of years old and might not reflect current thinking on the part of NERC, the Regions, or the Standards Development Team.”
  3. The most important point was that the entity needs to include provision for cloud service providers in their CIP-011 R1 information protection plan – they need to write reasonable requirements for the CSP and make sure the CSP is following them; this shouldn’t be hard to do.
  4. But the entity also needs to comply with four requirement parts in CIP-004, all having to do with things the CSP needs to do. Of course, there’s no question that any CSP is already doing all four of these things, but the problem is evidence. With prescriptive NERC CIP requirements like those in CIP-004 (as opposed to objectives-based ones like CIP-007 R3), the NERC entity is required to have evidence that each of these things was done every time it was required. This means that the CSP would need evidence that every one of their perhaps thousands of employees who had access to the (possibly huge) room where the servers containing the entity’s BCSI were located, and who was terminated, had their access to that room revoked within 24 hours. Asking the CSP to maintain this evidence would obviously be the same thing as asking them to give up their whole business model in order to please a few electric utilities. Good luck with that.
  5. Nevertheless, until recently I believed that all of the NERC Regions were OK with BCSI in the cloud, although I was told a couple weeks ago that one Region still says it isn’t allowed (despite the fact that I heard one of this Region’s most-respected auditors describe at a compliance workshop in 2016 how to comply with CIP for BCSI in the cloud! Of course, I was shocked – Shocked! – to learn there is inconsistency in statements by the NERC Regions, even within one Region. Who would have thought this was possible? :) ).
  6. And speaking of consistency, NERC is now working on a project called Align, which will consolidate the standards enforcement process across all of the Regions. And guess where that system – including tons of data that is without doubt BCSI – will reside? I’ll give you three choices: 1) NERC headquarters. 2) My basement. 3) The cloud. If you guessed 3), you’re a winner! I sure hope NERC’s able to pass their CIP audit next time.

To make it clear that BCSI can be stored in the cloud, and to determine how NERC entities can provide evidence that it’s secure, a new Standards Drafting Team was created and has already started meeting. I haven’t been able to attend their meetings, although I hope to at least review drafts when they start producing them.

The SAR for this team makes it look like their job will be pretty simple (and up until last week I thought it was): Modify CIP-004 so that the focus is on the cloud provider’s controls for electronic and physical access to BCSI and the devices on which it is stored. In practice, everyone was anticipating that it was a certainty the team would identify one or two certifications – with FedRAMP the far more likely one – that would constitute evidence for compliance with the modestly revised CIP-004 requirements (although the SAR also points out the need to revisit CIP-011 R1. Since that’s a completely objectives-based and risk-based requirement, I can’t see too much need to revise it, although explicit mention of data services like cloud providers – rather than just referring to “third parties” – would be a help).

Of course, determining what FedRAMP “compliance” means (and how it relates to the CIP-004 requirements) isn’t simple at all, since different audits cover different things. And it might be hard to get AWS or MS Azure to just hand you their latest FedRAMP audit report in toto. There’s a lot to be negotiated here. But until last week, I – and others – were saying “There’s no question that no utility can have a level of security on their own network that remotely approaches the average cloud provider’s security level. This will work out fine. What could possibly go wrong?”

And now there’s the Capital One breach. I’m sure some will jump up and say “The breach was Capital One’s fault, and they’ve admitted it. Amazon provided a secure infrastructure like they promised. The fact that C1 didn’t configure their firewalls correctly is their problem – they had total control of them, even though they resided on Amazon’s network.”

From a legal point of view, that’s probably true; I wouldn’t sell all of your Amazon stock right now, if you were thinking of doing that. But the big fly in this ointment, from a pure security point of view, is that Paige Thompson had worked for AWS until recently, and seems to have exploited the knowledge she gained there to breach Capital One’s systems on AWS. In fact, she hinted at having penetrated a couple other AWS customers as well.

Since she wasn’t an employee at the time of the breach, this wasn’t an inside job – yet to pretend that this is something that any competent hacker anywhere could have done, and her former-employee status had absolutely nothing to do with the fact that she was able to hack C1, seems to me to be bending over backwards to make all of this go away. She had inside knowledge (including of likely mistakes that customers would make in configuring their firewalls on AWS), and that played a big part in her breaking into C1.

But let’s get back to the CIP issue: Amazon was FedRAMP certified, yet this happened. Suppose the SDT waits until it has good information on what happened and on how Amazon could have prevented it, then writes that down as a procedure. We’ll call this Procedure X (it might be a new technology to erase from the person’s brain all memory of anything technical they learned while working at AWS – something benign and non-intrusive like that). The team then adds a provision in the Measures section of CIP-004 saying that a FedRAMP certification, plus evidence that Procedure X is in place, is all that’s required to show the entity is compliant with the four requirement parts in CIP-004. Would that be sufficient?

I shudder to think there are a few people who might think that is sufficient, so I’ll answer this question myself. Of course it wouldn’t be. People are endlessly creative. If they want to wreak havoc and they already have a good knowledge of cloud systems, they’ll figure out another way to do it if they can’t emulate Paige Thompson. And don’t say the answer lies in better training on cloud security for the NERC entity that’s storing information in the cloud. Yes, that’s required, but to say that’s the answer is to say that if only people were perfect and never made mistakes, we wouldn’t have any problems like this.

IMHO, the SDT needs to start looking at a risk-based approach to securing BCSI in the cloud. CIP already has at least four risk-based requirements (including CIP-010 R1, as well as CIP-007 R3, CIP-010 R4 and CIP-003-8 R2) and one risk-based standard (CIP-013)[i]. The requirement that I think best illustrates what this new requirement should look like is CIP-010 R4, which mandates that the NERC entity develop a plan to manage the risks of Transient Cyber Assets and Removable Media. Granted, the words “manage the risks of” are nowhere to be found in R4, but effectively that’s what is required.

The plan’s objective is to protect BES assets from risks posed by TCAs and RM. While CIP-010 R4 Attachment 1 (which is part of the requirement) provides guidelines on areas of risk that should be addressed in the plan, there are no prescriptive requirements saying “You either do X within Y days or you get your head cut off.” When you’re making a plan to do something and you’re not told exactly what to do, you have to make trade-offs. You consider roughly how much spending (and staff time) you can get away with in achieving the objective of the plan, and you try to allocate that so that you mitigate the most risk possible given your available resources – as in the sketch below.
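
To make that trade-off concrete, here is a minimal sketch (in Python) of the kind of allocation exercise I mean: given a rough budget, pick the mitigations that buy the most risk reduction you can afford. All of the mitigation names, costs and risk-reduction scores below are hypothetical, purely for illustration – nothing here comes from CIP-010 R4 itself.

def pick_mitigations(candidates, budget):
    """candidates: list of (name, cost, risk_reduction) tuples."""
    # Favor the mitigations with the best risk reduction per dollar spent.
    ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
    chosen, spent = [], 0
    for name, cost, reduction in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

# Hypothetical TCA/RM mitigations with rough cost and risk-reduction scores
candidates = [
    ("Dedicated scanning laptop for TCAs", 15000, 8),
    ("USB port lockdown on BES Cyber Systems", 5000, 6),
    ("Third-party TCA review process", 20000, 5),
    ("Annual Removable Media handling training", 3000, 3),
]
print(pick_mitigations(candidates, budget=25000))

The point isn’t the arithmetic; it’s that, under an objectives-based requirement, the entity – not the standard – decides where its limited dollars do the most good.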

Of course, CIP-013 uses the word “risk” freely; there’s no question it’s risk-based. However, the biggest problem with CIP-013 is that it doesn’t give you any guidance on the types of risks you should look at mitigating – e.g. risks of vendor-caused software vulnerabilities, risks due to provenance, etc. I like CIP-010 R4 Attachment 1 because it does list areas of risk to address: TCA Authorization, Software Vulnerability Mitigation, Introduction of Malicious Code Mitigation, etc.

So I think whatever the SDT develops should require the NERC entity to develop a plan to mitigate risks attendant on putting BCSI in the cloud; then it should suggest particular areas of risk that should be addressed. Of course, most of them will be areas of risk addressed now by FedRAMP, e.g. authentication, vulnerability management, etc. You shouldn’t have to do any extra documentation – and you certainly shouldn’t have to require Amazon to negotiate a separate contract with you if they want your business! – if the SDT decides that whatever FedRAMP requires is adequate to address these areas.

But clearly FedRAMP doesn’t address all areas of risk! So I think the SDT needs to put their thinking caps on and identify risks it doesn’t address. Here’s an idea: What about risks due to ex-employees utilizing their knowledge of likely vulnerabilities caused by customers not completely understanding the inner workings of AWS systems to hack into those customers? My guess is the SDT could come up with other risks like that through talking with experts and just thinking through these things logically.

I’m actually doing a lot of work like that right now. My CIP-013 clients – and all NERC entities who have to comply with CIP-013 – are facing the challenge that R1.1 requires them to “identify and assess” supply chain cyber security risks to the BES, yet it provides no further guidance on where to find those risks. So we’re looking through lots of NIST documents, guidance papers, procurement language from other utilities, etc. And we’re participating in the discussions of the NERC CIPC Supply Chain Working Group, which are excellent. The goal is to put together a list of what my customers consider the supply chain cyber threats that pose the highest level of risk to their BES assets, and then figure out how to mitigate those risks within the utility’s own structure and limitations (especially its resource limitations!) – in other words, to achieve the most mitigation of BES supply chain security risk possible, given the fact that these utilities don’t have unlimited budgets for cyber security (or anything else, of course).

But I came to realize early on in this process that, as far as risk identification and assessment go, there is just about nothing that’s specific to any particular utility. The areas of risk could and should be identified on a national level, with the entities then being free to decide how much emphasis to put on mitigating risks in each of those areas, depending on their particular environment. This is essentially the approach used in CIP-010 R4.

It would be even better if the new cloud SDT could actually produce a list of threats within each of those risk areas (which is what I’m doing with my clients for CIP-013). Using this list, the entity could then a) add other threats that the SDT didn’t identify, b) assign a risk score (presumably high/medium/low) to each threat, based on their own estimates of probability and impact, c) rank all of the threats by risk score and choose the highest-risk threats to mitigate, then d) develop plans to mitigate the selected risks.
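
For readers who like to see steps a) through d) spelled out, here is a minimal sketch in Python. Every threat name and every probability/impact rating in it is hypothetical – my own invention for illustration, not anything the SDT has produced.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(probability, impact):
    # Simple high/medium/low scoring: risk = probability x impact
    return LEVELS[probability] * LEVELS[impact]

# a) and b): the threat list, with the entity's own probability/impact estimates
threats = {
    "Ex-employee exploits insider knowledge of the CSP": ("medium", "high"),
    "Misconfigured customer firewall in the cloud": ("high", "high"),
    "CSP staff access not revoked on termination": ("low", "medium"),
    "BCSI left unencrypted at rest": ("medium", "medium"),
}

# c) rank all threats by risk score, highest first
ranked = sorted(threats.items(), key=lambda t: risk_score(*t[1]), reverse=True)

# d) take, say, the top three as the ones to develop mitigation plans for
for name, (prob, impact) in ranked[:3]:
    print(risk_score(prob, impact), name)

The scoring scheme (a 1–3 scale multiplied out) is just one reasonable convention; the substance is in the judgments the entity makes about probability and impact, not in the code.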

But it would be a huge deal for the SDT to get to the level of identifying individual threats, so I’ll settle for just identifying the risk areas. This will itself be a big help for NERC entities, since they will at least have a starting point for risk identification. Even more importantly, it would give the auditors something to hang their hats on: They could go through each area of risk to make sure the entity understands a) the threats involved in that area, b) the vulnerabilities that enable those threats to be realized, and c) the mitigations for those vulnerabilities. If the entity doesn’t seem to understand something, the auditors certainly won’t issue a Potential Non-Compliance finding (those will be reserved for entities that just blow off this whole process), but they are likely to issue an Area of Concern, so that the entity addresses the identified problems by its next audit.

My bottom line observation on BCSI in the cloud in CIP is that it’s a moderately hard problem to address, but certainly can be done. In one of my next posts (I no longer say “my next post”, since every time I do that it ends up being 3 or 4 posts later, and sometimes never), I’ll get into the question of BCS themselves in the cloud. That’s nowhere near such a nice picture.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Please keep in mind that if you’re a NERC entity, Tom Alrich LLC can help you with NERC CIP issues or challenges like what is discussed in this post – especially on compliance with CIP-013. To discuss this, you can email me at the same address.


[i] Some people say that CIP-014 is a risk-based standard, but I still haven’t made up my mind on that. Whether it’s risk-based or not, it has sometimes been audited as a prescriptive standard.