Sunday, August 30, 2020

The two most important supply chain cybersecurity risks



If you're looking for my pandemic posts, go here.

I’ve been saying for quite a long time that the biggest problem with CIP-013-1 is that it doesn’t provide you with guidance on the risks that should be addressed in your supply chain cybersecurity risk management plan, which is required by R1. Along with my clients, I’ve identified about 110 supply chain cybersecurity risks. I think every NERC entity, in their CIP-013 R1.1 plan, should consider a wide range of risks, then identify the most important of those to mitigate.

Here are what I consider the two most important supply chain cybersecurity risks to the Bulk Electric System. Both stem from a software supplier’s mismanagement of the risks posed by third-party or open source components of their products. I’ve been writing about software component risks a lot lately. As I mentioned in this recent post, about 80% of Web applications include an average of 73 third-party or open source components, yet half of the suppliers of those applications don’t apply security patches for those components.

My number one risk is that a software supplier won’t patch vulnerabilities in third-party or open source components of its software, even when a patch is available. Of course in some cases, they may have a legitimate reason for doing this, since sometimes the supplier has only implemented part of a software component and the vulnerability is in a different part, meaning applying the patch will do no good.

However, this explanation certainly doesn’t account for anywhere near the number of component patches that are never applied. The two more important reasons are:

a)      The supplier doesn’t even know a patch is available for a vulnerability in a component because they’re not proactively keeping in touch with the third-party supplier or following the open source community that wrote the component and is providing patches. Of course, you wouldn’t dream of not regularly reaching out to your software suppliers every 35 days (as required by CIP-007 R2.2) to find out if they have a new security patch available for the products that you have installed. Why shouldn’t your software suppliers do the same with their software suppliers?
b)     The supplier has lost track of the components of its own software, meaning the supplier no longer knows what is in its own software. Of course, this is an even more serious problem, since it means they’ll probably never know of vulnerabilities in the components that comprise up to 80% of the code in its software. If the supplier can’t give you any list at all of the components in its software, you need to start asking why you’re still buying that supplier’s products.

What’s the mitigation for this risk? A software bill of materials. For every software package that plays an important role in your BES environment (which might be just a few, of course, depending on the size of your BES footprint), you should ask the supplier for a first-level software bill of materials (SBoM) – that is, a list of the components in their software, but not necessarily a list of components of those components, even though these also pose a risk (as do components of components of components, etc.). For a discussion of how not having good SBoMs can play havoc with your effort – and that of just about every other organization with IoT or IIoT systems – to mitigate risks due to vulnerabilities in third-party software components, see this post.

Of course, only the supplier knows what components are in their software, so mitigating these risks starts with asking them for that first-level SBoM. And as I described in my most recent post, just asking the question lets the supplier know you’re concerned about this problem, and makes it more likely they’ll take it seriously, if they weren’t already.

There are two possibilities for the supplier’s response: They can give you at least a partial SBoM or they can give you nothing at all. If they give you some list, even if incomplete, you can try to get in contact with the third-party component suppliers on that list, as well as the open source communities in the case of open source components – and simply request to be on their mailing list for notices of new vulnerabilities and patches.

When you learn of a new patch for a component that is on the SBoM you received, you can contact your supplier and ask when they’ll apply the patch. If they say they can’t apply the patch for some technical reason, you should ask what mitigation they will apply, especially if the vulnerability the patch addresses is a serious one (after all, you have to develop a mitigation plan if you can’t apply a patch, per CIP-007 R2.3! They should as well). And in the case of a serious vulnerability, you should follow up to make sure they do something about it.
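To make this concrete, here’s a minimal sketch – in Python, and purely illustrative – of how even a first-level SBoM lets you turn component patch announcements into specific questions for your supplier. The CycloneDX-style JSON layout, the component names, the versions and the advisory entries are all assumptions for the example, not anything a particular supplier or standard will hand you.

```python
import json

# Advisories you've learned about from the third-party suppliers and open source
# communities behind the components. All names, versions and IDs are made up.
ADVISORIES = [
    {"component": "example-crypto-lib", "affected_versions": {"1.0.2k", "1.0.2l"},
     "fixed_version": "1.0.2u", "advisory": "CVE-XXXX-YYYY"},
    {"component": "example-log-writer", "affected_versions": {"2.3.0"},
     "fixed_version": "2.3.1", "advisory": "Vendor bulletin 2020-07"},
]

def load_components(sbom_path):
    """Read a first-level, CycloneDX-style JSON SBoM and return (name, version) pairs."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [(c.get("name", ""), c.get("version", ""))
            for c in sbom.get("components", [])]

def patch_questions(sbom_path):
    """Build the 'when will you apply this patch?' questions to ask your supplier."""
    questions = []
    for name, version in load_components(sbom_path):
        for adv in ADVISORIES:
            if name == adv["component"] and version in adv["affected_versions"]:
                questions.append(f"{name} {version}: affected by {adv['advisory']}; "
                                 f"when will you move to {adv['fixed_version']} or mitigate?")
    return questions

if __name__ == "__main__":
    for q in patch_questions("supplier_product_sbom.json"):
        print(q)
```

The point isn’t the script, of course; it’s that without the component list there’s nothing to match the advisories against.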

My second most important risk also has to do with software components, and also requires an SBoM for mitigation. This is the risk that a supplier will, inadvertently or otherwise, allow a third-party or open source component in their software to remain there after it has gone end of life, so that newly discovered vulnerabilities will not be patched. Again, there are two main reasons why this could happen:

a)      Your supplier hasn’t been in touch with the third party or open source community that wrote a component, so they didn’t realize that support for the component had been discontinued or that its supplier had gone out of business.
b)     Your supplier did know this but decided not to do anything about it. They were hoping you wouldn’t notice either, or better yet that you would never ask to see an SBoM and so would never be able to track this information yourself.

Of course, without an SBoM you have no way of mitigating this risk; that’s another reason to ask for one from any important software supplier. Whenever you learn that the supplier of a component listed on the SBoM has discontinued that component, or that they might soon cease operations, you need to have a frank talk with your software supplier to discuss how they will address this new risk. Will they

1.      Replace or upgrade the component?
2.      Write their own code to replace the component? After all, it’s their software – they should certainly be able to do that.
3.      Write the patch themselves whenever a vulnerability is discovered in the component from now on? Of course, they can only do this if they possess the source code for the component. Ideally, they would do what SEL does, namely – whenever they incorporate a third-party component into their software – obtain the source code from the supplier up front. But even if your supplier doesn’t do that, if the component supplier is going out of business or discontinuing support, your supplier should be able to buy the source code from them at that point (of course, for open source components this isn’t a problem, since the source code is available for free).
4.      Watch the situation carefully, and if an important vulnerability is discovered in the component – for which there will be no patch – immediately develop a workaround so that you and other customers are protected? This isn’t the best solution, but it’s better than nothing. You should still press them to replace the component, though.
5.      Do nothing at all? If they really say this, then you need to sit down with your lawyers and examine your options.
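Here, too, an SBoM makes the problem tractable. Below is a minimal, purely illustrative Python sketch of tracking support-end dates for the components on a first-level SBoM; the component names and dates are placeholders you would maintain yourself from supplier and community notices, not data that comes with any SBoM format.

```python
from datetime import date

# Support-end dates collected from component suppliers and open source communities.
# The components and dates below are illustrative placeholders, not real data.
SUPPORT_ENDS = {
    ("example-tcpip-stack", "4.2"): date(2019, 12, 31),  # supplier discontinued the line
    ("example-crypto-lib", "1.0"): date(2021, 6, 30),    # community announced end of life
}

def eol_components(components, today=None):
    """Flag SBoM components whose support has already ended."""
    today = today or date.today()
    flagged = []
    for name, version in components:
        end = SUPPORT_ENDS.get((name, version))
        if end and end <= today:
            flagged.append(f"{name} {version}: support ended {end} – ask your supplier "
                           "which of the options above they will take")
    return flagged

# Reusing load_components() from the earlier sketch:
# print(eol_components(load_components("supplier_product_sbom.json")))
```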

As you’re perhaps beginning to see, a software bill of materials is a very powerful tool. While you won’t always be able to get it, you should at least make the effort – with your most important software suppliers.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you hot at work – or should be – on getting ready for CIP-013-1 compliance on October 1? Here is my summary of what you need to do between now and then.


Friday, August 28, 2020

Is software bill of materials the answer to all of our problems? Part III



If you're looking for my pandemic posts, go here.

People normally think of questions as a means of soliciting information. However, in many cases the question is the message. For example, if a mother asks her child whether he’s brushed his teeth that morning, she’s not doing it because she’s conducting a scientific study of tooth brushing. Rather, she’s sending him a message: “Brushing your teeth in the morning (and evening) is important for your health and well-being, and I care about both of those things.” Even if he has brushed his teeth, he will absorb this message and it will reinforce this behavior (unless he’s a teenager, in which case all bets are off).

And the same goes for software bill of materials (SBoM). This was brought out very clearly on the fourth page of a six-page infographic piece in Dark Reading (which is an excellent cybersecurity newsletter. I’ve been reading it for 12 or 13 years, ever since I was in college 😊). Here’s that whole page:

“If you have nothing to hide..." can be the beginning of a facile and overly simplistic statement about visibility. At the same time, if a supplier is unwilling or unable to provide transparency into their risks, it's a clear sign that your risk from that third party has risen by many degrees.

According to Alex Santos, CEO and co-founder of Fortress Information Security, the question for many in the supply chain is not only how transparent they're willing to be, but how transparent they're able to be.

"It's getting down to that level of detail, the bill of materials, the suppliers of the suppliers, if you will, that is underlying and important to the supply chain," he says. "That [understanding is] one of the next frontiers."

The issues are similar for both software and physical components that make up the supply chain, Santos says.

"Where is each piece of hardware constructed, and where do those components come from? Has the security of open source code been verified? Where was it developed? Was it a community based in China or around the globe?" he asks. "The same thing goes for the hardware. Was a hardware producer in China? Are there sufficient controls in place to make sure that that hardware is free of backdoors and other malicious constructs?"
Each organization, and each industry, will have different requirements for how far down the supply chain these questions must be answered. Direct suppliers, especially, must be willing and able to assist in the drive to transparency, or they'll become a primary indicator that supply chain corruption is possible.

I know very few NERC entities (or really any type of organization) are asking their suppliers for an SBoM now (even just a first-level SBoM, which is probably the most you’ll get at this point). But, even if you’re doubtful you’ll get it, you should always at least ask the question, for three reasons:
1.      The Supplier may actually be able to give you an SBoM. You can use that information to mitigate your own supply chain cybersecurity risks, as described in the previous post in this series.
2.      If the supplier can’t give you a full first-level SBoM, you should make clear that it’s really unacceptable that the supplier doesn’t even know what’s in their own product. And ask them when they’ll have that information. If they can’t even tell you when that will be, you should seriously start looking for another supplier, assuming there is a real competitor (in many cases when it comes to control systems, there’s no serious competition and you’ll stick with this supplier anyway. But you don’t have to depend on the supplier to provide you an SBoM, as I’ll discuss in another post in this series. The series may stretch into 2022, the way it looks now).
3.      They also may tell you they do have a good SBoM, but they can’t give you that information because it’s a trade secret. Unfortunately, they do have a point, and it’s probably not worthwhile to get all excited about this and threaten to drop their product. However, as I said above, there are other ways to get an SBoM, not just from your supplier. And an SBoM can certainly be shared securely between a supplier and its customers, including of course NDAs signed by the customers. My guess is that, as recognition of the need for SBoMs grows (both in general and particularly in the energy industry), suppliers will begin to see the advantages of securely sharing these documents.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you hot at work – or should be – on getting ready for CIP-013-1 compliance on October 1? Here is my summary of what you need to do between now and then.


Thursday, August 27, 2020

Don’t try this at home, kids!



If you're looking for my pandemic posts, go here.

I heard recently of a large NERC entity whose upper management overrode a good part of two years of work on the part of the people who had been preparing for compliance with CIP-013-1. They said that the entity was going to essentially ignore requirement R1.1 and just comply with R1.2, R2 and R3. In other words, they’ll ignore the requirement to develop a plan to identify, assess and (implicitly) mitigate supply chain cybersecurity risks to the Bulk Electric System, and only mitigate the eight risks addressed in R1.2.

This struck me as a pretty foolish thing to do, so I asked Lew Folkerth of the RF Regional Entity to comment on it. Here is what he said:

The NERC Reliability Standards were developed and complied with long before the Energy Policy Act of 2005 put them on the path to be mandatory and enforceable. Even the first version of the CIP Standards was in development prior to the mandatory and enforceable period. The Reliability Standards are, at their core, an agreement among all entities to behave in a certain manner in order to operate and maintain the reliability of the BES. By deliberately not implementing compliance with a Standard, an entity would break faith with their peers in the BES.

I have seen cases where an entity decides it can do a better job of mitigating risk by implementing its own processes and ignoring the Reliability Standards. In every case I’ve seen, the entity failed to implement reliability or security processes that were as effective as what was required by the Standard. I expect this to hold true for the Supply Chain Standards as well. There has been a huge amount of work performed in the development and support of CIP-013-1. The development of the Standard, ERO endorsement of Implementation Guidance, SGAS sessions at NERC resulting in the FAQ document, the SCWG’s Guidelines, and yes, even my Lighthouse articles, all have worked toward making CIP-013-1 the most documented and supported Standard in the portfolio of Reliability Standards.

In my opinion, throwing all of this material away would not be a supportable decision at any Registered Entity.

There’s more to it. In an email conversation with Lew after he sent this, he pointed out that a) by not complying with R1.1, the entity won’t be complying with R1 itself; and b) the entity can’t be compliant with R2 or R3 either, since they both depend on compliance with R1. So they might be in violation of all three requirements in CIP-013!

But other than that, I think this is a good decision.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you hot at work – or should be – on getting ready for CIP-013-1 compliance on October 1? Here is my summary of what you need to do between now and then.


Wednesday, August 26, 2020

The contract language bugaboo – Part I



If you're looking for my pandemic posts, go here.

Probably the biggest misconception about CIP-013 since it was drafted is that contract language has some sort of special place in compliance. I have heard a number of people, including people who work for NERC, one or two in high positions, say that the purpose of CIP-013 is mitigating supply chain cybersecurity risks through contract language – or something like that.

I’ve never believed that, but I’ve also never seriously investigated where the idea came from. However, since I’m now working on a book on CIP-013 compliance (and supply chain cybersecurity more generally), I decided to find out.

First, I asked whether there’s anything in the language of the requirements themselves that would lead to the conclusion that contracts have a special place. And here was my first surprise: Nowhere in the requirements is there any mention of contracts or contract negotiations in any way (I’m amazed that I never even noticed this before. After all, the entire text of the three CIP-013 requirements is about four sentences long).

However, there is a note to R2 (which isn’t part of the requirement but is part of the enforceable language, I’m sure):

Implementation of the plan does not require the Responsible Entity to renegotiate or abrogate existing contracts (including amendments to master agreements and purchase orders). Additionally, the following issues are beyond the scope of Requirement R2: (1) the actual terms and conditions of a procurement contract; and (2) vendor performance and adherence to a contract.

Of course, far from requiring contract negotiations for CIP-013-1 compliance, the first sentence of the above note explicitly states that contracts or contract negotiations are not required for compliance. And the second sentence effectively states that auditors are forbidden to require a NERC Entity to provide a contract as evidence. This leads to the conclusion that, if contracts were in some way required for CIP-013-1 compliance, that requirement would never be enforceable, since the NERC Entity is not obligated to prove compliance by showing the contract.

Here's how I understand the role of contract language in CIP-013-1:

1.      A NERC entity will often determine that a Supplier or Vendor has unmitigated supply chain cybersecurity risks that could result in damage to the Bulk Electric System. They will often determine this as a result of the Supplier or Vendor’s (I’ll say “Supplier” from now on, but I mean both) answers to a questionnaire that the entity provided to them.

2.      For example, one of the questions in the questionnaire I’ve developed with my clients is “Will we be provided with a complete inventory of accounts that exist in the product upon shipment to us?” This question is based on the risk stated as “A Supplier might create an account on a Product during development and not inform us of it.” If the Supplier answers No, this means there is a high likelihood that this risk is present in their products, so the risk is unmitigated (in my simple world, a risk is either mitigated – low likelihood – or unmitigated – high likelihood. I haven’t yet seen a good reason to go beyond that degree of granularity).

3.      Given that the Supplier has a high likelihood of having this risk present in their environment, the NERC entity – let’s say that’s you – should try to get them to do something about it. The mitigation for this risk is to get them to commit to providing that inventory whenever they ship a product. How will you do this?

4.      Of course, one way to get the Supplier to commit to doing something is to try to get them to agree to a term in their contract. But in a lot of cases, changing the contract simply isn’t an option. For example, the Supplier may have a five-year contract that is halfway through its term, and they may make it clear they’re not at all interested in renegotiating it now.

5.      Do you just throw up your hands and say “Oh well, we tried”? No, there are other options, all of which are much less expensive than negotiating contract terms. Here’s an idea: Why don’t you pick up your phone and call the Supplier? Just ask them if they’ll do this.

6.      If they say no, and if there’s no current provision in their contract that would allow your lawyers to pressure them, at this point you might have to just go with plan B: work out your own mitigation to the Risk in question. In the case of the Risk discussed above, one mitigation would be to make your own inventory of the accounts on the device whenever you purchase another of their products, before you install it.

7.      Of course, this might not be too emotionally satisfying, since you’d certainly be forgiven if you harbored the desire to really stick it to this Supplier for turning you down flat on what should be a fairly simple request. So you’ll bide your time until their contract is up for renewal, and make sure they agree to this term then.

8.      But what if you call them and they say yes? I would think that there would be very few Suppliers – except very large ones that won’t even talk to you – that wouldn’t do their best to accommodate your request.

9.      At that point, you don’t just say thanks and hang up. You need to ask them when they’ll start doing this. If they say they can’t implement the policy for three months, then mark that on your calendar – and call them back then to make sure they’ve done it.

The most important step of the ones above is the last one. This is because getting a Supplier to agree to do something – whether they agree in a contract term, on a phone call, in an email or in their own blood on an animal skin – doesn’t in itself mitigate any risk. It’s only when they actually do what they promised that risk is mitigated. You always have to follow up with them to make sure they did what they promised, and if they say they haven’t done it yet but will in say another three months, you need to keep after them until they actually do it.
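If it helps to picture what “keeping after them” looks like in practice, here’s a purely illustrative Python sketch of a commitment tracker. The suppliers, promises and dates are invented for the example; nothing in CIP-013 prescribes this – it’s just one way to make sure the follow-up actually happens.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Commitment:
    supplier: str
    promise: str            # what they agreed to do (contract term, email, phone call...)
    agreed_on: date
    due_by: date
    verified: bool = False  # set True only after you've confirmed they actually did it

# Illustrative entries only - the suppliers, promises and dates are invented.
COMMITMENTS = [
    Commitment("Acme Controls", "Provide an account inventory with each shipped product",
               date(2020, 8, 1), date(2020, 11, 1)),
    Commitment("Acme Controls", "Add a vulnerability-disclosure term at contract renewal",
               date(2020, 8, 1), date(2021, 3, 31)),
]

def follow_ups(commitments, today=None):
    """Commitments that have come due and are still unverified - time to pick up the phone."""
    today = today or date.today()
    return [c for c in commitments if not c.verified and c.due_by <= today]

for c in follow_ups(COMMITMENTS, today=date(2020, 12, 1)):
    print(f"Call {c.supplier}: '{c.promise}' was due {c.due_by}")
```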

So contract language is simply one means of getting a Supplier or Vendor to agree to do something you would like them to do. In some cases and in some organizations, it might well be the best way of accomplishing that goal. In other cases and in other organizations, it’s not even an option (for instance, I know that in some federal government agencies, deviating from the standard contract in any way takes so much time and effort that it’s usually not even worth attempting). Whatever is the easiest and least expensive option for your organization is the best way to do it. Contract language doesn’t have any special status in CIP-013 compliance.

In part II of this post, I’ll describe what I realized when I looked into how this misperception came to be. You’ll be pleased to know that I identified the culprit.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you hot at work – or should be – on getting ready for CIP-013-1 compliance on October 1? Here is my summary of what you need to do between now and then.


Monday, August 24, 2020

Everything you always wanted to know about CIP-013 but were afraid to ask…


If you're looking for my pandemic posts, go here.

I’m pleased to announce that Dick Brooks of Reliable Energy Analytics and I will present an hour-long Q&A session on CIP-013 on Thursday afternoon; the signup page is here. The session is technically a follow-up to the excellent webinar that Dick presented on compliance with CIP-010-3 R1.6 a few weeks ago, although the questions can address CIP-013, CIP-005-6 R2.4/R2.5 and CIP-010-3 R1.6 – or any other topic you may want to discuss, such as the meaning of life in general.

I also highly recommend you watch the recording of Dick’s earlier webinar, which is here.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you hot at work – or should be – on getting ready for CIP-013-1 compliance on October 1? Here is my summary of what you need to do between now and then.


Thursday, August 20, 2020

What the EO should really be concerned with



If you're looking for my pandemic posts, go here.

Kevin Perry sent me a link to a great article today. It starts “A security flaw in a series of IoT connectivity chips could leave billions of industrial, commercial, and medical devices open to attackers.” I’ll let you read the article, but the bottom line is this is a serious vulnerability that could lead to outside attackers taking control of IIoT and medical devices. IBM discovered this vulnerability in 2019 and has been working with the manufacturer on a software patch since then (it’s amazing that it took more than eight months to develop the patch. We’re all lucky the vulnerability wasn’t discovered first by say Russia or al Qaeda).

Here’s a question: The Executive Order is supposedly concerned about big supply chain threats to the US grid. Would this threat be addressed by it? Of course not. The EO just addresses threats originating in products produced by organizations located in, controlled by or influenced by “foreign adversaries”. At the moment, France (where the company that makes the chips is located) isn’t on the list.

Of course, the biggest problem with the EO is it just focuses on threats from nation states. It wouldn’t take a nation state to exploit this vulnerability and cause widespread damage to the power grid (whether or not it caused an actual cascading outage. The people left in the dark aren’t going to take a lot of solace from the fact that the cause wasn’t a cascading outage).

And by the way, this definitely shows the need for hardware bills of materials as well as software bills of materials. The big difference between the two is that there’s absolutely no excuse for a hardware manufacturer not to have a detailed BoM for their products (since they couldn’t have manufactured them without one). There’s also no excuse for them not to immediately notify you of this vulnerability, as well as provide you the patch. Of course, if you look through the reports and think that products from one or more hardware manufacturers in your environment (whether or not they’re in an ESP) might include this chip module, you shouldn’t wait for them to notify you – you should call them to ask. And if they say they just don’t know, you need to look for a replacement for that hardware ASAP.
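If you do get hardware bills of materials from your manufacturers, the check itself is trivial. Here’s an illustrative Python sketch that scans HBOM files for part numbers named in an advisory; the CSV layout, the column names and the part numbers are all assumptions for the example, not a real format or the real affected modules.

```python
import csv

# Part numbers named in a (hypothetical) advisory for the vulnerable chip module.
AFFECTED_PARTS = {"MOD-1234-A", "MOD-1234-B"}

def affected_hardware(hbom_csv_path):
    """Return the rows of a hardware BoM whose part number appears in the advisory."""
    hits = []
    with open(hbom_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("part_number", "").strip() in AFFECTED_PARTS:
                hits.append(row)
    return hits

for row in affected_hardware("relay_model_x_hbom.csv"):
    print(f"{row.get('manufacturer', '?')} {row['part_number']}: ask the manufacturer about the patch")
```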

And what should be done with the EO? Here's an idea: Let's ignore the foreign adversaries nonsense. Let's treat it as an order to a) discover the most serious supply chain cybersecurity threats facing the US grid, regardless of their source, and b) take steps to mitigate them. As I mentioned in this post, vulnerabilities in hardware components are definitely a threat, but this is also not a threat that any one utility has the resources to analyze - this has to be undertaken by the Feds, presumably DoE (with help from DoD, since they're doing lots of work like this now). 

Then we might turn this sow's ear into a silk purse.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you hot at work – or should be – on getting ready for CIP-013-1 compliance on October 1? Here is my summary of what you need to do between now and then.


Wednesday, August 19, 2020

Is software bill of materials the answer to all of our problems? Part II



If you're looking for my pandemic posts, go here.

The first part of this post described a pretty scary problem: the fact that just about any software you purchase nowadays includes lots of third-party or open source components; those components contain components; those components of components contain components, etc. etc. A 2017 study by CA Veracode found that “83 percent of developers use code components to build web applications, with the average number of components per application (found to be) 73.” And a vulnerability in any one of these components might prove serious.

I used the recent example of the Treck TCP/IP stack, which was developed in the late 1990s (back in the days when internet security was hardly even an afterthought. I remember visiting one of the national labs at that time and learning that they didn’t even have a firewall. Moreover, they had internet-addressable IP addresses on all their computers, since – of course – the whole idea of the internet at the time was that it was to allow free collaboration; I don’t think it even registered with me that this might be a problem. How quaint that time seems today!).

Recently, researchers in Israel identified 19 vulnerabilities in the Treck stack, and they think there are at least twice that many still to be identified. Four of the 19 vulnerabilities were deemed to pose the highest level of risk per CISA. Fortunately, Treck is still in business and they were able to quickly develop patches for all 19 vulnerabilities.

But what’s the problem? The Treck stack is probably embedded in hundreds of millions of devices. Many – if not most - of the software suppliers whose products include the Treck stack probably have no idea it’s there. And since it’s been on the market for more than twenty years, it might very well be embedded in components that are more than ten levels down in a piece of software installed today. Let’s be honest: there’s no way more than a small fraction of the total instances of the Treck stack in use today will ever be patched. So patching really isn’t a good first-line mitigation for this risk.

What is the best mitigation? It’s pretty simple to describe: Your software supplier should have a complete software bill of materials (SBoM) that lists each third-party and open source component of their software, as well as the version number and supplier name (and probably other information as well). Each of those components should have its own SBoM, each of those components’ components should have one too, and so on down the line.

If you get the full SBoM from the supplier, when a new vulnerability is announced in a particular software component, you’ll be able to do a simple search to find all instances of that component – if any – in each piece of software that your organization runs. If you have any hits, you’ll then compare the version numbers listed in the vulnerability notice to determine whether you’re running any instances of that version. If so, you’ll know exactly what machines to apply the patch to, and all will be good.
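Here’s roughly what that “simple search” could look like, assuming the multi-level SBoM arrives as CycloneDX-style JSON in which each component may carry its own nested list of components. The format, the component name and the version numbers are assumptions for illustration – this is a sketch of the idea, not a tool.

```python
import json

def find_component(components, target_name, affected_versions, path=()):
    """Recursively search a nested SBoM for a component at any depth."""
    hits = []
    for c in components:
        name, version = c.get("name", ""), c.get("version", "")
        here = path + (f"{name} {version}",)
        if name == target_name and version in affected_versions:
            hits.append(" -> ".join(here))
        # Components of components (of components...) appear as nested lists.
        hits.extend(find_component(c.get("components", []), target_name,
                                   affected_versions, here))
    return hits

with open("full_product_sbom.json") as f:
    sbom = json.load(f)

# Placeholder name and versions for a vulnerable embedded TCP/IP library.
for hit in find_component(sbom.get("components", []), "example-tcpip-stack",
                          {"6.0.1.41", "6.0.1.66"}):
    print("Vulnerable component found at:", hit)
```

The path printed for each hit shows where the vulnerable component sits in the chain of components of components, which is exactly the information you’d need when you call your supplier.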

But to tell the truth (which I like to do every now and then. Breaks the monotony), even patching a few components will be a painstaking job, and in some cases literally impossible if the component’s code is compiled into your supplier’s product. But not to worry; this is really your supplier’s job, not yours. All you need to do is engage only with a supplier that:

1.      Has a complete multilevel SBoM for their products, as described above;
2.      Will provide you the complete SBoM, so that you can contact some of the third party suppliers (or open source communities) and be on the list to be notified of new vulnerabilities and patches;
3.      Has a policy of always buying source code for any third-party component, meaning that if the third party goes out of business or for some other reason can’t patch a newly identified vulnerability in their code, your supplier will be able to develop their own patch for their customers (of course, they need a high level of expertise to do this. A 3-month course in Visual Basic™ – if they even have those anymore – doesn’t count);
4.      Will commit to patching any vulnerability discovered in any of their third party or open source components within say five days;
5.      Will commit to keeping track of the suppliers of all third-party components, to make sure they’re still in business and still able to provide patches when needed – and if your supplier determines one of their third parties isn’t long for this world, they commit to replacing that code in their software within say one month;
6.      Will commit to keeping track of the online community supporting each open source component of their software – and if the supplier determines that the community has dwindled to the point that patches aren’t being developed anymore, they will commit to replacing the code within say three months;
7.      Before releasing a new product or a new version of a product, will commit to scanning each component for vulnerabilities;
8.      When a third-party supplier or open source community announces a new version of a component and the end of support for the version incorporated in your supplier’s software, will either upgrade the component to the new version or find and install another component of equal functionality; and finally
9.      Can bend 6-inch wide steel bars with their bare hands, penetrate walls with their X-ray vision, and leap tall buildings in a single bound.

In other words, current reality is vastly different from the above. Here’s the true story:

·        Few suppliers could provide you with a complete SBoM of even the first level of components in their software; and even an SBoM that identifies each component often won’t name the supplier or open source community behind it.

·       If they do have a complete SBoM (or say they do), they may refuse to give it to you for “competitive reasons” – meaning they don’t want to take the chance that you’ll give the list of ingredients in their cake to a rival cake-maker.

·       Very few suppliers would buy source code from every third-party supplier, even if it were available to them. The only reason I even know about this concept is it’s described in a NIST best practices paper about Schweitzer Engineering Laboratories (which has that policy). Plus I doubt a lot of suppliers would even have the level of expertise that SEL has, to develop their own patches for third party software.

·       I don’t have any data on this, but I’m willing to guess that few suppliers will commit to patching component vulnerabilities within any period of time, let alone five days. And even if they did, it would be a meaningless commitment if they didn’t at the same time commit to telling you about new vulnerabilities, even if they’re unpatched so far (which of course is what CIP-013 R1.2.4 requires); but it will be quite hard to get suppliers to commit to doing this. Yet if they only tell you about vulnerabilities they’ve already patched, they’re not committing to anything new when they “commit” to patching all vulnerabilities in five days.

·       The 2017 Veracode study quoted above also states “A mere 52 percent of companies reported they provide security fixes to components when new security vulnerabilities are discovered. Despite an average of 71 vulnerabilities per application introduced through the use of third-party components, only 23 percent reported testing for vulnerabilities in components at every release.”

·        You get the idea: Except for a few exceptions, your supplier is unlikely to commit to a single one of the above items. Probably the most likely one they’ll commit to is leaping tall buildings in a single bound.

So what’s a poor NERC entity (or any organization, for that matter) to do? Is there no shining knight that will ride in and save you from this menace of vulnerable third-party components? Believe it or not, there is a government-led effort underway now to develop a standard for generating and accessing software bills of materials. The effort is led by Dr. Allan Friedman of the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce, who is coordinating a large collaboration with industry and universities. This document from NTIA provides a good introduction to the SBoM concept, as well as links to other information about the NTIA initiative.

However, let’s be honest: It will be years before this initiative will lead to widely-available SBoMs. Meanwhile, the threats described above are here now. What can you do in the meantime? I asked two software experts I know for their thoughts on this question. In this post, I’ll describe what George Masters of SEL told me. I’ll discuss what the other expert said in part III of this post.

I know George primarily from working with him last year on the CIPC Supply Chain Working Group. We both led the development of guideline papers. His was on “Risk Considerations for Open Source Software”. The paper is excellent and it (as well as the webinar George presented on it) can be accessed here. I learned a lot just from the meetings he led while developing the paper.

I asked George whether it would be realistic to ask software suppliers to provide at least a first-level SBoM. Here is what he replied:

My advice to people who mention this in conversation is “be careful what you ask for”, because using SBOM information to form a competent risk assessment requires information that a customer just won’t have.  He or she would not, for instance, be able to recognize an older component that had been selectively patched to address particular vulnerabilities, nor would they be able to know about other controls that mitigate an issue, or that vulnerable code may not be reachable at all.  

Given the reliability expectations for power system equipment, “churn” in the code is something to be avoided.  Every change should be carefully reviewed and thoroughly tested.  Entirely updating a component that may contain hundreds of changes involves some degree of risk – and cost - and that has to be balanced against security gains.

Yes, the supplier should be able to demonstrate knowledge of the product composition, and that they are tracking those components’ version releases, but one shouldn’t expect to be able to second-guess them with an SBOM.  If the supplier is competent, this is wasted effort.   Suppliers may also want to keep composition data proprietary, disclosing details only under NDA.  

Reading the foregoing, as you might expect, I’d say that getting beyond a ‘top level’ SBOM would make things worse.

I continued by asking George “Do you think there's value in even asking for an SBoM? If for no other reason than it shows whether or not the supplier even knows what's in their software.” I said this because I’d read another article that said the only real value to be obtained from asking for an SBoM at the current time is to see if the supplier can respond to you at all – that is, whether they even have any idea what the h___ is in their software. If they clearly don’t know that, then you should question why you’re even doing business with this supplier in the first place.

George’s reply was succinct:

I think that’s the *only* value, for the reasons detailed before, though there may be other ways to accomplish its purpose.   In particular, opening up a tracking effort for 2nd level configurations (now tracking versions of device and software components in addition to tracking versions of equipment and software in their inventory) is complexity that utilities don’t need to add to their burdens.

George concluded with this statement of his credo: “I try to be the kind of person my dog thinks I am.” A high standard indeed!

If I can sum up what George said, it’s:

1.      There’s no point in even trying to ask for more than a first-level SBoM.
2.      It’s especially pointless for you to try to track versions of the components in that SBoM (although the supplier should know them!). Because your knowledge of the components and how they’re configured in the software is quite limited, you’re likely to waste a lot of time tracking down new patches and bugging your supplier about applying them, when for example the functionality that the patch addresses isn’t even used by your supplier.
3.      The primary value of asking for the SBoM is to see how much knowledge the supplier has about what’s in their software.

The second expert has a different take on this. He has a very simple (if you have a basic knowledge of software development, which I don’t) method for, in some cases, being able to generate SBoMs, and he’s also involved with the NTIA SBoM program. Stay tuned.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you hot at work – or should be – on getting ready for CIP-013-1 compliance on October 1? Here is my summary of what you need to do between now and then.


Sunday, August 16, 2020

FERC’s NERC CIP Notice of Inquiry Part II




If you're looking for my pandemic posts, go here.

This is the second part of my post discussing FERC’s Notice of Inquiry on “Potential Enhancements to the Critical Infrastructure Protection Reliability Standards”. The NOI discusses areas in which FERC staff believes the current NERC CIP standards may be inadequate and might need to be enhanced; the NOI calls for comments on these areas. The two broad areas that FERC identifies are a) insufficient coverage of the various categories of risk listed in the NIST Cybersecurity Framework (CSF) – the staff identified three categories that may be insufficiently addressed by CIP – and b) the risk of a “coordinated cyberattack on geographically distributed targets”.

The first part of this post leveraged a LinkedIn post by Dale Peterson and delved into the first of those two areas. I questioned (as Dale did in his post) the concept of mining a risk management framework (which is what the CSF is, of course) to develop a set of recommendations for individual requirements, as if the CSF were a best practices guide. In this second part of the post, I will look at some of the “recommendations” made by FERC staff for possible implementation of new requirements based on the CSF. I will also discuss how the risk of a coordinated attack might be addressed. As usual, I will look at these through a Big Picture lens because – hey, I’m a Big Picture kind of guy.

The whole premise of FERC’s examination of the CSF seems to be that the CIP standards already do a fairly good job of addressing a large number of cyber threats to the BES, and that just adding a few more standards or requirements might make it close to complete (unless they plan on issuing one of these NOI’s every six months or so, which I don’t believe is their plan). To that end, they ask a number of questions, related to the three CSF categories which they don’t think are adequately addressed in the CIP standards now. These include:

1.      Protecting information integrity for Low impact BES Cyber Systems (p. 12 paragraph 13). They correctly state that information integrity for Medium and High impact BCS is probably adequately addressed by CIP-011-2 R1 now (which I agree with), but they point out that this doesn’t apply to Low BCS – they ask if there should be an information integrity requirement for Lows. I don’t dispute that this is probably a deficiency, although I question how big a risk this is.
2.      Incident response requirements for Low BCS (p. 13 paragraph 16). Again, this is currently well covered for Medium and High BCS, but not for Lows. They wonder if it should be extended to Lows in some way. I think this might be justified, if this is deemed to be a sufficient risk.
3.      Vulnerability management and mitigation for Low BCS (p. 14 paragraph 18). The NOI says that CIP-010 already adequately addresses this for Mediums and Highs. I have no idea what they’re talking about here. As far as I’m concerned, neither CIP-010 nor any of the other current CIP standards addresses vulnerability management at all, except for the single case of software or firmware vulnerabilities for which a patch is available – and CIP-007 R2 goes way overboard in mitigating that threat. Vulnerability management should be addressed for Highs, Mediums and Lows.

But there are still vulnerability management risks that aren’t addressed at all in CIP now, including a) the risk of unpatched vulnerabilities in purchased software (which will be partially addressed by CIP-013 R1.2.4, and should – in my opinion – be addressed as one of the risks identified in CIP-013 R1.1); b) the risk of unpatched vulnerabilities in third-party or open source components of purchased software (which I also recommend that my clients address in R1.1); and c) the risk of unpatched vulnerabilities in software written by your organization or custom-developed for you (this risk isn’t in scope at all for CIP-013, since that standard only covers products and services that are procured from outside sources. I was at a supply chain cybersecurity meeting in McLean, VA in 2018, at which a bunch of Federal cybersecurity contractors expressed amazement that CIP-013 didn’t cover in-house-developed software).

Regarding coordinated attacks on geographically distributed targets, I think this is also a legitimate area to look into, although I find it almost impossible to believe that this could be anything other than a supply chain attack. And I don’t want to see another minute wasted on a discussion of hardware vulnerabilities that could somehow be built into a bunch of non-intelligent transformers with no connection at all to the outside world – thanks to the EO, we’ve already wasted, and will continue to waste, lots of time on that discussion.

There are two serious threats of coordinated distributed attacks on the grid today. The NOI points out one of them (on page 20 paragraph 27), citing the Worldwide Threat Assessment put out by the ODNI, FBI and CIA in January 2019 (which interestingly enough hasn’t been updated yet for 2020, even though it’s supposedly an annual report), which stated that the Russians are able to cause temporary outages of several hours at various points in the grid. To be honest, I’m pretty tired of various government officials stating that this is a big threat – which it is – yet also being perfectly willing to leave the whole thing uninvestigated, since no government or industry group has even bothered to try to figure out whether these statements are true or not. Maybe FERC will decide that they should do this (after all, if the statements are true, there’s Russian malware sitting out there in a number of control centers today. Wouldn’t it be nice to know what malware to look for, so the industry could eradicate it?), but I’m not holding my breath at all on this one.

The other serious threat of coordinated distributed attacks on the grid is the software supply chain. There’s at least a possibility that a foreign adversary could have figured out a way to plant backdoors in software that is used at multiple locations on the grid and might be connected in some way to the outside world. It’s at least larger than the pure fantasy that there could be a distributed attack on transformers.

So the NOI points out at least some potentially important threats that might be worth considering for a set of “enhanced” NERC CIP standards. But why stop there? How about addressing some very serious threats to the BES that have been around for a long time, yet nobody – and especially not FERC – has even suggested that these threats be included in CIP? These include ransomware, phishing and advanced persistent threats. I’d say the first two of these are the most serious cyber threats in the world today. Shouldn’t they be considered, too? I could probably quickly think up 5-10 more important threats that simply aren’t addressed at all in CIP today – and I’m sure you could as well.

So here’s the deal: There are lots and lots of threats that should at least be considered if we want to “enhance” the NERC CIP standards. Even more importantly, new ones are appearing all the time. Currently, the only way to incorporate any of them into the CIP standards is by using the NERC standards development process, which includes developing a SAR (standards authorization request), constituting an SDT, going through a number of ballots and comment periods, getting approval from the NERC Board of Trustees, submitting the new or modified standard to FERC, waiting 6-18 months for them to approve it, and finally going through a 6-24 month implementation period. Then voila! Two to six years later, the new standard will be in place.

As an example of the lightning speed at which the standards development process operates, the risk posed by plugging an insecure laptop into a corporate network has been clear since the late 1990s. When did the first CIP requirement addressing this risk (CIP-010 R4) come into effect? 2017. It took 20 years. Phishing has been a threat for more than ten years, and ransomware for not much less than that, yet – again – there’s no discussion of even considering a requirement to address these two very serious risks, from FERC or anybody else (and FERC held a conference on ransomware about three years ago). The standards development process is so burdensome that nobody wants to even bring up the idea that there should be a new requirement.

The fact is that there needs to be some group (consisting of NERC entities, NERC staff, FERC staff, perhaps trade organizations like NRECA and NATF, and maybe even representatives from Congress or the general public) that regularly meets to survey the whole range of cyber threats to the BES and decide which are the most important ones. Only by taking a comprehensive look at all the risks can anybody make a decision on what’s important or not.

This group would be charged with regularly updating the list of cyber risks and providing it to all NERC entities (or at least those with BCS) – as well as providing guidelines for mitigating the risks. Each entity would be required to consider each risk and determine the likelihood that it applies to their BES environment. They would need to mitigate the most important risks and document why they don’t think the others are important enough to mitigate. They would be audited on how well they did this (which will be something like how CIP-013 will be audited, assuming that goes well).

Until a group like that can look at the universe of risks as a whole and identify the most important ones, and until the current CIP requirements can all be replaced with risk-based ones, I frankly think it’s a waste of time to even talk about developing new requirements to address a few risks, no matter how important these may seem to FERC or anybody else.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you hot at work – or should be – on getting ready for CIP-013-1 compliance on October 1? Here is my summary of what you need to do between now and then.


Wednesday, August 12, 2020

Is software bill of materials the answer to all of our problems? Part I



If you're looking for my pandemic posts, go here.

I confess I fell hard for the idea of a software bill of materials (SBoM) when I first heard of it about four years ago. If you’re not familiar with the term, you need to understand the problem that I thought it would address:

Nowadays, almost every piece of software you buy is loaded with components that are either open source or come from commercial third parties. Since so much code is available on the web for free or at a small charge, any developer who insists on reinventing the wheel for every piece of functionality they want to include in their software will probably not be in business too long. A 2017 Veracode study found that “83 percent of developers use code components to build web applications, with the average number of components per application (found to be) 73.”

Even more importantly, those components usually themselves have components, those components have components, etc. etc. Of course like any software, each of these components could have vulnerabilities that are only discovered – and maybe patched – years later. 

The recently discovered “Ripple20” vulnerabilities in the Treck TCP/IP library – which is embedded in lots of products, including some used by the power industry – are a great example of this. Guess when Treck first developed that library – 1997! It’s been embedded in just about countless products since then, and those products have been embedded in other products, etc. An article in ZDNet says:

The number of impacted products is estimated at "hundreds of millions" and includes products such as smart home devices, power grid equipment, healthcare systems, industrial gear, transportation systems, printers, routers, mobile/satellite communications equipment, data center devices, commercial aircraft devices, various enterprise solutions, and many others.

Experts now fear that all products using this library will most likely remain unpatched due to complex or untracked software supply chains.

Problems arise from the fact that the library was not only used by equipment vendors directly but also integrated into other software suites, which means that many companies aren't even aware that they're using this particular piece of code, and the name of the vulnerable library doesn't appear in their code manifests.

Moreover, even though the researchers who discovered the problems have identified 19 vulnerabilities in the library (of which four pose the highest level of risk. The CISA advisory is here. Thanks to Kevin Perry for pointing this out to me), they report they’re not even halfway through with their research! These 19 vulnerabilities have all been patched (I assume it’s 19 patches). So the owners of each of those hundreds of millions of devices just need to identify each component of each piece of software (as well as each component of each component, down to say the tenth or twentieth level) on each device that might contain the Treck library, and then apply each of the 19 (and counting) patches to each piece of software. And of course, we’re all real software techies who can easily figure out how to do all of this, right?

Other than that, this should be a pretty easy problem to fix…

And fixing the problem of vulnerabilities in software components – if it can be done – is the subject of Part II of this post. Stay tuned for the exciting conclusion, appearing soon on a blog near you!


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Are you hot at work – or should be – on getting ready for CIP-013-1 compliance on October 1? Here is my summary of what you need to do between now and then.