Monday, May 31, 2021

By the skin of their teeth

Last Friday, the Wall Street Journal carried a very good article[i] by Rebecca Smith on the February grid failure in Texas. I’ve appreciated her work (and become friends with her) since she wrote a great article on Russian supply chain grid attacks in 2018. In my opinion, that article had a big impact whose consequences continue to be felt today; I hope to write a post soon elaborating on that cryptic comment.

The main focus of the article is blackstart resources, in particular the power plant units that are required to start the grid again after it’s been completely shut down (e.g. as happened in the Northeast Blackout of 2003). The article provides probably the best explanation of blackstart for laypeople that I’ve ever seen, along with some really great graphics illustrating what happens in blackstart. 

The main point of Rebecca’s article is that a large number of blackstart resources (9 of 13 primary generators and 6 of 15 secondary generators) weren’t operating on the critical morning of February 15, when the ERCOT grid came within five minutes of completely shutting down. Because of this, had the grid actually collapsed, it’s very possible it would not have been completely restored for weeks or even longer. The article says:

Pat Wood III, former head of the Public Utility Commission of Texas and former chairman of the Federal Energy Regulatory Commission, or FERC, said the poor performance of black starts in Texas stunned him. Were there an uncontrolled grid collapse, whether from extreme weather, a cyberattack, or some other cause, Mr. Wood said, they are “what keeps us from going back to the Stone Age.”

Because my friend Kevin Perry – former Chief CIP Auditor of the SPP Regional Entity and previously a member of the IT leadership team for SPP itself – had provided some excellent insight on blackstart and the Texas grid collapse in this post and this one, I asked him to comment on the article. Here is what he said (with my comments in italics):

The article is mostly on the mark.  Except for renewable resources (hydro, solar, wind), blackstart units need fossil fuel.  Some are oil or diesel; many are gas combustion turbines that need natural gas.  And, in some cases, the blackstart unit is actually a commercial generator (500 kW to a MW or two) that is just enough to start a combustion turbine.  The turbine then provides the power to start the thermal units.  Regardless, if you don’t have fuel, the units cannot run.

 

I am still having trouble accepting the idea of a months-long outage if the blackstart units cannot all start up.  The blackstart unit supplies the 40-60 MW of power needed to run the station auxiliaries and fire up the big thermal units.  But, once you get a big unit online, the blackstart unit’s role is done.  Power from the restarted thermal unit is then used to start additional units/plants as well as serve load (where the distribution infrastructure is intact).  Yes, it is a delicate balancing act, and voltage or frequency instability can bring it all crashing back down again.  

 

My contention is that once you get one or more main thermal units online somewhere in the Interconnection (which in this case means the ERCOT grid, since it isn’t part of either the Eastern or Western Interconnection), you can build cranking paths from an energized TOP (Transmission Operator) to a TOP that has no available blackstart capability for some reason and get a thermal unit started up there.  Once that is accomplished, the TOP can then start up the rest of its area as it normally would.  

 

In an emergency, you do what you need to do to get the grid back up and sort out payment for cranking power served after the fact (there’s a lot of payment sorting-out going on in Texas now as a result of the outages. It will continue for years. Sorting out blackstart – or “cranking power” – payments would be child’s play compared to the huge amounts being fought over now).  And, yes, you will likely need to design a cranking path on the fly, but it shouldn’t take months.  The big concern is that you deplete the substation batteries before you can get power restored, further complicating matters.  I don’t know, maybe that is where the months-long estimate is coming from.

 

And that leads me to the concept of blackstart markets.  It used to be that each TOP had to have blackstart units or defined contracts with an adjoining utility, per EOP-005.  Where there is a blackstart resources market (as in Texas), blackstart owners now bid their capabilities into the market and the selected providers are compensated for making the capabilities available in case they are needed.  Restoration plans for the market participant TOPs are now built based on the market results.  So, there likely are (or at least were) more blackstart units than the market requires.  Because units have to be bid and not every bid is successful, there is an incentive for repeatedly unsuccessful bidders to take their units out of service, reducing options in an emergency.

 

Back when the CIP standards were being modified in accordance with Order 706, blackstart owners in an ISO/RTO with a blackstart market came out and asserted that if their blackstart units were subject to the CIP standards (the pending Version 4 at the time), they would not bid their units into the market.  Period.  End of discussion.  Their argument was that they would not spend huge dollars to comply with the CIP standards only to not be selected for the market (and no longer need to comply) with the next bid cycle (1-2 years). Today, blackstart units are specifically designated as low impact, partly because of this issue.

Of course, Kevin’s last paragraph is quite interesting. He points out that, despite the fact that blackstart units should be considered critical facilities, they aren’t considered such by the CIP standards; they are “low impact” and aren’t subject to the much-more-stringent requirements that “medium impact” plants are subject to.[ii]

It’s very hard not to sympathize with the blackstart plant owners when they say (as in Rebecca’s article) that they couldn’t afford the cost of complying with CIP at the medium impact level, given that they have to bid into a cutthroat market in order to have the blackstart designation in the first place. Why bother to even bid, if the return you get won’t make up for the additional CIP compliance costs (as well as costs for compliance with NERC’s EOP-005 and EOP-006 standards for blackstart units)?

So here’s the real problem: Blackstart plant owners aren’t compensated for the cost required to “harden” themselves against threats that would prevent their plants from being available when needed. There are two ways to deal with that: the “free market” solution (currently in effect in Texas and many other states) and the “regulated” solution.

In a pure free market solution, if a class of resources (in this case, blackstart generators) isn’t adequately compensated for their product, they will go out of business (or withdraw from the market, in this case). This will reduce the supply of that product, resulting in a shortage that will cause the price to rise. The rise in price will cause more companies to get into the market (or companies that had withdrawn to return), and the price will settle at a more sustainable level. And we’ll all live happily ever after.

Of course, this is a pretty good description of the system that was in place for the Texas market as a whole, and – in case you didn’t notice – that didn’t work too well in February. And, had the low frequency event in the early morning of February 15 not finally been dispelled by load shedding (i.e. putting people in the dark and cold), it’s very likely that the system wouldn’t have worked too well for blackstart resources, either – in other words, they wouldn’t have been there when needed to restore the ERCOT grid from a widespread shutdown. Perhaps people in Texas would still be in the dark today – although at least they wouldn’t be cold now.

So obviously there needs to be some regulated solution, where blackstart resources would be compensated through the rate base and they would be allowed to spend a reasonable amount to meet both their security and their compliance costs, including the cost of storing an adequate amount of fuel to enable them to do their job when they’re needed. In turn, they would be inspected regularly (as required by EOP-005). If you look at all the problems faced by the power grid nowadays, this looks like one of the easier (and cheaper) ones to remediate.


[i] You probably won’t be able to read the article, unless you’re a WSJ subscriber or you want to sign up for the free trial. But if you drop me an email, I’ll send you a PDF of the article.

[ii] On the other hand, very few power plants of any size have medium impact BES Cyber Systems (which is the only other level applicable to plants; only large Control Centers can have high impact BES Cyber Systems). Almost all BES Cyber Systems (BCS) in power plants are low impact, regardless of the plant’s size. And technically, the CIP standards apply to BCS, not to assets like plants, substations and Control Centers.

Friday, May 28, 2021

The Executive Order and CIP-013

Last Friday, Christian Vasquez published in E&E News a good perspective on how the recent Executive Order will impact the electric power industry. Christian quoted three people in the article: myself, Norma Krayem of Van Scoyoc Associates, and Patrick Miller, who shouldn’t need any introduction to readers of this blog (but just in case, he’s U.S. coordinator for the Industrial Cybersecurity Center, although he has a long and distinguished bio beyond that).

What I found surprising was that both Norma and Patrick focused on compliance issues, while I – who used to be known as someone who couldn’t say five words without two of them being “NERC CIP” – never even thought of the EO as in any way contributing to the compliance burden for electric utilities.

In the article, I pointed out – and in retrospect I could have made my point clearer with a few additional words – that the EO puts some pretty strict requirements on software suppliers to the federal government. Since there are very few US-based software suppliers who don’t sell both to the government and the private sector, and since no supplier is going to build separate products using different security standards, this means those strict requirements will protect electric utilities, as well as every other public and private organization. A nice “side effect”, and one that was surely intended.

In fact, I suggested that utilities turn the requirements in the EO (specifically, in Section 4 of the EO) into questions to ask their software suppliers. For example, if the EO tells the suppliers “Do X”, you can ask your suppliers “Do you do X?”. And the reason this is important is that the EO is very much focused on risks that arise in the software development process (not surprising, of course, since the SolarWinds attack that led to planting the Sunburst malware in Orion updates took place entirely within the SolarWinds development environment – using an amazingly elaborate process I discussed in this post). 

Software development risks have been almost completely ignored in just about any supply chain cyber risk framework or regulation you look at: DoD’s CMMC, NIST 800-161, the North American Transmission Forum’s questionnaire and criteria, the NERC CIP-013 Implementation Guidance, etc. So it’s about time they came front and center. And IMO the best news about the EO is that it essentially extends software development security requirements to suppliers to private industry, without making enforcement of these requirements the private sector’s responsibility (which is the approach taken by NERC CIP-013, since neither NERC nor FERC has any jurisdiction over suppliers to the power industry). The federal government does all the heavy lifting, and the private sector reaps a lot of the benefits.

Of course, not everybody sees this the same way as I do. In particular, the article says this about Patrick (who I’ve been friends with for at least 12 years): “Miller said NERC's rules for utility supply chain cybersecurity are vague enough that an auditor could use the order to add unwritten requirements. ‘If you're a utility trying to do supply chain, this just adds more confusion and uncertainty into your supply chain efforts going forward,’ Miller said.”

I do want to point out that Patrick was hardly complaining bitterly about this, since he went on to say “It's all good stuff, but there's a bunch of confusion still.”

I don’t disagree with Patrick that there’s lots of confusion about the NERC CIP supply chain cybersecurity standard, CIP-013. But I think the EO will reduce that confusion, not contribute to it. Here’s why I say that:

1.      CIP-013 is the first truly risk-based NERC standard (some people argue that CIP-014 is risk-based, and to some degree it is, but IMO there’s a big difference, which I won’t go into here. The new CIP-012 is also risk-based, but CIP-013 got there first). By “risk-based”, I mean that it’s up to the NERC entity to decide how to allocate the resources they have available for supply chain cybersecurity risk mitigation. They have to decide which are the most serious risks in their vendors’ environments, and focus their efforts on mitigating those. That way, they get the most bang for their buck.

2.      Of course, this is exactly what should happen, and I don’t have any problem with what’s written in CIP-013. However, I have a big problem with what’s not written there: CIP-013-1 R1.1 tells the NERC entity to “identify and assess” supply chain cybersecurity risks, but it gives no guidance at all on where to look for those risks. It doesn’t even provide a set of categories of risks that the entity should consider, including risks arising from the software development process.

3.      Of course, there are lots of third parties that have provided lists of risks (often in the form of questionnaires) – and the Implementation Guidance provided by the drafting team in 2017 listed a lot of risks for the entity to consider.

4.      However, the problem with all of these lists of risks is that none of them is part of the CIP-013 standard, or even referred to in it. If they were, the entity would be able to put in place mitigations for these risks (and if some risks don’t apply to an entity, they could just document why that’s the case). Then they could sleep at night, knowing they were compliant with what CIP-013 asked for.

5.      But as CIP-013 stands now (and will undoubtedly stand for at least a few years), the only guidance on risks to be addressed is R1.2, which lists six mitigations that must be included in the entity’s supply chain cyber risk management plan (even though each of these is a mitigation, they can all easily be reworded as a risk). These were included in CIP-013 not because the drafting team thought they were the six most important risks to address, but because FERC had mentioned them at various random places in Order 829 in 2016 and required that they be included in the NERC entity’s supply chain cyber risk management plan. The drafting team just decided to group them together in one place.

6.      But we all know that there are many other categories of supply chain cybersecurity risk that you should at least consider in your plan. Clearly, software development risks are one category. Others are risks arising from insecure purchasing or installation practices on the entity’s part, improper identity management or patch management practices on the vendor’s part, improper controls on the vendor’s subcontractors, etc.

7.      If the standard provided a list of these risk categories, there wouldn’t be nearly the degree of uncertainty that Patrick points to. The NERC entity would just need to consider risks pertaining to each of these categories (ideally, there would also be a list of risks that pertain to each category, although it could never be exhaustive. These might be provided as part of the CIP-013 standard, or as a separate document).

8.      In each category, the NERC entity would identify risks that have a high likelihood of being realized in their environment. And if all the risks in a particular category have low likelihood (e.g. risks having to do with vendor remote access, for a utility that doesn’t allow remote access in the first place), they would just need to document this fact. At this point, the entity has identified supply chain cybersecurity risks to the Bulk Electric System.

9.      How about assessing those risks? The entity needs to document that they considered the risks for each category and assessed them as to their likelihood of being realized and their impact if realized, since the degree of risk is determined by the degrees of likelihood and impact (BTW, I don’t see any way likelihood and impact can be assessed to any higher degree of accuracy than just “high” or “low”. I think it’s a waste of time to try to assign risk scores of 1-5 or something like that). By combining likelihood and impact (either adding or multiplying them), you get the degree of risk.

10.   But there’s a further simplification: When it comes to BES Cyber Systems (which are of course what CIP-013 applies to), I think the impact of their being exploited always needs to be considered high. Of course, sometimes there would be little or no impact from exploitation – as in the case where a relay is attacked and opens a line, but the line itself was already open for maintenance. In other cases, there might be a huge impact, as in the case where a relay opens a key transmission line on a hot summer day, when it’s already heavily loaded. Given that there’s no way to know in advance what the impact will be, it always needs to be considered high – in other words, it should be the high water mark. And if impact is always high, it’s the same for every risk, so it doesn’t need to be considered separately in risk assessments.

11.   This means that the only consideration for risk of supply chain attacks on BES Cyber Systems is likelihood. The question becomes: What is the likelihood that this risk will be realized in the supplier’s environment (or in the NERC entity’s environment, in the case of risks that apply to the entity itself)?

12.   For example, consider the risk that a service vendor might not conduct background checks on their new employees. As a result, they hire an employee who happens to be a card-carrying ISIS member. That employee gets remote access to a relay in your substation and opens an important line, causing an outage. The impact of this risk being realized could of course be catastrophic, but what’s the likelihood? That’s what your assessment tries to find out: You ask the service vendor about their policies for vetting the people they hire, including background checks. If they answer that they require all new hires to pass a background check, you assign them a low likelihood score for this risk, meaning the level of risk is also low. You don’t have to do anything more about this risk, with this vendor. (There’s a simple sketch of this kind of scoring right after this list.)

13.   But what if the vendor gives you the answers you want to hear, but you have some reason to suspect they may not be telling the truth? In that case – as well as the case where they honestly answer that they don’t do background checks – you need to give the vendor a high likelihood score for this risk.

14.   You’ll ultimately determine whether the vendor has a high or low likelihood of not doing background checks on their employees. If the answer is “low”, then you can stop worrying. However, if it’s “high”, you should take steps to mitigate this risk, probably by requiring that their employees be escorted at all times while onsite (since you can’t be sure they’ve had a background check). Or you might decide to stop using that supplier altogether.
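
Here is a minimal sketch, in Python, of the likelihood-only scoring described in items 9-14. The questionnaire fields, the category, and the decision rules are all my own illustrative assumptions – nothing here is prescribed by CIP-013 or NERC – but it shows how simple the logic becomes once impact is pegged at the high water mark.

```python
# Minimal sketch of the likelihood-only scoring described in items 9-14.
# The questionnaire fields and decision rules are illustrative assumptions,
# not anything prescribed by CIP-013 or NERC.

HIGH, LOW = "high", "low"

# One vendor's answers for one risk ("vendor does not perform background checks").
vendor_answers = {
    "performs_background_checks": True,   # the vendor's questionnaire answer
    "answer_is_credible": True,           # your own judgment of that answer
}

def likelihood_of_no_background_checks(answers: dict) -> str:
    """Likelihood that the 'no background checks' risk is realized."""
    if answers["performs_background_checks"] and answers["answer_is_credible"]:
        return LOW
    return HIGH

def mitigation_needed(likelihood: str) -> bool:
    # Impact is treated as uniformly high (the high water mark),
    # so the risk level is simply the likelihood level.
    return likelihood == HIGH

likelihood = likelihood_of_no_background_checks(vendor_answers)
print(f"Likelihood: {likelihood}; mitigation needed: {mitigation_needed(likelihood)}")
# A 'high' result might mean requiring escorted onsite access,
# or dropping the vendor altogether.
```

In a real plan you would repeat an assessment like this for each risk and each vendor, and the “high” results become the mitigations your R1 plan commits to.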

Given this, why do I say the EO makes CIP-013 less confusing, not more? I have two reasons:

1.      As I’ve already said, the EO will make all commercially-available software more secure, not just software sold to the federal government.

2.      Given that SolarWinds (and other recent software supply chain attacks) has shown the importance of including the category of software development risks in your supply chain cyber risk assessments, you can “mine” Section 4 of the EO for questions to ask your software suppliers. For example, on page 13, item (e)(i)(A) requires the supplier to use “administratively separate build environments”. This means the supplier should maintain a software build environment separate from their IT network (just like your OT network – or ESP – is separate from your IT network, and for similar reasons). You can turn this into the question “Do you maintain an administratively separate software build environment?”

3.      Another example: On page 15, item (e)(vii) reads “providing a purchaser a Software Bill of Materials (SBOM) for each product…” You can turn this into “Will you provide an SBOM for each product that you provide to us?” (A short sketch of how these question mappings might be kept appears right after this list.)
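
As a purely illustrative sketch (in Python), here is one way a utility might keep a running mapping from EO Section 4 provisions to supplier questions. The question wording and the structure are my own; only the cited EO items come from the order itself.

```python
# Illustrative sketch: turning EO Section 4 provisions into supplier questions.
# The question wording and this structure are my own; only the cited EO items
# come from the order itself.

eo_section4_questions = [
    {
        "eo_item": "4(e)(i)(A)",
        "provision": "using administratively separate build environments",
        "question": "Do you maintain an administratively separate software build environment?",
    },
    {
        "eo_item": "4(e)(vii)",
        "provision": "providing a purchaser an SBOM for each product",
        "question": "Will you provide an SBOM for each product you deliver to us?",
    },
]

def build_questionnaire(items):
    """Render the mapping as a plain-text questionnaire to send to a supplier."""
    return "\n".join(
        f"{i + 1}. {item['question']} (cf. EO item {item['eo_item']})"
        for i, item in enumerate(items)
    )

print(build_questionnaire(eo_section4_questions))
```

The same structure extends naturally to the other Section 4 provisions, and a supplier’s answers can feed straight into the kind of likelihood scoring described above.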

So I think the EO will make your CIP-013 compliance process easier, not harder. And it will make your organization more secure, as long as the organization uses software. And just about every organization on the planet uses software nowadays.

 

Is it time to review your CIP-013 R1 plan? Remember, you can change it at any time, as long as you document why you did that. If you would like me to give you suggestions on how the plan could be improved, please email me.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Tuesday, May 25, 2021

A few words from Tim Roxey


Anybody who has been at all involved in cybersecurity for the US electric power industry for more than just the last two years knows who Tim Roxey is – and if you don’t know (or you’d like an update on things you didn’t know about Tim, as was the case for me), see his bio at the end of this post.

Today I received this email from Tim, regarding the post I put up yesterday. He agreed to let me publish it. He’s adding some real perspective on my assertion (which I admit I’m revising now) that our definition of critical infrastructure needs to be expanded to include much more than asset owners in what are traditionally called CI industries – e.g. electric power, oil & gas, etc. It especially needs to include the providers of software and services whose compromise might lead to serious consequences – which could never be adequately redressed through purely legal means – to important government agencies (including agencies involved with national security) and CI industries.

It is refreshing to see that folks are opening up to concepts like those you mention here. Although I'm in my retirement years, I do still have interests in this issue.

Many years ago, having been involved in many exercises and actual incident responses, I realized that the concept of "what is important" or even "Risk" was not well understood. As automation swept through the industrial control surfaces of the BPS, several trite phrases came to my mind. For instance, if some automation is good, more is likely better, and if more is better, then too much is not enough.

But specific to this issue is realizing that societally significant dependencies were not a thing when these industrial systems were developed.

I made a statement to Mikey (Assante) while working on the first draft of the NIPP (National Infrastructure Protection Plan) that this CIKR (Critical Infrastructure and Key Resources) stuff we were struggling to craft defenses for was far more significant than the owners and operators sometimes realized. 

Imagine the owner's surprise when they learned of a significant defense critical dependency (DCEI – Defense Critical Energy Infrastructure) for their system. Imagine the surprise when an electricity sector owner/operator (AOO – Asset Owner/Operator) learned of some load put onto the grid by a new commercial development adding in a hospital complex (sector-sector interdependencies).

Now imagine the surprise on the adversary’s face as they read that Colonial had shut down their entire pipeline. HC BFFs, we only wanted money.

There were just so many more of these examples that we used the expression: “No one ever told us that all of this would be critical infrastructure, or we would have built it differently.”

Today we have advanced adversaries, exploiting the seams in our systems that we don't even know exist (critical interdependencies), using capabilities only dreamt of in nightmares (for example, an AI-assisted attack). 

So I do stick to my sentiment -  "Had we known then how dependent our society would become on {Insert Thingy} today; we would have designed {Inserted Thingy} differently." Or words to that effect. Complexity, which was once fun, has become our collective nemesis.

Tim’s Bio:

Mr. Roxey has 30 years of nuclear industry experience and over 50 years of computer-related experience. Tim was formerly VP and Chief Security Officer of NERC, and Senior Director of the E-ISAC.

Following each attack against the Ukrainian power systems, Tim spent time in-country, deconstructing physical and cyber attacks against the Ukrainian electrical grid as part of a public-private sector response team. On Tim's fifth trip in-country, the US Embassy in Kyiv asked Mr. Roxey to present his findings and recommendations to the Ukrainian Parliament (the Rada).

As President of Eclectic Technologies, Tim helps advance Grid security by designing simplified, physics-based attack surface disrupters such as the one recently marketed for defense against an asynchronous attack (Aurora is the example). One US-based utility is presently deploying these devices.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, May 24, 2021

This was a warning – but we didn’t understand it

Last week, Wired published a great story by Andy Greenberg on something that happened more than ten years ago: the RSA hack by the Chinese army. The RSA executives who were involved in discovering and responding to the attack – and in the huge effort to reach out by phone to every one of their customers and walk them through the mitigations they needed to put in place – are finally able to talk about it, because their 10-year NDAs have expired.

Tell me, does any of this sound familiar?

1.      A supplier of important IT products to organizations worldwide is attacked, and their primary product is reported to have been compromised.

2.      Because of the nature of that product, it occupies a very privileged position in the customers’ networks. The fact that the product might be compromised strikes fear into everyone who understands what the product does and how it is used.

3.      Because the customers include many federal government agencies, including three-letter agencies, this becomes a national security event - meaning it could be a disaster not only for a large number of private companies, but for every American citizen (and many citizens of foreign countries as well, since this product was by far the market leader). In other words, this was really an attack on critical infrastructure – we just didn’t realize it.

4.      Despite the breach’s potential to literally destroy the company, its leaders responded to it well and took the hard steps necessary to win customers’ confidence back.

5.      On the other hand, because the attackers were inside the company’s network for a long period, they very likely left backdoors – meaning they’ll probably be able to get back in whenever they want.

You’re right! The above applies to RSA, but it could just as well apply to SolarWinds. I’ll let you read the article, but here are some of the key differences between the two breaches:

1.      It seems only three companies – including defense contractor Lockheed Martin – might have been breached due to the RSA attack, and even if they were, they swear up and down that the hackers never compromised anything having to do with national security. In the case of SolarWinds, 18,000 customers received the compromised updates, but “only” 100-200 were actually penetrated by the Russians (including a number of federal agencies, of course). None of them have reported any serious losses, although it’s unlikely a federal agency will make an announcement like “We regret to say that the Russians discovered our most closely held secret, namely….” On the other hand, given that the Russians were inside those 100-200 organizations for many months, they have certainly laid the groundwork to remain undetected for years. Who knows what they’ll ultimately do?

2.      The Russians were inside the SolarWinds network – and more specifically their development environment – for about 15 months; even then, they were only detected because someone at FireEye (a SolarWinds customer) happened to ask about a new device that showed up on someone’s account. The Chinese were inside RSA for I believe just a few weeks, and for most of that period, they were tracked in real time by the RSA security staff. Unfortunately, the staff was literally just seconds too late to prevent them from making off with the crown jewels.

3.      The RSA breach was a classic information security breach: the Chinese attackers stole the seeds that supported the process of generating security passwords on RSA hardware and software tokens. It was obviously a very sophisticated attack, yet at heart it wasn’t too different from any other attack where important data is stolen.

4.      The SolarWinds attack was completely different from any previous supply chain attack, in that the software development process itself was compromised, in such a manner that no current technology (other than perhaps in-toto, which was only developed in the last few years and only got a lot of attention after the attack) could have discovered it.

5.      After the SolarWinds attack, people began to realize (including me) that the systems and processes used by software developers like SolarWinds are as much critical infrastructure as the systems that run the power grid – and they need to be regulated as such. However, I never expected that I would get my wish so quickly, since the recent EO does exactly that.

Ten years ago, I don’t remember anybody calling for cybersecurity regulation of companies like RSA. Perhaps this was because the idea of any cybersecurity regulation was new (for example, CIP version 1 only fully came into effect at the end of 2010). I believe the CIP standards were among the first cybersecurity regulations anywhere – other than perhaps the military, and breach notification laws in a few states. The general feeling was that, if a private company was breached and it affected their customers, that could be handled well in the court system – there was no need for government regulations.

We seem to have finally learned our lesson with SolarWinds: Just like the hack of an electric utility is more than just a financial issue to be settled between the utility and its customers, the hack of the provider of a vital technology to the federal government – and a huge number of private organizations – is much more than something to be settled with a bunch of lawsuits. As much as possible, it needs to be prevented through regulation. SolarWinds and RSA are both critical infrastructure organizations. They both need to be regulated – as well as a number of other providers of IT/OT products and services to the federal government.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, May 20, 2021

A ransomware attack on IT will (almost) inevitably affect OT as well


In the first post that I wrote on the Colonial Pipeline incident (debacle might be a better word), I pointed out a similarity between that attack and another serious ransomware attack in 2018 – this one on a large electric utility. The similarity was that in both cases, the OT network (the Control Centers, in the case of the utility attack) ended up being shut down, even though the victim organization swore up and down that only their IT network had been affected by the incident.

Of course, a lot of people have argued, in the case of Colonial, that they must be lying: the ransomware actually did penetrate the OT network, so they had no choice but to shut it down. I certainly have no way of knowing whether that’s true or not. As for the 2018 case, I think (just because of what I know about the NERC CIP requirements) that it’s very unlikely the ransomware penetrated the utility’s Control Centers.

But the point is that it doesn’t matter: In both the Colonial case and the 2018 case, continuing to operate the OT network while the IT network was recovering from ransomware made no sense. Both OT networks had to come down and be remediated.

In the 2018 case, the utility’s reason for bringing the Control Centers down (and since they never admitted they had gone down, I learned this from someone who at the time worked at a neighboring utility) was that they couldn’t take the chance that even one of the Control Center systems had become infected; they had to treat the entire CC network as infected, and wipe and rebuild every system.

In my initial post, I didn’t state Colonial’s reason for bringing the OT network down, since I didn’t know it. However, in my second post on Colonial (this is the third), I noted that WaPo had pointed out that Colonial couldn’t do any invoicing while the IT network was down – so if they had kept the pipeline running, they would have ended up delivering a lot of gasoline (all of it?) for free. The practice of providing your product or service for free is frowned upon in business school classes, from what I hear (it’s a great way to build the customer base, but not such a great way to build profitability).

That alone would be a pretty good reason to shut down the OT network (and it would have been for perhaps most organizations that are hit by ransomware), but I also received a comment on that post from Unknown – who, along with his good friend Anonymous, is one of the two most prolific commenters on my posts. Unknown pointed out that “Pipelines are like banks and oil in the pipeline is like cash in the bank. If a bank loses its ability to track who gave them cash (or who they loaned it to), then there is no point opening the doors, even if they can safely store the money in the vault.”

Of course, this makes all the sense in the world. Colonial doesn’t own the gasoline and other products that are delivered by its pipelines; it just charges a fee for delivering them. If it can’t track who delivered products to its pipeline, or who received them on the other end, then it’s going to be on the hook for the full cost of those products – just like a bank can’t tell their customer “We’re sorry, but all of your money is gone. We have no idea what happened to it. But don’t even think of suing us, since we’re not liable for this. We apologize for any inconvenience this might have caused you.”

So here was another very good reason why the OT network might have been shut down. That made three reasons (including the one from the 2018 incident) why an OT network would be shut down if the IT network were compromised by ransomware. I thought I’d developed a complete catalogue of reasons.

However, on Monday I read this article in Utility Dive that pointed to another important reason why the OT network would have to come down if the IT network did: If the ransomware did jump to the OT network, it might cause an uncontrolled shutdown of operations, which of course can be very damaging. By proactively shutting operations down according to the required procedures, Colonial avoided this outcome.

But the article brought up another reason as well: Continuing to operate the OT network while the IT network was still infected (and remember, Colonial paid the ransom to unlock their IT systems and start running again. Just decrypting a system doesn’t remove the ransomware) raised the risk that the ransomware would jump to OT. A prudent organization, before they decided to leave their OT network running, would have to ask: Do we trust our security controls enough that we’re sure we’ll prevent the ransomware from getting into the OT network?

Colonial might well have decided that the answer to this question was No, and with good reason: Because so much having to do with OT (ICS) security is relatively new and untested, it would be very hard for even the most seasoned ICS security professional to say – with a straight face – that he or she is 100% sure that the ransomware won’t jump from IT to OT, possibly leading to a catastrophic uncontrolled shutdown that might leave the whole system down for a long time.

But the article wasn’t finished with possible reasons for shutting down the OT network when the IT network succumbs to ransomware. For another reason, it turned to Tim Conway of SANS, who is quoted (in a webcast last Thursday) saying “(If you consider the networks to be a bookshelf, with IT systems on one end and OT systems on the other), there's a whole bunch of stuff that lives…in between (the IT and OT systems).” This “in-between zone” includes both IT and OT assets, as well as a lot of business intelligence data. Tim gave the example of the “manufacturing execution system (MES)”, as one of the systems that sits in this zone on the bookshelf.

The article continues:

Conway suggested using 2017's NotPetya ransomware attack as an example: Maersk was a collateral victim of the malware, but the shipping company didn't have any issues on the "far end" of the OT side, such as crane control or maritime shipping. 

"But if the issue was, they didn't know what was inside the containers, that impacts all of its operations," he said. It would then complicate labeling the NotPetya attack as one on IT or OT. 

This is an excellent example of another attack (which literally almost sank – pun intended – Maersk) on what was technically the IT network, which required the organization to bring down their OT network as well. It makes no sense to continue shipping operations when there’s no way to know what you’re delivering and to whom. In fact, this is almost exactly analogous to one of the reasons why Colonial might have shut down their OT network, since they’re in the (liquid) goods transportation business as well.

I’m sure there are other reasons why a company might need to shut down their operational systems, even if only their IT systems are brought down by a ransomware attack. So it seems clear to me that it’s almost inevitable that what I call an “operations-focused” company – as opposed to an information-focused company like an insurance company or a consulting firm – will be forced to bring their OT network down if their IT network falls victim to a ransomware attack.

You can refer to this from now on as “Tom’s First Law of OT networks”. If I come up with another, I’ll be sure to let you know.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, May 17, 2021

The SBOM PoC’s web site is up!


I’m pleased to say that Idaho National Laboratory (INL) has put up an engaging web site that will serve as the one-stop shop for the Energy SBOM (software bill of materials) Proof of Concept program. The program is co-sponsored by the National Telecommunications and Information Administration (NTIA) of the Department of Commerce and the Office of Cybersecurity, Energy Security and Emergency Response (CESER) of the Department of Energy. INL is the home of DOE’s CyTRICS program, whose leader, Virginia Wright, is co-leader (with me) of the PoC.

One type of information you can find on the site is the times and connection information for upcoming meetings, the next of which is this Wednesday, May 19, from 12-1 PM ET. Note that we plan to have biweekly meetings at the same time on Wednesdays, so the following one will be June 2 (I can’t believe June is coming up, since it seems Chicago just crawled out of a brutal February).

The PoC’s kickoff meeting was on April 26 and the video should be available shortly. In fact, videos and meeting notes from all meetings will be available on the site, as well as links to various articles of interest (plus videos of the four informational webinars we conducted from January through April). And since we’re planning on conducting an active, hands-on educational program during the PoC, you can be sure some of that will be facilitated on the site itself.

We definitely need a web site, since interest in the PoC is much higher than I anticipated (at least for this stage of the program). On the “user side”, we have 32 power market participant organizations (mostly utilities) and industry organizations (e.g. EEI and NATF). On the “supply side”, we have 14 suppliers of software, intelligent devices, or tools for security management.

And there are a number of government agencies and consulting firms of various types – as well as a few people who seem to be just curious about SBOMs. That’s one of the best things about this PoC: there will be very little that we discuss that applies uniquely to the energy industry. Why, you might invite your neighbor who’s in insurance to join you at the meetings! As long as they don’t try to sell me more life insurance.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Saturday, May 15, 2021

Heard any good Ercot (or PUCT) jokes lately?


Last Saturday, the Wall Street Journal published an article that would seem to be more likely to appear in that other august publication, The Onion. It was titled “As Texas Went Dark, the State Paid Natural-Gas Companies to Go Offline”.

Of course, it would have been much better for the citizens of Texas if the article had appeared in The Onion, but unfortunately that wasn’t the case. This is something that actually happened, although the headline was wrong in saying that “the State” made these payments.

In fact, it was the Electric Reliability Council of Texas (Ercot) that bears most of the blame for this sordid event (on the other hand, they deserve praise for averting what could have been a much more serious crisis, due to the actions they took during what must have been an incredibly tense nine-minute period early in the morning on Monday, February 15, which I described in this post. But on the other other hand – I believe I’ve run out of hands! – they deserve lots of blame for maintaining the $9,000/MWh electricity price after the Texas Public Utility Commission decided, later that same day, that the then-current $1,200/MWh market price wasn’t quite high enough for its liking. Ercot maintained that price through Friday, even though the market price went down to its normal level of about $25/MWh by Thursday. That sordid story is described in this post).

How did this happen? If you’re familiar with the power industry, you probably know about “demand response”. This is the name for programs that incent power users to curtail their use of power during periods of high demand or constrained supply. This particular Ercot program paid industrial users to shut down operations.

You can probably see the problem already: At the height of the crisis on February 15, Ercot activated their demand response program, believing this to be a lesser evil that would avert a greater evil: mandatory power shutoffs (with no compensation, of course).

I don’t need to tell you that the DR program didn’t work, and mandatory shutoffs happened anyway. In fact, ultimately four million Texans went without power, many for days. But in theory, this was the right thing to do, even though it ultimately didn’t fix the problem of a shortage of power relative to demand (which had spiked because most Texans heat with electricity). However, it’s likely that the activation of the program made the total problem much worse, since some of the participants in the DR program were natural gas producers and gas distribution (pipeline) companies. And a lot of power plants in Texas run on natural gas.

Moreover, a lot of gas production – e.g. in the Permian Basin – was already shut down because the wellheads froze. And even if a wellhead wasn’t frozen, it’s possible that the pipeline carrying its gas to power plants and other users was frozen, because in Texas – unlike in colder climes – water vapor isn’t usually removed from the gas before it’s put into the pipeline (maybe that will change now). But now the activation of demand response reduced gas supply and distribution capacity even more, since the companies in the program curtailed operations.

Of course, these gas companies could have resumed production at any time (and given that the price of natural gas had spiked during the crisis, along with the power price, they certainly should have been tempted to do that). However, the rules of the program meant they would be penalized if they did.

Of course, if somebody had been thinking at the Texas PUC during this crisis (a pretty tall order, it seems. PUCT regulated the program, although Ercot was in charge of implementing it), they might have thought that maybe – just maybe! - it would be a good idea to suspend these penalties for gas companies. This would allow them to provide gas to their customers, including gas-fired power plants. And that could allow the plants to reopen and help end the crisis more quickly than it ultimately did.

And here I have some good news: PUCT did actually do that! The WSJ says they “…issued a memorandum that said if a facility resumed normal operations ‘because it was providing a critical service or product …enforcement discretion will be exercised.’”

But I also have some bad news: The PUCT didn’t issue that memorandum until Friday, four days after the DR program was activated. And at 9 AM on Friday, “the blackout officially ended and Ercot allowed participants to resume their use of electricity.”

Oh well, it’s the thought that counts.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, May 13, 2021

Executive Order: “Improving the nation’s cybersecurity”


Yesterday, the White House put out their long-rumored EO on cybersecurity. From what I’d read, I thought it would focus on software security. It certainly does that, but it also addresses some other areas of cybersecurity as well, especially incident response. Regardless, every topic covered in the EO addresses an important need – and some of them were needs I hadn’t thought of, for example the “cyber National Transportation Safety Board”, which now that I think of it is a great idea.

In this post, I’m not going to try to summarize the whole order. Rather, I’ll just focus on what’s in Section 4 “Enhancing Software Supply Chain Security”; and I’ll skip some of the items that I find less interesting than the others. There’s a lot in there!

·        On page 13 in item (c), NIST is ordered to issue preliminary guidelines for most of the items in Section 4, within 180 days of the date of the order.

·        In item (e) on the same page, NIST is ordered to issue guidance for those items, although this happens 90 days after the guidelines were issued. The guidance includes “standards, procedures or criteria” regarding:

(i) secure software development environments, including such actions as: (A) using administratively separate build environments; (B) auditing trust relationships; (C) establishing multi-factor, risk-based authentication and conditional access across the enterprise; (D) documenting and minimizing dependencies on enterprise products that are part of the environments used to develop, build, and edit software; (E) employing encryption for data; and (F) monitoring operations and alerts and responding to attempted and actual cyber incidents;

·        It seems to me that all of the above controls were probably inspired by the SolarWinds attack, since the key event (if you want to call a 9-month process an “event”) was when the Russians penetrated the SolarWinds build environment for their flagship Orion platform and implanted the Sunburst malware in 7 or 8 Orion updates. In any case, all of items A-F are important for securing a software build environment, and I certainly support these items being made mandatory for all software suppliers to the federal government (and by extension, suppliers to private industry as well, since few if any suppliers are going to sell one very secure product for government use and one not-so-secure product for everyone else. That just doesn’t work well on a single marketing brochure).

·        The next item I want to discuss is item (vi) at the top of page 15. This deals with what I consider to be a really important area of concern for software supply chain security: the security of what went into building a software product, not just the security of the code written by the developer of the product. Item (vi) includes:

1.      Maintaining accurate and up-to-date data, provenance (i.e., origin) of software code or components;

2.      Controls on internal and third-party software components, tools, and services present in software development processes; and

3.      Performing audits and enforcement of these controls on a recurring basis.

·        Note the second item: Besides controls on components, this item says there need to be controls on tools and services that were “present in software development processes”. Tools and services aren’t components, and they aren’t normally included in an SBOM; on the other hand, a lot of people think they should be included. Note to self: This is something to be discussed in the Energy SBOM Proof of Concept (which BTW will hold its second public workshop next Wednesday, as I described in yesterday’s post. Anyone who uses energy is welcome to attend – and if you don’t use energy, please tell me how you accomplish that feat).

·        Item (vii) on page 15 requires that, starting 270 days from yesterday, software suppliers to the federal government provide a software bill of materials (SBOM) with every software product they deliver (it uses the term “guidance”, not “requirement”, but in this context I think a federal agency will consider this to be an offer they can’t refuse). And again, I’m sure that suppliers of software to the Feds will deliver SBOMs to their commercial customers as well. How could they not do this?

·        So since SBOMs will be “mandatory” in 269 days, does this mean that the participants in the NTIA Software Transparency Initiative can disband and go home? No, not at all. This is because “providing SBOMs” isn’t a one-off deal. Yes, it is essential that a software supplier provide an SBOM with a new product, but they can’t stop there. Ultimately, they will probably need to provide a new SBOM every time there is a change in the software (for example, when an older component is replaced with a newer version of the same component), or even when just the value of one field in the SBOM changes without any change in the code at all (e.g. the supplier of a component is acquired by another supplier, so their name changes).

·        But this is still an unresolved question; the same goes for a host of other questions about how SBOMs are produced, distributed and especially used. Most of these questions can’t be decided simply through a bunch of wise people sitting around a virtual table and stroking their chins, but through a proof of concept in which these procedures are tested in practice. An NTIA PoC for healthcare has been in operation (with different iterations) since 2018; PoCs for autos and energy are starting up now; and others will undoubtedly follow.

·        So even though delivering a single SBOM will be “mandatory” for software suppliers in 270 days, that is just the beginning. The real goal of the Software Transparency Initiative is for most suppliers to be producing SBOMs (and also VEX documents) as often as needed, which may in some cases be quite frequently. This goal will take many years to achieve, but I don’t doubt that it will be achieved at some point. Meanwhile, the initiative needs to keep testing the procedures in PoCs and documenting the results.

·        Another interesting item on page 15 of the EO is “(x) ensuring and attesting, to the extent practicable, to the integrity and provenance of open source software used within any portion of a product”. It’s just about certain that the majority – one study says 90% – of software components are open source. So this is obviously an important requirement. I’ll point out that it’s very fortunate that the words “to the extent practicable” are included here, since this won’t be easy to comply with. Of course, knowing what open source components are included in a software product requires an SBOM.

·        One item on page 15 specifically calls out NTIA: “(f) Within 60 days of the date of this order, the…Administrator of the National Telecommunications and Information Administration shall publish minimum elements for an SBOM.” This should in theory be easy, since the minimum elements for an SBOM were described in this document in 2019: Author name, Supplier name, Component name, Version string, Component hash, Unique identifier and Relationship. However, I know there’s still some disagreement about whether this is the right list, so I’m sure this will be a topic in some NTIA meetings over the next two months. The nice aspect of this is that the question will without a doubt be settled after 60 days! (A small illustrative example of these minimum elements appears at the end of this post.)

·        Item (g) on page 15 reads “Within 45 days of the date of this order, the Director of NIST…shall publish a definition of the term ‘critical software’…That definition shall reflect the level of privilege or access required to function, integration and dependencies with other software, direct access to networking and computing resources, performance of a function critical to trust, and potential for harm if compromised.”

·        Furthermore, item (h) on page 16 reads “Within 30 days of the publication of the definition…the… Director of CISA…shall identify and make available to agencies a list of categories of software and software products in use or in the acquisition process meeting the definition of critical software…” So NIST will define critical software and CISA will decide what software that definition applies to in practice.

·        I find the above quite interesting. Anybody who has been involved with NERC CIP compliance knows that the foundation of that compliance is identification of “critical cyber assets” (i.e. particular servers, workstations and integrated devices like relays and firewalls. Note that this term was used in CIP versions 1-4. The term starting in v5 is BES Cyber Systems, but it's not essentially different from "critical cyber assets"); these are what the protections required by the CIP standards apply to (and I’m sure most other cybersecurity standards follow an approach like this to determine applicability). However, in the EO, the assets being protected are software assets, which would normally be run on generic Intel-standard servers and workstations.

·        But what about integrated, sealed devices, like Cisco™ firewalls or Schweitzer™ relays? The software running in them can certainly be considered critical, but a user of such a sealed device will usually never know what software is inside it – and to find that out, they would need an SBOM!

·        Are these devices really out of scope for the EO? Maybe CISA will address this issue by not categorizing as critical any software that runs on a sealed device. But doesn’t that strike you as a cop-out? For example, it would be very hard to argue to a substation engineer that the software running on a protection relay isn’t critical to the safe functioning of the power grid.

·        We’ll see what happens with this, but meanwhile it seems there may be a serious flaw in the EO: it doesn’t seem to address intelligent devices at all, just software sold separately from hardware. Of course, there would be various ways to remediate this problem; it doesn’t have to sink the whole EO!

·        What happens once critical software is identified? Item (i) on page 16 reads “Within 60 days of the date of this order… the Director of NIST…shall publish guidance outlining security measures for critical software…, including applying practices of least privilege, network segmentation, and proper configuration.”

·        Note that here we’re no longer talking about measures that software suppliers have to take, but rather measures that federal agencies have to take to secure “critical software”. In other words, Section 4 of the EO applies both to suppliers and users. Of course, there’s nothing wrong with this approach. Sometimes, the best mitigations for supply chain cyber risks are the ones applied by the user organization itself.

·        Now we jump over a couple pages of discussion of implementation to item (r) on page 18, which reads “Within 60 days of the date of this order…the Director of NIST…shall publish guidelines recommending minimum standards for vendors' testing of their software source code, including identifying recommended types of manual or automated testing (such as code review tools, static and dynamic analysis, software composition tools, and penetration testing).”

·        In other words, rather than try to specify up front how vendors should test their source code (a best practice for software developers, but one that’s seldom mandatory), the EO requires NIST to recommend types (note the plural) of tools and methodologies for testing. This is how security requirements should be written. (The third sketch at the end of this list gives a rough idea of how a couple of these tool categories might be wired into a build.)

·        Items (t) and (u) on page 19 require NIST, within 270 days of the order, to develop a consumer software labeling program – i.e. some way of classifying consumer software by categories ranging from “You have nothing to fear from this software” to “This is toxic s__t”. I wish NIST good luck in developing this program, but the White House may have just handed them the tail of a raging tiger and told them to capture it and lock it in a cage. This probably won’t go well, but I admire the WH for even being willing to try it.

·        Finally, I point you to item (j) on page 31. This is a “definition” of SBOM, which goes well beyond what a normal definition would include. It seems that somebody – perhaps Allan Friedman? – decided to use this definition as a teaching tool, even more compact than the recently published SBOM at a Glance document (which I also recommend you read).
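The three sketches below illustrate a few of the points above. They are my own rough illustrations, not anything the EO, NIST or NTIA prescribes.

First, since knowing what open source components are in a product requires an SBOM, here is a minimal sketch of what consuming one might look like. It assumes a CycloneDX-format SBOM in a hypothetical file named sbom.json, handles only the simplest license form that format allows, and skips all the error handling a real SBOM consumer would need:

```python
import json

# Minimal sketch (not a full implementation): read a CycloneDX-format SBOM
# from a hypothetical file "sbom.json" and list the components it declares,
# along with any license IDs. A real SBOM consumer would use a proper
# CycloneDX or SPDX library and handle many more fields and edge cases.

with open("sbom.json") as f:
    sbom = json.load(f)

for comp in sbom.get("components", []):
    name = comp.get("name", "<unknown>")
    version = comp.get("version", "<unknown>")
    # CycloneDX allows several ways to express licenses; this only handles
    # the simple {"license": {"id": "MIT"}} form.
    licenses = [
        lic.get("license", {}).get("id", "unspecified")
        for lic in comp.get("licenses", [])
    ]
    print(f"{name} {version}  licenses: {', '.join(licenses) or 'unspecified'}")
```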
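Second, here are the seven minimum elements from the NTIA list above, modeled as a simple record. The field names and example values are mine; the NTIA documents define the elements themselves, not any particular data structure or serialization:

```python
from dataclasses import dataclass

# Illustrative only: the seven "minimum elements" discussed above, modeled as
# a simple record. Field names are my own invention, not an official schema.

@dataclass
class SbomEntry:
    author_name: str        # who created the SBOM entry
    supplier_name: str      # who supplies the component
    component_name: str
    version_string: str
    component_hash: str     # e.g. a SHA-256 of the component artifact
    unique_identifier: str  # e.g. a purl or CPE
    relationship: str       # how the component relates to its parent, e.g. "includes"

# Hypothetical example entry, purely for illustration
example = SbomEntry(
    author_name="Example Software Co.",
    supplier_name="Some Open Source Project",
    component_name="somelib",
    version_string="2.4.1",
    component_hash="sha256:0123abcd...",
    unique_identifier="pkg:pypi/somelib@2.4.1",
    relationship="includes",
)
print(example)
```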
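Third, a hedged sketch of how a (Python-based) supplier might wire a couple of the tool categories named in item (r) into a build step. The EO doesn’t name specific tools; bandit (static analysis) and pip-audit (software composition / known-vulnerability checking) are simply examples I’m familiar with, and the src and requirements.txt paths are assumptions about project layout:

```python
import subprocess
import sys

# Hedged sketch: one way a Python-based supplier might run a couple of the
# tool categories mentioned in item (r) as part of a build. The tools named
# here (bandit, pip-audit) and the "src" and "requirements.txt" paths are
# illustrative assumptions, not anything the EO or NIST specifies.

checks = [
    ["bandit", "-r", "src"],                  # static analysis of the supplier's own code
    ["pip-audit", "-r", "requirements.txt"],  # known vulnerabilities in declared dependencies
]

failed = False
for cmd in checks:
    print("running:", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        failed = True

# Fail the build if either check reported problems
sys.exit(1 if failed else 0)
```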

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, May 12, 2021

Next week might as well be SBOM Week


Next week might as well be called SBOM Week (or maybe “Take your SBOM to work week”), because of all the events that are taking place – four of which you can attend or participate in! Here they are:

1. Joe Biden kicks the week off

It is widely rumored (and was even mentioned in a webinar that I attended last week) that the new Executive Order for software security (although it might not have exactly that title) will be released next week, with Monday the most likely day for it to happen. It’s further rumored – in fact it’s just about certain – that SBOMs will play a prominent role in the EO. Because the order will focus on measures to improve software security of federal agencies, it’s likely the EO will encourage federal agencies to require that software suppliers provide SBOMs with their products. Of course, if suppliers start producing SBOMs for the Feds, they’ll produce them for all of their customers.

However, what I’m hoping the EO doesn’t do is require suppliers to the Feds to start providing SBOMs anytime within, say, the next 24 months. There are still far too many questions that need to be worked out about how SBOMs will be produced, distributed and used – and most importantly, there’s a lack of appropriate software tools that would allow most organizations to easily utilize SBOMs for purposes like vulnerability management. In this post, I said that it would be best if the EO just said that a rulemaking would begin in, say, two years. This would give industry (and by the way, the EO will apply to all federal agencies, not just those having to do with the energy industry) time to come to at least rough agreement on the rules of the SBOM game (see the next item below).

On the other hand, simply saying that some sort of mandatory requirement for SBOMs is coming at some point in the future will provide a big wake-up call that SBOMs need to be taken seriously by a) any organization that produces software (for its own use, or use by other organizations), and b) any organization that uses software. That pretty well covers every organization on the planet, although obviously it’s the larger ones that will derive the most benefit from having SBOMs available.

Another very likely component (no pun intended) of the EO is that it will identify the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce as the lead organization for working out the “rules of the road” for SBOMs. This won’t be any surprise if it happens, since I know of literally no other venue worldwide where rules and procedures for SBOMs are even being discussed (and many people from Europe and Japan regularly participate in the NTIA meetings).

Note from Tom 7:18 PM CT on Wednesday: I should have said the EO will be out "by Monday", not "on Monday". I just received it from Mark Weatherford and I'm going through it now. I'll have a post out tomorrow. It definitely has SBOMs in it and it definitely seems to require them for software sold to the government. When? As of the effective date. Nothing about waiting a year or two. Oh well...

And that leads to the next event in SBOM Week:

2. SBOM Energy Proof of Concept workshop

As I’ve mentioned many times (including in this post from last November), the NTIA’s “laboratories” for developing agreement on rules of the road for SBOMs are Proofs of Concept. The first of these – which is still ongoing, although in a further iteration – was for the healthcare industry (it started in 2018). Now an energy PoC is starting, as well as an Autos PoC (where the big automobile manufacturers are the “consumers” of various electronic components that go in cars, and of course the component manufacturers are the producers. I’m looking forward to the day when I’ll be able to choose a car not just based on whether it has a sunroof, but on how many unpatched software vulnerabilities are found in it).

The energy PoC is starting with a series of workshops that are open to everybody, as long as you’re a user of electricity. These will discuss what has already been accomplished as far as SBOM rules of the road are concerned, as well as what remains to be accomplished. These will not mainly be presentations; they will be quite interactive, including discussions and demonstrations of tools, etc. They’ll be recorded, with the links posted on the PoC’s in-progress website, hosted by those good folks at Idaho National Laboratory (DoE is a co-sponsor of the PoC, along with NTIA).

But note that the goal of the PoC isn’t just education but collaboration – in which software suppliers and users jointly work out the rules of engagement for SBOMs. Fortunately, the energy PoC will be able to build on the 2+ years of experience of the healthcare PoC, but there’s still a lot to be worked out (plus we’re under no obligation to slavishly follow what the healthcare folks have done; there are definitely differences between the industries, which might well require differing approaches to SBOMs).

The second PoC workshop will be next Wednesday from 12-1 ET (and that will be the time for our bi-weekly meetings going forward, based on the results of the Doodle poll that we sent out after the first workshop). The connection information is:

Date: May 19, 2021

Time: 12pm-1pm ET

Teams link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_MDU1NGVlMGUtZmIwYi00OWUxLWIxZjItNjc5ZDY4ODJlMzI4%40thread.v2/0?context=%7b%22Tid%22%3a%22d6cff1bd-67dd-4ce8-945d-d07dc775672f%22%2c%22Oid%22%3a%22a62b8f72-7ed2-4d55-9358-cfe7b3e4f3ed%22%7d 
Dial-in: +1 202-886-0111,,114057520#  
Other Numbers: https://dialin.teams.microsoft.com/2e8e819f-8605-44d3-a7b9-d176414fe81a?id=114057520

There’s no sign-up for the workshop, but you should make sure you’re on the PoC mailing list. To join (or if you want to confirm you’re already on the list), send an email to sbomenergypoc@inl.gov.

Most of the meeting will be devoted to a discussion of the draft PoC charter that the two co-leaders (myself and Ginger Wright of INL) and Dr. Allan Friedman of NTIA have prepared. We very much want to have the final charter reflect what the group wants it to reflect, so we think it’s important to have this discussion.

3. RSA

Next week is the RSA Conference, which is virtual this year. There are four conference events that will address SBOMs.

The first of these is a presentation by two active participants in the NTIA Software Component Transparency Initiative. One of these is Sounil Yu, CISO of JupiterOne. The other is Josh Corman, a very well-known cybersecurity consultant and researcher (now with CISA) and the developer of the concept of software bill of materials (as well as the name). Their topic is “How CISA Is Charting a Path Toward Defensible Infrastructure”. Knowing both presenters, I can assure you there will be intelligent discussion of SBOMs, and their importance as a component of defensible infrastructure. Their presentation is from 12:45 to 1:25 PM Pacific Time next Tuesday.

The second event is what looks like a really interesting panel discussion entitled “Challenge Accepted: 3 Experts, 3 Big Ideas on Supply Chain Security”. It includes a stellar lineup: Dr. Allan Friedman, Director of Cybersecurity Initiatives of NTIA, Alyssa Feola, Cybersecurity Advisor to the GSA, and Matt Wyckhouse, founder and CEO of Finite State. This discussion will be from 1:30 to 2:10 PM PT on Tuesday (that is, five minutes after Sounil’s and Josh’s presentation ends. Fortunately, you don’t have to worry about running from building to building in the Moscone Center for this year’s RSAC!).

The third event is a panel discussion entitled “DBOM and SBOM: New Options For Better Supply Chain Cybersecurity”.  The panel is led by Mark Weatherford, former NERC CISO (and much more), and includes Jennifer Bisceglie, Founder and CEO of Interos, Chris Blask, Global Director, Industrial and IoT Security, Unisys, and somebody else….oh yes, me[i]. This discussion is from 2:40 to 3:20 PM PT next Thursday 5/20.

I believe that all three of the above sessions were pre-recorded (I know the third one was!), so instead of an interactive Q&A session at the end, there will be Q&A through the chat. You’ll be able to submit questions before or during the session, and the presenters will answer them in the chat while the recording plays (since they don’t have to talk). There might also be a little time at the end when the presenters reply live to a few of the chat questions.

The fourth event occurs immediately after the third one. It’s called a “Live Deeper Dive”, and it consists of Q&A and “engagement” with the four participants in the panel I’m in. Of course, this was not pre-recorded, and it won’t be recorded for later playback. This runs from 3:25 to 3:50 PM PT (i.e. it starts five minutes after the third event ends. Note you need to register for both events).

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] You’ll note that my bio says that my book Supply Chain Cybersecurity for Critical Infrastructure is available now on Amazon. I wrote this in January, when I thought it was just about certain it would actually be available in May. However, I’ve learned to my great surprise that nothing in life is certain, and it’s in fact not available yet (although it’s certainly close to completion). I do expect to have it available in the summer.