Tuesday, June 29, 2021

It turns out SBOMs are more “required” than I thought

Last Friday, NIST published the definition of “critical software” that the May 12 Executive Order required them to develop. I confess that I looked through it briefly, satisfied myself that NIST hadn’t taken the very helpful advice I’d provided them, and put it aside – promising to myself that I’d write a post this week complaining about how nobody ever listens to me.

However, this morning Miriam Baksh, NextGov’s excellent cybersecurity reporter, published a good article that made me go back and look at NIST’s definition. I realized that it actually does at least pay attention to the two concerns I raised in my post linked above – although I sincerely doubt this was because of that post (I also did submit comments to NIST on this subject, although I don’t think they were the reason, either).

But what really caught my eye was something I hadn’t noticed when I skimmed through the definition on Friday. The definition reads

EO-critical software is defined as any software that has, or has direct software dependencies upon, one or more components with at least one of these attributes:

  • is designed to run with elevated privilege or manage privileges;
  • has direct or privileged access to networking or computing resources;
  • is designed to control access to data or operational technology;
  • performs a function critical to trust; or,
  • operates outside of normal trust boundaries with privileged access.

As far as the bullet points go, they more or less reproduce what the EO “suggested” should be addressed in the definition of critical software (i.e. the EO essentially said “NIST, you’re free to develop any definition you want, as long as it’s this one…”). But then I mentally applied my sentence diagramming skills (which I excelled at in Mrs. Llewellyn’s third grade class, I’ll have you know...) to the first two lines and realized how much NIST’s definition depends on the idea of components. Here’s what I mean:

If for the moment we drop the clause about dependencies, the definition reads “any software that has… one or more components with at least one of these attributes…” Why do the words “that has one or more components” have to be there? Why doesn’t it just say “software that has one of these attributes”? After all, the EO’s “definition” of critical software just refers to the attributes of the software itself, not of its components.

Yet NIST seems to be saying that it’s really the components of the software that contain the attributes, not the software itself. Or more exactly, they’re saying that the software is nothing but its components.

It’s true that the average software product contains lots of components. Most components are written by third parties, but the “glue” that holds them all together is written by the actual supplier (i.e. the one whose name is on the product that you buy). However, that glue can really be thought of as just other components – except these are components written by the supplier. This means the product itself is literally just a collection of components; NIST seems to take that attitude here.
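
To make that grammar concrete, here is a minimal sketch in Python of what the component-centric reading implies. The attribute names, the data layout and the is_eo_critical function are all mine, purely for illustration – NIST obviously doesn’t publish its definition as code – but the logic follows the sentence quoted above: the product itself is never tested directly; only its components and direct dependencies are.

```python
from dataclasses import dataclass, field
from typing import List, Set

# Shorthand flags paraphrasing NIST's five bullets (the names are mine, not NIST's)
CRITICAL_ATTRIBUTES: Set[str] = {
    "elevated_privilege",          # designed to run with elevated privilege or manage privileges
    "privileged_resource_access",  # direct or privileged access to networking or computing resources
    "controls_data_or_ot",         # designed to control access to data or operational technology
    "critical_to_trust",           # performs a function critical to trust
    "outside_trust_boundary",      # operates outside normal trust boundaries with privileged access
}

@dataclass
class Component:
    name: str
    attributes: Set[str] = field(default_factory=set)

@dataclass
class Product:
    name: str
    components: List[Component]           # includes the supplier's own "glue" code
    direct_dependencies: List[Component]  # libraries/packages directly integrated into the product

def is_eo_critical(product: Product) -> bool:
    """Component-centric reading of the definition: the product is EO-critical if
    any component it contains, or any direct software dependency, has at least one
    of the listed attributes. Note that the product itself is never checked directly."""
    return any(
        comp.attributes & CRITICAL_ATTRIBUTES
        for comp in product.components + product.direct_dependencies
    )
```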

The phrase “or has direct software dependencies upon…” seems to drive this point home. The link (NIST’s, not mine) points to the FAQ that came with the definition, which says

  1. What do you mean by “direct software dependencies” in the definition?

For a given component or product, we mean other software components (e.g., libraries, packages, modules) that are directly integrated into, and necessary for operation of, the software instance in question. This is not a systems definition of dependencies and does not include the interfaces and services of what are otherwise independent products.

This drives home the point that software consists of components, mostly written by third parties but some written by the supplier themselves. But by saying that software is just components, NIST is saying that all software risks reside in components. Ergo, managing software supply chain risk means managing risk for each component of the software – whether written by the supplier themselves or by a third party.

And how will you, Mr/Ms Software User, find out what components are in your software, so you can manage risks in those components? You got it…you need an SBOM!
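
Since you can’t run a check like the one sketched above without knowing what the components actually are, here’s an equally hedged sketch of the first step: pulling the component inventory out of a CycloneDX-style JSON SBOM. The field names follow the CycloneDX JSON layout (adjust for SPDX or whatever format your supplier provides), and the file name in the usage comment is just a placeholder.

```python
import json
from typing import Iterator, Tuple

def list_components(sbom_path: str) -> Iterator[Tuple[str, str, str]]:
    """Enumerate the components declared in a CycloneDX-style JSON SBOM.
    This only tells you *what* is in the product; judging each component's
    privileges, access and vulnerabilities is still up to you."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    for comp in sbom.get("components", []):
        yield comp.get("name", ""), comp.get("version", ""), comp.get("purl", "")

# Hypothetical usage – "my-product-sbom.json" is a placeholder file name:
# for name, version, purl in list_components("my-product-sbom.json"):
#     print(f"{name} {version} ({purl})")
```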

The bottom line is that it appears NIST has expanded the SBOM requirement in the EO. The EO requires software suppliers to provide an SBOM to government customers, when the software meets the definition of “critical software”. However, NIST is saying that the source of risk is really the components of critical software, not the software itself.

This means that government agencies aren’t even going to be able to completely figure out their software risks without having an SBOM for each product that they use. It also means that the final determination of whether a software product is critical or not will require having a current SBOM for it.

Will SBOMs be widely available for most software products by the time all of this comes into effect – in 1-2 years? No. So it’s likely that software risk analysis will continue to be done mostly based on identification of vulnerabilities found in the product itself, not in its third-party components. But will this accelerate the need for SBOMs to become widely available? Absolutely. There’s work to be done.

Speaking of which: Tomorrow you can join the energy SBOM proof of concept, as we learn from people in the healthcare industry who have been working on their PoC since 2018. They have a lot of great lessons to teach us!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, June 27, 2021

Where’s that wooden spike when you really need it?

It always amazes me that people within and without the electric power industry can get so worked up about very low-likelihood threats, while at the same time huge ones are completely ignored. My case in point is an article in the June issue of Control Engineering, a magazine which often has quite good articles on ICS cybersecurity. The article is titled “Throwback attack: Lessons from the Aurora vulnerability”.

The article is pretty good, up until the last section. It starts with a description of the Aurora test conducted at Idaho National Laboratory (INL) in 2007. The test became famous because it succeeded in its goal: get a generator to self-destruct due to a cyberattack. This resulted in a (still) widely-watched video, accompanied ever since by fear that the same thing was going to happen to a large percentage of generators in the US any day now, and we’d all be left in the dark for the next 50 years or so.

I have no problem with the article’s describing that event in detail (in fact, it provides more detail than I’ve seen released in public so far, not that I think this puts the country in any real danger). And the eight steps the author, Daniel E. Capano, recommends that generating plants (and industrial facilities with on-site generators) implement to protect themselves from Aurora attacks are all good practices, although hardly specific ways to prevent an Aurora attack from happening.

But now we get to the last section, titled “Cybersecurity breaches, cautions”. The first paragraph describes Stuxnet, and does a decent job of that – including pointing out that it was a supply chain attack, although the article doesn’t use that term.

But in the second paragraph of that section, the author decides to go into the 2016 Russian attack on the Ukrainian power grid. This attack was on a transmission substation that served part of the city of Kyiv (of course, this is different from the more famous 2015 Ukraine attack on multiple distribution substations). The attack caused an outage of about one hour. There was no other damage reported, although there were some simultaneous Russian attacks on other (non-power grid) targets in Ukraine. These caused a number of IT problems for the Ministry of Finance, the State Treasury and the Pension Fund.

Of course, a one-hour outage in part of a city of over 2 million people is nothing to be dismissed, and the fact that it was caused by a cyberattack made it all the more serious. However, Mr. Capano seems to have gotten his information on the event from comic books, since he says the attack caused “widespread outages and collateral damage.” He goes on to say that “an overlooked item” was that “the worm targeted key pieces of equipment such as PLCs and PCs used for…power generation. Several generators were damaged or destroyed using Aurora-type attacks; transformers and substations were damaged using similar techniques.”

The only thing that’s accurate in this passage is that this was an “overlooked item”. It certainly was overlooked – because it never happened. Sure a single substation was damaged, and perhaps a transformer or two in the substation. But no generating stations or generators were either targeted or damaged – and certainly not with “Aurora-type” attacks. The Aurora vulnerability has nothing to do with anything except rotating generation equipment, and there has never been a successful Aurora attack, other than the one conducted by INL. A transformer or substation could no more be subject to an “Aurora-type” attack than my living room sofa could. Transformers don’t rotate at 1800 rpm, like a lot of generators do (although it might be 1500 rpm in Europe); neither does my sofa.

The author evidently decided that the above misinformation wasn’t enough, so he followed it in the next paragraph with the statement that “The Aurora vulnerability sent shockwaves…after it was revealed in 2009” in response to a FOIA request. A simple Google search would have found plenty of news reports of the test from 2007 (including this video), since it was publicly reported about seven months after it happened. There was no nefarious cover-up by the good folks at INL!

I don’t think Mr. Capano fabricated his story about the Ukraine attack. He was merely following the lead of a well-known consultant who has seemingly blamed Aurora for everything except the Japanese attack on Pearl Harbor. At least three times, I thought I’d finally driven a stake through the heart of this lie, but it keeps coming back. Sad.

However, if you can’t get through the day without worrying about an imminent threat to the power grid, I have a real one for you to chew on: In January 2019, the Director of National Intelligence and heads of the CIA and FBI, in the annual Worldwide Threat Assessment, said the Russians have the ability to bring the grid down “for at least a few hours”, and they’re mapping it so they can accomplish something much worse.

That’s pretty scary, huh? What’s being done about this? What would you say if I told you that this report hasn’t even been investigated? And that the Worldwide Threat Assessment hasn’t even been published since 2019? Is that because there aren’t any more worldwide threats for us to worry about?

There certainly are lots of worldwide threats. But the fact that this report has never been investigated perhaps means that the real threats are domestic. “We have met the enemy, and he is us”, to quote the philosopher I read religiously in my boyhood, Pogo. Where’s Pogo when we need him most? 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, June 24, 2021

The SBOM lifecycle


The SBOM Proof of Concept for the energy sector, conducted under the auspices of the National Telecommunications and Information Administration (NTIA) and Idaho National Laboratory (INL), is kicking off our education phase at our regular biweekly meeting on June 30 at noon ET.

The topic of this meeting is the one that proved to be most popular in the brainstorming session we conducted a couple of meetings ago: The lifecycle of an SBOM, including production by the software supplier and use by their customer. We’ll have two very knowledgeable presenters – the two leaders of the healthcare Proof of Concept for SBOMs, which has been going on since 2018. They’re Jennings Aske, VP and CISO of New York Presbyterian Hospital, and Jim Jacobson, Head of Product & Solution Security for Siemens Healthineers, a leading provider of intelligent devices to the healthcare industry.

If you’d like to be on our mailing list, drop an email to sbomenergypoc@inl.gov. And if you’d like to attend the meeting on June 30 (whether or not you’re on the mailing list), here’s the URL:

Teams link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_MDU1NGVlMGUtZmIwYi00OWUxLWIxZjItNjc5ZDY4ODJlMzI4%40thread.v2/0?context=%7b%22Tid%22%3a%22d6cff1bd-67dd-4ce8-945d-d07dc775672f%22%2c%22Oid%22%3a%22a62b8f72-7ed2-4d55-9358-cfe7b3e4f3ed%22%7d 

Dial-in: +1 202-886-0111,,114057520#  

Other Numbers: https://dialin.teams.microsoft.com/2e8e819f-8605-44d3-a7b9-d176414fe81a?id=114057520

Feel free to drop me an email if you have any questions.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, June 20, 2021

“Better late than never” department, Texas edition

On March 5, former FERC Commissioner Tony Clark wrote a great article about the Texas power grid fiasco. I read it at the time and wanted to write a post about it – just as soon as I was done dealing with my own posts about that subject, and then Colonial Pipeline, and then the EO…etc. I just reread the article (really an opinion piece), and I don’t think it’s lost any of its relevancy in the intervening three months.

Here is the article. For each myth, I have comments (and some disagreement) on some of the points he makes. I’ll let Mr. Clark’s writing speak for itself, so please read it now.

Myth: “The market is working, and no grid could have prevented something like this.” I totally agree with his statements here. I love this sentence of his: “No market should threaten the safety and well-being of citizens.” Amen. The reason markets are in place is to serve the needs of the public. If a market isn’t doing that, it should be fixed; and if it can’t be fixed, it should be abolished. But there’s no reason why the Texas deregulated model has to be completely scrapped. Most other states have figured out how to design markets that work for their citizens, not kill them. Texas can, too.

Myth: “If the Texas grid was just interconnected with the rest of the U.S., everything would have been ok.” I certainly agree with him that, if the ERCOT grid had been connected to the Eastern and/or Western Interconnects, the outage probably wouldn’t have been prevented, since Texas wasn’t the only state experiencing this cold wave. On the other hand,  none of the other states experienced problems anything like those in Texas. They were able to draw power from other states that were prepared for the cold weather; Texas (specifically the ERCOT grid) couldn’t do that.

I continue to believe that having their own grid – mainly in order to free a small number of power producers from having to comply with some federal regulations – is a luxury that Texans can’t afford anymore.

After all, in February there were hundreds of deaths (the highest estimate I saw was around 800) and billions in direct costs, plus the billions in charges that are in dispute and likely to be paid in large part by – of course – the Texas ratepayers and taxpayers. On top of that, there will be some long-term discouragement of new investment in Texas, caused by the cloud of uncertainty that will hang over power rates for probably the next 5-10 years. All of this so a few owners of power plants could save some money by not having to comply with some regulations. Is this a great tradeoff?

There’s another reason why I think the ERCOT grid should be connected to one of the other Interconnects: I’ve been advocating that there needs to be partial federal funding for grid cybersecurity investments (and there’s clearly a need for federal dollars for non-cyber grid security investments as well. There was some in the pandemic recovery act, and there will likely be a lot more in the infrastructure act).

The reason why this is a legitimate federal expenditure is that investment in an interconnected grid benefits the whole country, since the resources in one area back up those of another area that is experiencing a temporary problem (of course, this is why interstate highway upgrades get federal funding, even though some of them – especially “spurs” going to a particular destination off the highway – primarily benefit local residents).

But guess what? Investment in the ERCOT grid almost entirely benefits citizens of Texas (and not all of them, since areas including El Paso and northeastern Texas aren’t part of ERCOT), because grid resources in ERCOT aren’t immediately available to relieve a shortage in another area like the Southeast US or Oklahoma (of course, by “immediately” I mean within a second or so. There are DC ties between ERCOT and other grids, but they have to be activated manually, which takes too long to avert a cascading failure. In the 2003 Northeast blackout, it took less than six minutes for the disturbance to change from a local problem in northern Ohio to a complete shutdown of the grid in much of the US Northeast and Upper Midwest, as well as almost all of Ontario).

So I would advocate that further federal investment in the ERCOT grid – at least for reliability purposes – should be made contingent on ERCOT joining either the Eastern or Western Interconnects (but not both. That would cause a big control problem, since the US simply isn’t set up now to be a single national grid).

Myth: This would not have happened if Texas and California had “capacity markets.” Commissioner Clark argues that capacity markets wouldn’t alone have prevented the February outages, which I agree with. On the other hand, I don’t think there’s much question that capacity markets are an important component of a long-term solution to the problems that caused the outages, and I doubt he would disagree with that.

Myth: It’s renewables' fault. I completely agree with what he says on this topic. It was ridiculous for Gov. Abbott to immediately blame renewables after the February event. Some wind turbines froze up during the outage, but it wasn’t even half of them. And if every single wind turbine in Texas had been disabled, that alone could never have led to the outage, since wind energy only constitutes about 5% of the Texas power supply (although that’s a huge percentage, compared to most other states).

Myth: This is all just about freak cold weather. I totally agree with what he says here.

And I love his closing comment: “The analyses are just emerging when it comes to the tragedy of recent weeks, but one thing is certain: Resilience is crucial. This is why the regulatory tools that have worked well in the past are best positioned to meet the challenges of the future.”

My comment on this comment? Amen.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, June 16, 2021

What is “critical software”?


From my point of view (i.e. the only completely unbiased point of view I know of – and I can make that judgment since I’m unbiased), the biggest question mark about the May 12 Executive Order is “What is critical software?” This is important, because the requirements for software suppliers in section (e) of Section 4 apply to critical software (although the EO is somewhat vague on whether or not only critical software is in scope).

The EO orders NIST to develop a definition of that term (paragraph (g) of Section 4, pages 15-16), but then very helpfully goes on to say, “...definition shall reflect the level of privilege or access required to function, integration and dependencies with other software, direct access to networking and computing resources, performance of a function critical to trust, and potential for harm if compromised.” And not only that, but the EO had already defined critical software (p. 12 section (a)) as “software that performs functions critical to trust (such as affording or requiring elevated system privileges or direct access to networking and computing resources).”

Poor NIST. The White House is saying “We give you complete freedom to define critical software, as long as you use one of these two definitions.” The two definitions more or less say the same thing: “Critical software is software whose exploit by a bad guy could cause a lot of bad things to happen, due to the nature of the software and how it’s installed and the privileged access it receives.” In other words, “We want to prevent another SolarWinds from happening, so we’re going to regulate the hell out of anything that looks or smells like SolarWinds.”

I can’t particularly blame the WH for taking that attitude: After all, it’s a military tradition to be ready to fight the last war, not the one you face now. But it did occur to me that this isn’t the way we usually think of critical assets (hardware and software), especially in the electric power industry. For example I would think of all of the following as critical software, even though I doubt any of it runs at high privilege levels:

·        The software that runs the SWIFT system for international money transfers

·        The software that controls the operation of a factory

·        The software that runs a nuclear power plant

·        The Energy Management System (EMS) software, running in the control centers of electric utilities, that balances power supply and load (demand) in real time for a particular region like a major city

·        The software that runs the NY subway system

In other words, I think the function of a piece of software can make it just as critical as the privilege level it runs at.

But there’s another thing that was left out of the EO definition, which I pointed out in the post I wrote the day after the EO came out: intelligent devices. These have become more and more important in our lives and work, and they perform lots of critical functions now. Surely some of these should be included as “critical software” – but I couldn’t see how you could stretch the definition of “software” to cover devices.

But in this, I overlooked the close-to-unlimited ability of government to stretch the meanings of words through regulation! The FDA performed a valuable service to me and the cybersecurity world by pointing out, in their response to NIST on the EO, that the Federal Food, Drug, and Cosmetic Act (FDCA) says that some software is “software that meets the definition of device…”

In other words, devices are software because an Act of Congress made them so! Problem solved (although I must admit to not quite understanding what this means. Essentially, the FDA is saying “Not all software is soft. Some software is hard and made out of metal, chips, wires, etc.” This seems to me like saying “Not all dogs bark and wag their tails. Some dogs mew and use the litter box.” It seems to me that, just as it would be easier to say “Some household pets are dogs and some are cats”, it would be easier to say “Some software runs on general purpose devices like Intel-standard servers, and other software runs on dedicated, sealed devices like infusion pumps in hospitals.” As opposed to saying the devices themselves are software. But then, what do I know?).

In any case, the FDA goes on to say that ‘software is “critical software” generally (i) where it meets the definition of device and (ii) where the software is necessary for the safe and effective use of a device.’ In other words, the FDA wants to discard the EO definition of “critical software”, and just have the term apply to devices (of course, mainly medical devices, since only those are defined as software in the FDCA) and the software in those devices.

I don’t disagree with the FDA that devices and the software they run should be called critical software, although I would prefer expanding the term to “critical software and devices”. In that term, I would include:

1.      Software running at elevated privilege levels, as in the EO’s definition

2.      Devices performing critical functions and the software included in them. This covers more than just medical devices. For example, the electronic relays found in just about every electric substation worldwide are crucial to the safe and reliable operation of the power grid.

3.      Software that performs critical functions using general-purpose hardware, such as the five items I mentioned above.

See? Now everybody can be happy: the White House, the FDA, me. What more could you ask for?

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Tuesday, June 15, 2021

Reminder: Webinar on “Cyber Security Perspectives on the Executive Order”

The Executive Order on “Improving the Nation’s Cybersecurity” seemed very important to me when it was issued on May 12. And now that I’ve had a month to discuss it with others, it seems…more important than ever. As luck (and a timely suggestion) would have it, I’ll be participating in a webinar on the EO next Tuesday. The speakers form a great not-the-usual-talking-heads group, all of whom will have very interesting perspectives on the issue of software supply chain cybersecurity – which IMHO is the number one source of supply chain cybersecurity threats nowadays, and perhaps the number two cybersecurity threat worldwide, after ransomware.

I can almost guarantee you’ve never heard from any of the panelists before (other than me, of course. I can’t help that), and that you probably haven’t even heard of some of the things they’re going to talk about. But that’s good – I can also almost guarantee that before last December, you had never even considered the idea that the SolarWinds software development process could be the locus of the most consequential cyberattack on the US in many years, if not ever. I certainly hadn’t, but here we are…

The speakers and their topics are:

·        Cole Kennedy, Director of Defense Initiatives, BoxBoat – on why software bills of materials are needed.

·       Jon Meadows, Managing Director, Citi – on verifying software. Jon’s title doesn’t tell you much about him, but from having talked with him, I can guarantee that he has a great understanding of supply chain security – especially the software supply chain – and how the different parts all fit together. He’s not to be missed.

·        Rob Slaughter, CEO, Defense Unicorns – on DoD’s Platform One DevSecOps platform.

·       Andres Vega of VMware. He’s in charge of product security for VMware’s Tanzu platform and will discuss how VMware is addressing the EO.

·       Me – on “critical software”, perhaps the most important term in the EO. This is quite controversial, and many different groups are weighing in on how this should be interpreted (the EO asks NIST to define it). Of course, they're doing this because of the big consequences of the decision on what will and won't fall within that definition. However, I’m happy to report that all parties agree on one thing: The EO’s own suggestion for how “critical software” should be defined misses the point. But they all have a different idea of what "the point" should be.

Signup is here. I hope to see you next Tuesday!

 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, June 10, 2021

Tim Roxey on the “end of days”


Note from Tom: I’ve moved my email feed from FeedBurner (who’s getting out of this business in July) to Follow.It. If you aren’t getting my posts by email anymore, just hit the Subscribe button in the top right. And if you’d like to start receiving these posts in your email inbox, also hit the Subscribe button.

The day after I put up my post describing why it’s literally impossible for a single cyberattack (or a single set of coordinated cyberattacks) to shut down the entire US grid, I was pleased to receive an email from Tim Roxey, former NERC VP and CSO, on the subject. As usual, he brought a really interesting perspective to the topic.

To briefly summarize my post, I said that

a)      You can divide the assets in the Bulk Electric System into three types: generation, distribution substations and control centers, and transmission substations and control centers.

b)     Generation and distribution are fairly easily dismissed as attack vectors, leaving transmission substations and control centers as the likeliest vectors.

c)      However, I showed that penetrating the control systems in transmission substations and control centers would be extremely hard (and has never been accomplished in North America), even in the case of a single asset, due in part to the rigorous controls required by the NERC CIP standards.

d)     But attacking a single asset won’t get you very far if your goal is to bring down the entire grid. I estimated that you’d have to carry out a very well-coordinated attack on at least 40 transmission assets (10 in ERCOT, and 15 each in the Eastern and Western Interconnects. And if you want to include Quebec in your continent-wide blackout, then you have to add at least 10 assets there, since Quebec has its own grid – plus I know it’s stretching the truth a lot to say that Alberta is connected to the Western Interconnect. I believe it’s just in recent years that there’s been any connection at all, and even now I think it’s just one line. You’d probably have to attack at least 10 assets in Alberta as well, for a total of at least 60 all told), and even that is probably a woeful underestimate.

e)     I said this would be simply impossible. My reasoning – which I should have stated – was that OT networks are incredibly diverse in the power industry. The devices on the networks are quite variable, as are the configurations and technologies behind the networks.

Of course, other industries might consider it very inefficient to have so much diversity, since it means that suppliers can’t realize the huge economies of scale that for example Dell, HP and Cisco have realized on the IT side. There’s no doubt this is true, but at the same time it makes it literally impossible for the grid to be the subject of a massive, coordinated attack.

This diversity wasn’t planned, of course. It just happened because decision-making is so decentralized in the power industry. I’ve always said that planning is great, but in the end there’s no substitute for dumb luck! The industry - and North American power users - have benefited greatly from that dumb luck.

Tim wrote in to say he agreed with my general argument, but (and here I’m paraphrasing him) I’d overlooked another type of asset: IT assets. In fact, the only thing that generation, distribution and transmission operations have in common is that they all rely on IT assets, not just OT ones. A coordinated attack on IT assets throughout the industry could conceivably be the vector for a takedown of the entire North American power grid.

Tim’s point was that, since it would be normal to expect IT networks to be fairly homogeneous, those networks – and the devices attached to them – might well be the vector that would enable an attack to occur. However, once again the power industry has saved itself because of diversity. This time, it’s not diversity in the technologies involved in the IT networks – they literally all run IP, I’m sure, on Intel-standard devices. There’s no DECnet or Novell IPX anymore, although I can remember when these were present in lots of IT networks. And the machines on the networks almost all run Windows, with some Linux. No MS-DOS, MacOS, VMS, etc.

So where does the diversity come from? It’s in the network architecture. Different utilities have adopted network segmentation to different degrees, firewall off different areas of their networks, use different WAN technologies, etc. None of this is great for pure efficiency, but it’s great for preventing a small number of hackers from carrying out a massive, simultaneous attack on lots of different grid assets in every Interconnect. And it would take such an attack to bring down the US (or North American) grid in its entirety.
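
To put toy numbers on why that diversity matters – and this is purely an illustration of the scaling, not a real risk model; every figure in it is made up – here’s a short Python sketch:

```python
def p_all_assets_compromised(p_exploit_works: float, environment_of_asset: list) -> float:
    """Toy model: assume the attacker needs one working exploit per *distinct*
    environment (firewall model, OS build, network architecture), that identical
    environments fall to the same exploit, and that each exploit independently
    works with probability p_exploit_works. Compromising every asset at the same
    moment then requires every needed exploit to work."""
    distinct_environments = set(environment_of_asset)
    return p_exploit_works ** len(distinct_environments)

# 60 transmission assets built exactly the same way: one exploit has to work.
print(p_all_assets_compromised(0.5, ["same_everywhere"] * 60))               # 0.5
# 60 assets spread across 40 materially different environments: 40 exploits must all work.
print(p_all_assets_compromised(0.5, [f"env_{i % 40}" for i in range(60)]))   # about 9e-13
```

The second number is the whole point: force the attacker to build forty different exploits that all have to work at the same moment, and the massive coordinated attack scenario collapses.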

Here is what Tim wrote. Lots of wisdom in here!

Tom Yes – Scalability is directly related to variability in the environment. Very little variation – broader span. Larger variability, then more effort for each unique piece of variability. 

 

If an environment is very homogeneous, then a successful exploit at one interface is likely useful at a second or third interface.

 

1.      Homogenous is bad.

a)      Network architecture using the same make and model for all switches, hubs, routers, servers, etc.

b)     Desktop environment consistent across the enterprise.

c)      Lack of principle of least privilege. 

d)     Lack of application White Listing.

 

If an environment is Heterogeneous, then a successful exploit at one interface does not necessarily mean it will work at a second or third interface.  

 

2.      Heterogeneity is good. 

a)      Network architecture mixed with different vendors supplying parts of the environment. 

b)     A desktop environment consisting of different Operating Systems. 

c)      Full implementation of the principle of least privilege 

d)     Full implementation of application whitelisting

 

In number 1, the Adversary only needs to understand one (or a few) different types of network technology. Perhaps the same firewalls are used everywhere for segmentation. In this case, the same exploit used for layer 1 is useful for layers 2 and 3.

 

If the victim changes firewalls at every boundary level, then the Adversary must deal with a different set of exploits for each of the different levels. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Tuesday, June 8, 2021

No, a cyberattack isn’t going to shut down the power grid


Note from Tom: I’ve moved my email feed from FeedBurner (who’s getting out of this business in July) to Follow.It. If you aren’t getting my posts anymore, just hit the Subscribe button in the top right. And if you’d like to start receiving these posts in your email inbox, also hit the Subscribe button.

On Monday, CNN published a story that led off with this:

Energy Secretary Jennifer Granholm on Sunday warned in stark terms that the US power grid is vulnerable to attacks.

Asked By CNN's Jake Tapper on "State of the Union" whether the nation's adversaries have the capability of shutting it down, Granholm said: "Yeah, they do."

"There are thousands of attacks on all aspects of the energy sector and the private sector generally," she said, adding, "It's happening all the time. This is why the private sector and the public sector have to work together."

I’m sure Secretary Granholm meant well when she said that – not wanting to lull people into thinking the problem of grid security was solved, trying to prime the pump for more cybersecurity spending, etc. But the fact is that adversaries don’t have the capability to shut down the “US grid” with a cyberattack – or even multiple simultaneous attacks.

Period.

I don’t think anyone at all familiar with how the electric power industry works in the US will be surprised by this statement. But a lot of other people really do think this is possible, motivated as far as I know by movie plots. You might sell a lot of tickets if you show the whole US grid collapsing, but you have to classify the movie as fantasy, because that’s what it is. Here are some of the major reasons why I say a cyberattack that would take out the whole or even a large portion of the US grid - hell, even just 3 or 4 states - is about as probable as the discovery of Bigfoot in a Wall Street bank:

·        The US participates in three completely disconnected AC grids: The Eastern and Western Interconnects and ERCOT, which covers a large portion of Texas (Quebec also has its own grid).

·        To bring down the US grid, you would have to launch devastating attacks on all three Interconnects.

·        There’s no single point – or even 4 or 5 points – that you could attack to bring down a whole Interconnect. So in each Interconnect, you would have to launch devastating attacks on a number of assets at exactly the same time. And they would all have to be the same type of asset: generating plants, distribution substations and control centers, or transmission substations and control centers.

·        Forget about causing a cascading outage by attacking generating plants. See my quote at the end of this 2018 E&E News article and the post I wrote on the subject shortly afterwards.

·        And forget distribution substations and control centers.

·        This leaves transmission substations and control centers. In theory, if you were to penetrate enough control centers and substations in each Interconnect, you might cause a widespread cascading outage. How many? I’d guess at least ten per Interconnect, but it’s probably more than that (certainly in the Eastern and Western Interconnects, perhaps not in ERCOT).

·        But you really can’t attack transmission substations. Their control systems are virtually never connected to the internet. They’re always connected to a control center, though, and control centers are almost all connected to the internet.

·        So how do you get into a control center? Download a script from the dark web, type in an IP address (handily displayed on a utility’s web site, since as we all know utilities are quite happy to give you all the information you’d possibly need to attack them 😊), and hit Go (or whatever the button is called. I haven’t launched any devastating grid attacks lately, so I can’t remember what the button says)?

·        I regret to say it’s a lot harder than that. In fact, the sharpest attackers are constantly pounding on transmission and distribution control centers, and there’s never been a successful cyberattack on a single one in North America (or, for that matter, in much of the rest of the world). In part because control centers have been protected by really tough cyber regulations for almost 20 years (by NERC CIP since 2009, and by NERC Urgent Action 1200 and 1300 before then), and in part because everyone understands that they’re really crucial, you ain’t going to get in, period. And you certainly aren’t going to get into ten of them (per Interconnect).

Of course, causing a purely local outage (e.g. in the area served by a single line or substation) is much more plausible through a cyberattack – but again, it’s never happened in North America, and is very unlikely to. However, local outages happen all the time. Storms and squirrels are by far the biggest causes of those.

But this isn’t to say a total US grid collapse is inconceivable. An EMP event could conceivably do it. Or a solar storm – perhaps the size of the Carrington Event, which hit the US in 1859, before there was any electric infrastructure besides telegraph wires. Either of these would be devastating. In fact, a US government commission in 2008 said that, in the event of an EMP-caused total grid collapse with an outage lasting a year, 66-90% of the US population would not just be badly inconvenienced. They would die.

So if you want to worry about a devastating grid attack, worry about EMP. And ask Vladimir Putin, Kim Jong-un and Xi Jinping (and maybe Ayatollah Khamenei in a few years) to please not cause one.

Note 6/13: Tim Roxey made excellent comments - and then some - on this post. You can find them here.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, June 7, 2021

A presentation you shouldn't miss

Note from Tom: I’ve moved my email feed from FeedBurner (who’s getting out of this business in July) to Follow.It. If you aren’t getting my posts anymore, just hit the Subscribe button in the top right. And if you’d like to start receiving these posts in your email inbox, also hit the Subscribe button.

I just received an invitation from an organization whose meetings I’ve attended only once, but found quite good. This is the Software and Supply Chain Assurance forum, a group that includes a lot of government cybersecurity people. They deal with what IMHO are the two biggest problems in cybersecurity today: software security and supply chain cybersecurity. Moreover, they’ve been doing this since 2010 (long before I even thought about supply chain security, to be honest). I wrote about them in this post in 2018.

This invitation is to a virtual meeting on June 16th. I was especially interested in this meeting because Cheri Caddy will be speaking. She is Senior Advisor, Cybersecurity, in the Office of Cybersecurity, Energy Security and Emergency Response (CESER), of the Department of Energy. I have gotten to know her (I won’t say well yet, but I hope to be able to later) because she played a big role in getting our Energy SBOM Proof of Concept off the ground. Moreover, she provided the resources of Idaho National Laboratory – including my co-leader, Virginia Wright – to make the PoC successful (and with over 30 electric utilities and other power industry players, five major industry organizations, and over ten software and device suppliers to the power industry represented – along with a number of service and tool providers – I can safely say that the PoC is well on the road to being successful, although it won’t be a short or easy road).

Cheri will “describe DOE’s programs for working with operational technology manufacturers and energy sector asset owners to discover, mitigate, and engineer out cyber vulnerabilities in digital components in Energy Sector critical supply chains.” I’m looking forward to this, and recommend you try to attend as well. DOE is doing some pretty amazing things, especially in supply chain security.

 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Saturday, June 5, 2021

Software is developed everywhere. It's developed by everybody.


Many people – including the ones who wrote the recent Executive Order – think that an important component of software supply chain risk is provenance, meaning where the software came from. Of course, the idea behind that belief is certainly understandable: there has been a lot of concern about the idea of malicious actors in Russia, China, North Korea, Iran, Cuba, Venezuela – you name it – planting a backdoor or logic bomb in software used by government or private organizations in the US, causing havoc on the scale of the SolarWinds attacks.

This is why many people think it is important that critical infrastructure and government organizations in the US should inventory all of the software they use, as well as all of the components of the software they use (of course, you need SBOMs to do that!), and identify which of these originate in Bad Guy countries – or which might perhaps have been developed by an organization that is controlled by or under the influence of a Bad Guy country. Then they should at least consider removing any software they identify from their environment, or at the very least make sure they don’t buy any more of this Bad Guy software.

There’s only one problem with this way of thinking: with one exception, I know of no instance in which a Bad Guy nation-state has actually carried out a software supply chain attack on US interests that could have been short-circuited by conducting such a provenance analysis (that one exception is of course Kaspersky, which at the time of the attack was located in Russia and whose founder had links to the Russian security services. They were alleged to be behind the attack on an NSA employee, who had stolen some of the NSA’s most potent malware – including the malware behind NotPetya – and stored it on his home computer, which ran Kaspersky’s antivirus software. Kaspersky swore up and down that this wasn’t the case, but I’m certainly willing to stipulate that the Russians had planted a backdoor in the Kaspersky software, which let them penetrate the NSA employee’s computer and exfiltrate the NSA’s malware weapons).

Of course, I’m sure there have been plenty of cases where a software company in a Bad Guy country has sold software that contains vulnerabilities to US customers. But every software company everywhere does that – there’s simply no way to ship vulnerability-free software. In fact, given that I’m sure most of the software used in the US is developed by American companies, it is quite likely that the biggest source of vulnerability-laden software to American organizations is…American software companies.

What about SolarWinds, you ask? They certainly shipped vulnerability-laden software to many companies and government organizations (18,000, in case you’re keeping score at home), in the US and abroad. And those vulnerabilities were planted by a Russian team of about 1,000 people (using Microsoft’s estimate) that deployed what may be the most sophisticated piece of malware ever. Surely this counts as a state-sponsored software supply chain attack that could have been prevented by provenance controls!

The problem with that narrative is that, the last time I checked, SolarWinds is a US company. Aha! But what about their suppliers? They used contract developers in Eastern Europe, for God’s sake. Surely they had a hand in this nefarious deed? That’s a good question, and one that I raised in one of my many posts after the SolarWinds attacks were discovered.

But as the article linked in the previous paragraph describes, those developers didn’t have their hands anywhere near this attack. It was carried out from servers located in the US that were controlled by the Russians, and the attack was on the SolarWinds Orion build environment, which was physically located in the US. The malware-laden Orion updates were all digitally signed by SolarWinds. How could provenance controls have prevented this attack?

There was another important Bad Guy connection that I read about in the New York Times in early January: SolarWinds was a big user – as are many other large US companies – of a development tool made by JetBrains, a company founded by Russian developers, with major development offices in – get this – Moscow. In this post, I wondered (as the Times had) whether JetBrains might have been the vehicle for the attack on the SolarWinds build environment, although I stopped short of saying that was likely.

However, 12 days later – without any additional evidence – I said in another post “Of course, it was recently learned that the Russians did penetrate a very widely-used development tool called JetBrains. And one of JetBrains’ customers was in fact SolarWinds.” Three days later I received an email from a very polite public relations person for JetBrains in Moscow, asking me to please retract that statement, since JetBrains had just recently put out their own statement firmly denying any role in the attack. Of course I did that and apologized to JetBrains in a new post that day.

In that post, I pointed to a lack of care as the reason for my misstatement. However, four days – and more introspection – later, I confessed to the real reason: I was prejudiced against Russian companies because dontcha know they must all be captives of the Russian state, just like I believed Kaspersky was. And if I’d been in charge of those things, I might have banned all software sold by Russian companies from installation on important networks – which just goes to show that you shouldn’t put me in charge of those things.

The fact is that, if we start banning software from particular countries just because we think the governments of those countries are out to damage the US (and there isn’t a lot of doubt in my mind that the Russian government fits that description), we’ll end up damaging our own companies. JetBrains didn’t get its huge worldwide market share because it’s a so-so product; by all accounts (and I’m certainly not competent to judge this), it’s a very strategic tool for developers like Oracle and Microsoft (as well as SolarWinds, to be sure). Were we to prohibit US companies from buying JetBrains, we would be putting them at a competitive disadvantage vs. companies headquartered in other countries.

But most importantly, the whole idea that software “comes from” a particular country is now obsolete. Nowadays, software – other than perhaps software developed for DoD and the 3-letter agencies - is developed by teams of people from all over the world who collaborate online to develop the product. Sure, they might mostly (or even all) be employees of a company located in Country X, but a large percentage of them will almost undoubtedly not be citizens of X and probably not be located there, either.

Matt Wyckhouse of Finite State, in a presentation last fall, pointed out that Siemens – a huge German company that does a lot of business in the US, especially with critical infrastructure organizations – has 21 R&D hubs in China and employs about 5,000 Chinese staff there. Does this mean that Siemens software poses enough risk that you should consider removing it (along with whatever Siemens hardware it supports, of course) from your environment? After all, there’s a good chance that at least a portion of any Siemens software that you buy was developed in China.

If you’re DoD, the answer might be yes – i.e. DoD might decide (or have decided) not to accept that risk, however small it is – and to pay the undoubtedly high cost of finding and installing equally functional alternatives to whatever Siemens software they’re not buying. For almost any other company, the answer IMHO should be no. There’s simply no justification for subjecting your organization to the time and expense required to find an alternative to Siemens, given the very low provenance risk posed by their software.

So we all need to stop thinking of software supply chain risk as being somehow tied to where the software was developed, or the nationality of the people who developed it. If you’ll promise not to do that anymore, I’ll do the same. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, June 2, 2021

Upcoming webinar on the Executive Order


Note from Tom: FeedBurner, which has been sending out the emails with my blog posts since I started the blog in 2013, will stop doing this in July. I have engaged Follow.It to take over that service immediately (they also provide other features you can take advantage of, as you can see on their website). If you are currently receiving the FeedBurner emails, your address has been transferred to Follow.It. And if you’re not receiving emails now, you can remedy that problem by signing up using the link in the top right of this post.

I’m going to leave the FeedBurner feed active for about a week, so you’ll get both feeds during that time. If you receive the FeedBurner feed for this post but you didn’t also receive the Follow.It feed, please drop me an email and I’ll add you to the latter. In theory, this shouldn’t happen, but one never knows…

 

I will be participating in a webinar on June 22, sponsored by BoxBoat Technologies. The subject is the new Executive Order. The list of speakers is – well – quite diverse:

- Cole Kennedy, Director of Defense Initiatives, BoxBoat
- Tom Alrich - Independent consultant and volunteer co-leader, NTIA SBOM Energy Proof of Concept
- Andres Vega - Product Security - Tanzu, VMware
- Rob Slaughter - CEO, Defense Unicorns
- Jon Meadows - Managing Director, Citi

If you’re wondering what Citi, VMware, BoxBoat Technologies, Tom Alrich LLC and Defense Unicorns have in common, I’ll be honest: very little. And that should make for a very interesting discussion! Here is the signup link. I hope to see you there!

 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.