Friday, July 30, 2021

Allan is moving!

Dr. Allan Friedman, who has been running the Software Component Transparency (SBOM) Initiative of the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce since its inception in 2018, announced recently that later in August he will move from NTIA to CISA (the Cybersecurity and Infrastructure Security Agency) in DHS.

Every time Allan has announced this (and I'm sure he's done it at least ten times in different meetings of the Initiative – including our Energy Proof of Concept meeting on Wednesday of this week), he has immediately followed it by saying that he will still be completely involved in the work of the Initiative. But the Initiative will change, simply because CISA is a very different organization from NTIA.

Of course, it’s too early to know how it will change. Allan has promised (I believe him, too) that the whole group involved in the Initiative (there must be at least 200 people who attend at least one of the meetings in any given month, including from Europe and Japan) will meet with him in September to decide the way forward. This isn’t to say it will be a democratic process, but at least people will have their input.

Allan has pointed out many times over the past two weeks that the Initiative started from just about zero in 2018, and now has built up a substantial body of experience, knowledge and especially written guidance about SBOMs. This couldn't have happened without the NTIA's approach to launching a new technology (as they did with DNS in the 1980s and 1990s, and as they're now doing with 5G).

To launch a new technology, NTIA doesn’t gather a bunch of wise people in a room (virtual or otherwise), who scratch their chins, offer profound thoughts, develop a very thoroughly-researched document describing in great detail all of the ins-and-outs and do’s-and-don’ts of the new technology, then go home and congratulate themselves on a job well done - whether or not anybody’s even looking at what they’ve written.

Rather, the NTIA gets the actual stakeholders together to figure out what’s needed for the new technology to succeed, and what’s the best way to get there; there are no preconditions, and all meetings and documents are completely public. In the case of SBOMs, a key tool is the industry-focused Proofs of Concept, of which there are currently three (healthcare, autos and energy). It’s possible the three PoCs will remain under NTIA’s auspices, simply because they’re working well and there’s no reason to mess with a good thing (the energy PoC is especially fortunate, since Idaho National Labs is providing support in many ways, including the web site and Ginger Wright, my very able co-leader in the effort). Of course, Allan will be able to participate in the meetings, no matter what agency they’re “under”.

So if everything was going so well, why is Allan making this switch? I believe (without having discussed it with him yet) that he looked at the number of cybersecurity professionals inside NTIA – a small number, certainly – vs. the number inside CISA (CISA had about 3400 employees last year, and I’m sure that number has already jumped a lot, especially as they keep getting more jobs added to their portfolio). And he saw that both he and the SBOM “movement” (cult?) can expand in all sorts of ways if they’re part of CISA, that they couldn’t even dream of under NTIA. There are some really huge possibilities, and Allan has just begun to explore them.

Good luck in the new gig, Allan! A new world is opening up for you and for SBOMs.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration's Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, July 26, 2021

Two wise men weigh in on Colonial’s billing system


My post on the billing system at Colonial Pipeline brought out great comments from two wise men of the power industry cybersecurity world: Kevin Perry and Tim Roxey. As you’ll see, they didn’t say the same thing at all, but they didn’t contradict each other, either. Rather, Tim’s comments built on Kevin’s.

Here’s a quick summary of my previous post, although I hope you’ll read it if you haven’t yet:

·        Even though the ransomware attack never reached Colonial’s OT network, it did bring down their billing system.

·        And even though it might seem odd that the loss of the billing system could bring down pipeline operations, there were actually good reasons why that happened (which I’ll let you read).

·        I concluded by pointing out that “Tom’s First Law of OT Networks says that an ‘operations-focused’ company – as opposed to an information-focused company like an insurance company or a consulting firm – will be forced to bring their OT network down if their IT network falls victim to a ransomware attack.”

I stand by what I said, but Kevin’s and Tim’s email comments made me realize that I hadn’t asked the more interesting questions:

1.      How can we identify systems that don’t directly control operations, yet can have a huge impact on operations just the same (i.e., IT systems that perform functions required for operations)? And once we’ve identified them, what measures can we take to protect them better than other systems on the IT network that clearly have no direct operational impact, like, say, the systems that run the utility’s charitable operations?

2.      Should those systems be regulated by OT-focused cybersecurity compliance regimes, such as the dreaded…(and here I cross myself, despite not being Catholic)…NERC CIP?

3.      Or maybe we need to go beyond all this talk about regulation and protecting systems, and think about what the real problem might be?

Briefly, Kevin addressed questions 1 and 2; Tim took question 3 (not that I even thought of these questions until now, of course). I’ll start with what Kevin said, and cover what Tim said in my next post.

On Thursday, Kevin wrote this to me:

I would argue that any “IT” system, or system component, that is essential to keeping the OT operational needs to be considered OT and kept isolated from the rest of the IT world.  As you noted, electric metering, whether at the customer point of delivery or in a tie substation, is OT.  The data from the meters are fed into the IT billing systems.  If the billing systems are down, bills will be delayed, but the meter data collection will continue until it can be transferred to the billing systems.  It is inexcusable that the OT must be shut down because an essential IT system is down.

Here are the points that I infer Kevin is making:

1.      This problem wouldn’t have happened in the electric power industry, since an electric utility's operations (including metering) can continue, even when the bills can’t be generated (no pun intended).

2.      The billing system is “essential to operations” in the pipeline industry (or at least in Colonial’s case), although not in the electric power industry (meaning it isn’t a BES Cyber System, or BCS).

3.      If there were a cyber regulatory regime like NERC CIP in place in the pipeline industry, the billing system would need to be considered the equivalent of a BCS.

4.      Regulation or no, the pipeline industry should protect their billing systems using at least some of the same measures (including isolation) used to protect OT systems.

I responded to Kevin’s email with the question, “If you think certain IT systems should be isolated, would you favor an expansion of the CIP standards to require network isolation, as well as perhaps some (although not necessarily all) of the other CIP requirements?”

I want to make one point here: CIP already covers a large group of systems that many electric utilities consider to be part of IT, not OT. Those are systems located in Control Centers. While these systems certainly perform an OT (and in many cases BES) function, they aren’t Industrial Control Systems, since they’re implemented on standard Intel-based hardware and run standard IT operating systems: Windows™ and Linux. A lot of the management that needs to be done on them is the same as what needs to be done for, say, financial systems.

And interestingly enough, Control Centers aren’t included in NERC’s 80-page “definition of the BES”. That definition requires an asset to be connected to the grid at 100kV or higher. The only reason systems in Control Centers are even included in CIP is because Control Centers are specifically called out in CIP-002 R1.1. So it wouldn’t be unprecedented if other “IT systems” were in scope for CIP, although CIP-002 would have to be amended for that to happen.

Kevin (a member of the NERC teams that drafted Urgent Action 1200, the CIP predecessor, as well as CIP versions 1 and 2, and who was then Chief CIP Auditor for the SPP Regional Entity for about ten years, until his retirement in 2018) replied to my email by saying:

A proper CIP-002 assessment of all Cyber Assets linked to the proper functioning of the readily identifiable OT should be sufficient.  In the early days, some entities tried to move systems out of scope simply by moving them out of the ESP (Electronic Security Perimeter).  My team always took a hard look at the historians that were outside the ESP and also their map board display systems.  Most entities simply used their historians for temporal data storage and non-real time engineering analysis, and keeping them out of scope was OK.  

But I am also aware of at least one entity that used their historian to drive their map board displays and also used the historian data for real-time decision making.  Their historians were Critical Cyber Assets (now BCS) because they were used for real-time operations.  At least one entity had map board displays that were not readily available on the dispatcher console, thus the map board also became a CCA/BCS.  And my team did not stop with systems used for the entity’s real-time operations.  An entity who declared their ICCP servers out of scope because they were not using the outbound data (destined for their RC or another BA or TOP) themselves found their decision frowned upon.  Even though they might not be receiving real-time data from a remote association, they were supplying real-time data essential to the recipient(s).  When they argued to the contrary, my team referred them to the TOP and IRO standards that compelled them to send what was initially known as “Appendix 4B” data.

 

So, apply the same logic to the billing system and you will see the meter data collection subsystem is absolutely a BCS if its failure causes you to shut down your OT (SCADA/EMS) systems.  The part of the billing system that sends the invoices and payments is not.  Processing invoices and payments can wait until you get that system back up.

Here is what I take away from Kevin’s reply: he doesn’t favor expanding the CIP requirements to include systems located on the IT network, because if a system on the IT network meets the definition of BES Cyber System (which the different examples he used all do, even though the entities that operate them hadn’t classified them as such), it must already be treated as a BCS, including being located within the ESP (i.e. the OT network). Of course, this only applies at Medium and High impact BES assets. Low impact assets aren’t required to have ESPs.

So a system like the pipeline billing system – if it existed in the electric power world – would need to be treated as a BES Cyber System, subject to all the privileges (?) attendant on that august designation.

I then asked Kevin whether he thinks utilities should designate their meter data collection systems as BCS. His answer was nuanced, yet at the same time quite clear:

Inconsistent.  The meter data loss does not impact reliability within 15 minutes (Tom’s note: The definition of BES Cyber Asset/BES Cyber System requires that the loss or misuse of the system would have an impact on the Bulk Electric System within 15 minutes. If it has an impact but it will usually take longer than that to happen, it’s not a BCS).  But it also does not cause the utility to shut down the grid.  Loss of telemetry does not stop the revenue-quality meter from collecting data.  Loss of the meter itself does not stop the flow of electricity.  There are procedures for dealing with an occasional failure, including redundancy and inter-utility meter data reconciliation.

If the meter is only a revenue meter, then it does not need to be a BCS.  If the meter also reports real-time flows and/or voltage, then it is a BCS.  That is what I meant by inconsistent.

So Kevin is saying that, given the current NERC CIP requirements, there are only two choices: The meter data collection system is a BCS or it’s not. If it’s a BCS, it doesn’t get any break from any other BCS, in terms of the number or types of requirements that apply to it. If it’s not a BCS, it’s completely out of scope for CIP.
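Kevin's binary rule can be sketched as a tiny classifier. This is purely a hypothetical illustration of the logic he describes – the function and field names are mine, not anything defined in the CIP standards:

```python
# Sketch of the all-or-nothing classification Kevin describes under current
# CIP: a system either meets the BES Cyber System (BCS) definition or it is
# entirely out of scope. The 15-minute impact criterion comes from the BES
# Cyber Asset definition; everything else here is invented for illustration.

from dataclasses import dataclass

@dataclass
class System:
    name: str
    # True if loss or misuse of the system impacts the BES within 15 minutes
    impacts_bes_within_15_min: bool

def is_bcs(system: System) -> bool:
    """Under current CIP there is no middle ground: BCS or out of scope."""
    return system.impacts_bes_within_15_min

# Kevin's meter examples:
revenue_meter = System("revenue-only meter data collection",
                       impacts_bes_within_15_min=False)
realtime_meter = System("meter that also reports real-time flows/voltage",
                        impacts_bes_within_15_min=True)

print(is_bcs(revenue_meter))   # out of scope for CIP
print(is_bcs(realtime_meter))  # full BCS obligations apply
```

The point of the sketch is what it leaves out: there is no branch for "important to operations, but not a BCS" – which is exactly the gap discussed below.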

But there are certainly cases where a lack of good security on the IT network can result in an outage of the OT network. I described a dramatic example of that in this post, where a ransomware attack shut down the IT network but didn’t touch the OT network (as in the case of Colonial), yet in the end resulted in two large Control Centers being completely shut down for up to 24 hours, with the grid in a multistate area being run by cell phone.

It’s safe to say that none of the systems on the IT network of this utility met the definition of BCS, so there was no single system that led to the Control Centers being brought down – yet they were brought down anyway. This seems to me to point to the need for CIP to be extended in some way to cover IT assets – perhaps as some sort of “halfway house” asset. But there’s no way that the current CIP standards should be extended to cover anything else. They first need to be completely rewritten as risk-based. Then we can look at extending them to IT, based on the relative risk levels of OT vs. IT.

I’ll turn to Tim Roxey’s comments in my next post. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration's Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, July 21, 2021

How could a billing system attack shut down an OT network?


Yesterday, I attended an excellent webinar on a topic I’ve been waiting to have someone explain to me, “Consequence Driven Cyber Informed Engineering (CCE) – Resilience Strategies”. It was sponsored by Midwest Reliability Organization (MRO), and featured two longtime friends of mine: Jodi Jensen of WAPA and Sam Chanoski of INL. Since a recording will be available on MRO’s website soon, I won’t try to reiterate what was said in the webinar, other than saying it’s worth your while to listen to the recording.

What inspired me to write this post was Jodi’s statement, regarding the Colonial Pipeline ransomware attack, that Colonial had said that they shut down the actual pipeline (i.e. their OT network) because of the loss of their billing system (which was on the IT network). Of course, the IT network was compromised, so it had to be shut down and the machines rebuilt.

Colonial insisted that their OT network wasn’t affected by the ransomware, but they had to shut it down anyway due to the loss of their billing system. Jodi wondered why the billing system was essential to operations. In other words, couldn’t they have continued shipping petroleum products through the pipeline and worried about billing later?

I wrote three posts after the Colonial incident: Here, here, and here (in that order). In all three of them, I discussed possible reasons why the OT network (and pipeline) had to be shut down, even though the ransomware didn’t penetrate it. I also linked to a post I wrote last October, describing an incident in 2018 in which a major utility – a BA for a multi-state area – had to shut down their Control Centers (i.e. an important part of their OT network) for up to 24 hours and run the grid from cell phones, when their IT network was hit by a ransomware attack that required rebuilding 12,000 computers from scratch.

Just like in the case of Colonial, the utility swore the ransomware never penetrated their OT network (and I have no reason not to believe them), but they couldn’t take the chance that just one machine in the Control Center had been compromised. If that had happened, that one machine might have then compromised all of the IT network when it was restarted, requiring another huge shutdown and rebuild (and I’m told that this becomes much less fun the second time around, to say nothing of the third or fourth time). Which is why they shut down and rebuilt all the systems in the Control Centers as well.

I brought up that incident because this might have been another reason why Colonial shut down their pipeline. And after I wrote the second post, one of the most prolific commenters on my posts, a person named Unknown, wrote in once again to say:

Like you, I also believe that Colonial shut down because they could not accurately bill customers or track their customers' assets (i.e. refined petroleum products).

Pipelines are like banks and oil in the pipeline is like cash in the bank. If a bank loses its ability to track who gave them cash (or who they loaned it to), then there is no point opening the doors, even if they can safely store the money in the vault.

Unknown wrote this because I had pointed out in the post that the Washington Post had said in an editorial (which I paraphrase), “If they had kept their pipelines operating while the IT network was down, they wouldn’t have been able to invoice their customers.” I added, “And it’s safe to say that Colonial doesn’t feel that it should deliver gasoline through their pipeline solely as a charity.”

Unknown was pointing out that it was more than the wish to avoid operating as a charity that motivated Colonial to shut down. They don’t own the gasoline they ship in their pipeline, any more than a trucking company owns the furniture it ships or a bank owns the money in its vaults. If either loses track of what’s been entrusted to it, it has to repay the entire amount (and certainly consequential damages as well) to whoever shipped the furniture or deposited the money.

In other words, this isn’t like an electric distribution utility, which – at least for a brief period of time – owns the electric power they’re distributing to their customers (I’ll omit discussion of Retail Choice here). That utility has to keep the lights on, no matter what it costs them, and if they can’t bill during an emergency, they can usually bill later (the meters needed for billing are all on the OT network, so presumably an IT network shutdown wouldn’t affect them anyway). Colonial isn’t obligated to keep the cars in Georgia full of gas (nor are they paid to do that, of course). They obviously can’t keep shipping gasoline if it’s likely they’ll end up having to pay the full cost of the gas to the shippers.

I concluded my third post on Colonial by articulating the first law of nature that I’ve ever identified. Tom’s First Law of OT Networks says that “an ‘operations-focused’ company – as opposed to an information-focused company like an insurance company or a consulting firm – will be forced to bring their OT network down if their IT network falls victim to a ransomware attack.”

I’ve been told that this can’t be considered as a new law of nature because there are already enough of those. How about Newton’s Laws of Motion? They’ve been around since the 1600s, and Einstein showed they’re not applicable in extreme conditions. Why not drop one of them, and put my law in its place? Seems sensible to me…

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration's Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, July 19, 2021

How can we incentivize Transmission cybersecurity investment?


Next Thursday, July 29, I’ll be speaking on a panel with Ben Miller of Dragos, which will be asking (and at least trying to answer) the above question, at the “Transmission Infrastructure Investment, US” virtual conference. The panel will be led by Jim Cunningham of Protect Our Power.

Our panel is just one of ten live sessions in the conference, all of which look quite interesting. You can get the agenda and sign up here. Our session will run from 2:40 to 3:30 ET.

I’ll hope to see you there. If you run into me in the hallway or at lunch, please say hello.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration's Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Friday, July 16, 2021

Video of Josh Corman’s Great SBOM Proof of Concept talk

I anticipated that Josh Corman’s talk at this week’s Energy SBOM Proof of Concept meeting would be good, and I certainly wasn’t disappointed – in fact, it was great. I’m going to describe it a little here, but I’m pleased to announce that the video is available – so you don’t have to take my word on any of this. Josh’s talk starts a little after the 12-minute mark and goes on for 22 minutes (his connection went down at one point, but he came back very quickly).

The meeting was devoted to discussing use cases for SBOMs. It occurred to us in planning the meeting that one of the best ways to address this topic was to hear from the two people most responsible for the movement to make software bills of materials more than just a nice concept: a regular practice with well-understood (but not mandated) guidelines for production and use. These two people were Dr. Allan Friedman, leader of the National Telecommunications and Information Administration’s Software Component Transparency Initiative, and Josh, who coined the term SBOM, although he always points out that he wasn’t the first one who had the idea of inventorying software components. They both spoke at this week’s meeting on how they came to see SBOMs as an important need, and why.

Allan spoke first (and led the meeting, as he usually does). His talk was very good, and you should listen to it. However, Josh’s was exceptional. He covered two topics: The events that led him (and others) to believe that SBOMs were needed, and SBOM use cases. The latter was based on the NTIA document whose development he led in 2019, Roles and Benefits for SBOMs across the Supply Chain, which is one of the three or four fundamental documents produced by the Initiative.

Below are some very interesting statements he made in the “history” part of his talk. They’re certainly nowhere near everything he said (he managed to get in a lot of words in a short amount of time without rushing them; fortunately, in the video you’ll be able to hear everything he says, if you’re not afraid to back up at a few points during his discussion), nor can I swear that I didn’t get a few things wrong.

1.      He remembers July 13, 2013 as the day that he woke up to the problem of software component vulnerabilities. On that day, servers running Apache Struts 2 – an open source component of many applications – were attacked through previously-unknown vulnerabilities.

2.      Josh’s reaction then was “It’s open season on open source. Who’s going to attack just one bank anymore, when they can attack lots of targets through one component?”

3.      At the time of that attack, Josh was in a high-level position at Akamai. However, he soon moved to Sonatype, an early leader in open source dependency (component) management – and now one of the leading vendors of software composition analysis tools.

4.      Probably the event that woke most of the rest of us out of our blissful ignorance of the problem of component vulnerabilities was the 2014 disclosure of the Heartbleed vulnerability in the OpenSSL cryptography library, which was estimated to be found in about half a million “secure” servers.

5.      Heartbleed – as far as I know – didn’t lead to any major breaches, but it required a huge effort by a huge number of organizations, just to find whether they had any vulnerable web servers - and if so, where. Why was that? OpenSSL is a component of other software, and often a component of other components, etc. Many organizations never even found all the instances of OpenSSL that they were running. For example, Josh says it took DHS six weeks to even answer the question of which federal agencies were affected by Heartbleed.

6.      Meanwhile, some financial companies knew in literally minutes or hours both whether and where they were affected. Why was this the case? Because they had proto-SBOMs of a sort. Josh said the financial sector had woken up to this problem when he did – with the Apache Struts 2 attacks.

7.      After this, Josh decided to really dig into the idea of SBOMs and started reading Deming, who had stressed the importance of bills of materials for manufactured products. Having BOMs gave manufacturers the following advantages.

a.      They could have fewer, but better parts.

b.      They could compare quality of different suppliers and buy more from the high-quality ones.

c.      They could track which parts went in which products, so that if there were a problem with a part, it could be tracked down and replaced in any product in which it had been used.

8.      All of these benefits have direct analogues in SBOMs. 

9.      Another seminal event, both for Josh and for awareness of component vulnerabilities, was the 2015 SamSam ransomware attack on Hollywood Presbyterian Hospital. This attack exploited a vulnerability in the JBoss Java development platform (now called WildFly). The hospital had to shut down patient care for about one week (I'm told that isn't a good thing for a hospital to have to do).

10.   The hospital knew about SamSam, but didn’t have any idea whether it was affected by the JBoss vulnerability and if so, where. Of course, this was because they had no SBOMs to provide them that information.

11.   It was this and the WannaCry attacks that caused the Food and Drug Administration, which regulates medical devices like pacemakers and infusion pumps, to put out a “Pre-market guidance” for those devices. While it didn’t require SBOMs immediately, it said they would be required in the future. This galvanized the medical community to start working on the problem of SBOMs and led to the creation of the NTIA Initiative.
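The kind of lookup those financial firms could do in minutes – and DHS took six weeks to do – amounts to a simple query over component inventories, once the inventories exist. Here's a minimal sketch; the inventory format and names are invented for illustration (real SBOMs use formats like SPDX or CycloneDX):

```python
# Minimal sketch of the "are we affected, and where?" query that an SBOM
# makes trivial. Each application maps to its flattened component list;
# the data and function names are hypothetical.

sbom_inventory = {
    "customer-portal": ["openssl 1.0.1e", "apache-struts 2.3.14"],
    "internal-wiki":   ["openssl 0.9.8y", "jquery 1.9"],
    "trading-engine":  ["openssl 1.0.1f", "boost 1.53"],
}

def affected_by(component, vulnerable_versions):
    """Return {application: matching components} for a vulnerable component."""
    hits = {}
    for app, components in sbom_inventory.items():
        matches = [c for c in components
                   if c.startswith(component)
                   and c.split()[1] in vulnerable_versions]
        if matches:
            hits[app] = matches
    return hits

# Heartbleed affected OpenSSL 1.0.1 through 1.0.1f:
heartbleed = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c", "1.0.1d", "1.0.1e", "1.0.1f"}
print(affected_by("openssl", heartbleed))
# {'customer-portal': ['openssl 1.0.1e'], 'trading-engine': ['openssl 1.0.1f']}
```

Without the inventory, answering the same question means hunting through every server and application by hand – which is exactly why it took some organizations weeks, or forever.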

But there’s a lot more. Watch the video!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration's Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, July 12, 2021

If you please, Sir, would you be kind enough to patch this serious vulnerability in that software you charged me a lot of money for?


Last week, the Wall Street Journal reported that Kaseya was warned in early April of the previously-unknown vulnerability used in the recent devastating ransomware attack on hundreds of organizations worldwide (including MSP customers of Kaseya and customers of those MSPs).

It had been previously reported that a Dutch security research organization had informed Kaseya of the vulnerability (along with others linked to it) some time before the attack. Now we know that time was three months. Kaseya patched some of the vulnerabilities in April and May, but unfortunately, they didn’t get around to this vulnerability (actually one in a chain of vulnerabilities) before the successful attack. Darn the luck! Moreover, Kaseya still hasn’t fully patched the vulnerability, because of some sort of technical issue.

At the same time, we’ve learned about the potentially devastating PrintNightmare vulnerability in the Windows print spooler. It’s a long story, but the gist is that in late June, some researchers mistakenly released a proof-of-concept exploit for the vulnerability. When the mistake became clear, they pulled the code back, but not before it had been copied and improved upon. Now the ambitious hacker has at least three sets of exploit code to choose from. So there is some good news in this story…for the hackers.

Of course, all this vulnerability does is allow attackers to take control of the Windows domain controller…nothing serious or anything like that. We have to assume they (and probably our Russian government friends, as usual busy as beavers in their never-ending quest to make life hard for Western countries. All without having to resort to nuclear weapons, since using those is messy and is regarded as a real faux pas in polite company) have already penetrated as many targets as they possibly can, since they assume that Microsoft will finally fix this vulnerability.

Indeed, Microsoft did issue a patch for the vulnerability last Tuesday. However, on Wednesday a researcher demonstrated online how exploits could bypass the patch. So it seems we’re not out of the woods yet.

Clearly, leaving important software companies – critical infrastructure, if the term has any meaning at all – to make all the decisions about when, or even if, they’ll patch important vulnerabilities isn’t working. This isn’t like your dry cleaners messing up one of your shirts. Both of the above failures have potentially huge consequences, just like SolarWinds did.

Maybe there should be fines that kick in X number of days after the company learns of a serious vulnerability, and increase every day that the vulnerability isn’t patched (and if there’s no way to patch the vulnerability for some reason, then the software company should order their vulnerable product to be taken down, and be required to compensate their customers for whatever damage this causes them).
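To make the idea concrete, here's one shape such an escalating fine could take. Every number here (the grace period, the base amount, the daily growth rate) is invented purely to illustrate the incentive – nothing like this is currently law:

```python
# Illustrative sketch of an escalating unpatched-vulnerability fine.
# All parameters (30-day grace period, $10,000 base, 10% daily growth)
# are made-up numbers for illustration only.

def fine_owed(days_since_disclosure, grace_days=30, base=10_000.0,
              daily_growth=0.10):
    """Fine owed if the vulnerability is still unpatched on the given day."""
    overdue = days_since_disclosure - grace_days
    if overdue <= 0:
        return 0.0
    # Compounds daily, so every day of stalling costs more than the last.
    return base * (1 + daily_growth) ** (overdue - 1)

for day in (30, 31, 45, 90):
    print(day, round(fine_owed(day), 2))
```

The design point is the compounding: a flat fine becomes a cost of doing business, while one that grows daily eventually makes patching the cheapest option no matter how awkward the fix is.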

With great power comes great responsibility. The companies are quite happy with the former, but they’re not so keen on the latter.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration's Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, July 11, 2021

The next Energy SBOM PoC meeting will be a big one

The next bi-weekly meeting of the SBOM Energy Proof of Concept will take place this Wednesday (the 14th) at noon ET. We have a stellar meeting set up, and I hope you can make it. If you haven’t joined our mailing list and would like to, send an email to SBOMenergypoc@inl.gov. But if you’d prefer your accustomed anonymity, you’re welcome to join us anyway.

We have two great guest speakers: Josh Corman of CISA, who – in his words – is “an early and ardent advocate for transparency, SBOM, and bringing proven supply chain principles into the modern software.” He will discuss the early days of the SBOM “movement” (cult?), as well as the use cases for SBOMs. His text (scripture?) will be this foundational document on use cases for SBOMs, which he took the lead in producing back in the mists of time forgotten. The document isn’t required reading for the meeting, but it’s an excellent paper, so I suggest you read it at some point if you have a serious interest (obsession?) with SBOMs.

Josh will also discuss the use cases for the Healthcare PoC, which started in 2018. Of course, Josh was very involved in getting that off the ground. The other speaker will be Allan Friedman of NTIA, who will discuss the history of SBOMs at the NTIA. We’ll have a little time for Q&A at the end (although, as usual, I’m sure there will be a lively set of questions and answers in the chat).

Here’s the connection information. See you then!

Teams link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_MDU1NGVlMGUtZmIwYi00OWUxLWIxZjItNjc5ZDY4ODJlMzI4%40thread.v2/0?context=%7b%22Tid%22%3a%22d6cff1bd-67dd-4ce8-945d-d07dc775672f%22%2c%22Oid%22%3a%22a62b8f72-7ed2-4d55-9358-cfe7b3e4f3ed%22%7d 

Dial-in: +1 202-886-0111,,114057520#  

Other Numbers: https://dialin.teams.microsoft.com/2e8e819f-8605-44d3-a7b9-d176414fe81a?id=114057520

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, July 8, 2021

Was Kaseya a supply chain attack? Definitely!

I started my previous post with this sentence: “With the Kaseya attacks, we have another blockbuster supply chain attack like SolarWinds.” However, I pointed out that I would discuss the attack in a subsequent post. Here it is.

After I wrote the previous post, I began to question whether this really was a supply chain attack. It certainly wasn’t, if you take the view (which I took until a day or two ago) that a supply chain attack on software had to be the result of a deliberate insertion of malware or a backdoor into a software product, which is of course exactly what happened with SolarWinds.

The fact that the Russian attackers (this time part of the Russian state, not the fast-growing Russian hacking industry, although in fact it’s very hard to tell the difference between the two) were able to plant the malware in the SolarWinds Orion builds means there was some deficiency on SolarWinds’ part that let them do it. And if the supplier could have prevented the attack through their own actions (even though doing so might have been hard), that’s a supply chain attack.

By this view, if an attacker simply takes advantage of a vulnerability in a software product after it is installed, that isn’t a supply chain attack – it’s simply a garden-variety attack on software. Those attacks happen all the time. Even a supplier with good vulnerability management and patching policies can’t prevent new vulnerabilities from emerging; they can only patch them quickly when they emerge, or take other mitigation measures when a quick patch isn’t possible. And they can make sure their developers understand secure software development principles, so that new vulnerabilities don’t appear more often than necessary (there’s no way to write software that’s guaranteed never to develop vulnerabilities, since researchers are continually discovering new ways in which seemingly innocuous lines of code actually constitute a vulnerability).

Then why do I say the Kaseya attack was a supply chain attack? It’s because the vulnerability was a zero-day, and the attackers may have learned of it by eavesdropping on Kaseya’s communications with the Dutch firm that discovered the vulnerability and notified them of it. But if their communications weren’t in fact breached (the eavesdropping is just speculation in something I read), how could Kaseya possibly be responsible for the fact that they were subject to a zero-day vulnerability?

And here’s where it gets subtle: There are ways that a software supplier can learn of zero-day vulnerabilities, including maintaining good relationships with the security researcher (i.e. white hat) community and offering bug-bounty programs. Moreover, they can move very quickly to patch any zero-day that they learn about, vs. following the natural inclination to think “This isn’t publicly known yet, so we have at least a little time to deal with this.”

Did Kaseya have any of these policies in place? I don’t know about the relationships with security researchers or bug bounty programs, but I do know that they hadn’t been able to produce a patch for the vulnerability (and still may not have, according to the report I read in the Wall Street Journal today), despite being told about the vulnerability at least a few days before the successful attack. That’s why I say the Kaseya attack was a supply chain attack.

However, there’s another “level” to this attack. The reason that so many organizations (1,500, by the last estimate I read) were compromised by ransomware was that at least some of Kaseya’s own customers were MSPs. The attackers were able to compromise an MSP’s customers because they had compromised the MSP itself. So this was a true two-level supply chain attack, the first I’ve heard of. What’s next?

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, July 5, 2021

Russia has become a pirate state. Let’s treat it like one.

With the Kaseya attacks, we have another blockbuster supply chain attack like SolarWinds (the two best articles I’ve read about it so far are here and here). However, there’s one big “improvement” in this attack. It wasn’t conducted primarily for espionage purposes, like SolarWinds, but rather for good old-fashioned financial gain. In fact, the Kaseya attack combined the two biggest cybersecurity threats today: supply chain attacks and ransomware.

I will have a lot to say about the attack itself in a coming post, but now I want to describe what went through my mind when I first read about the Kaseya attack on Saturday:

1.      Great, now we have supply chain ransomware attacks! That means we have to beef up our defenses for both supply chain and ransomware attacks even more than we’re already beefing them up – after SolarWinds and Colonial Pipeline. Essentially, the Kaseya attack is a SolarWinds-style proliferation of Colonial Pipeline attacks.

2.      Kaseya said that “only” 50-60 of their customers had been affected, but some of them were MSPs – and it seems a lot of the MSPs’ customers were affected as well. So this attack was even more efficient than SolarWinds, which wasn’t a “two-tier” attack like this one. Of course, this is a great force multiplier for supply chain attacks. Each tier of attacks you add can result in an exponential increase in the number of victims. And when you’re talking about ransomware, you’re probably talking about some pretty big money, even with “just” two tiers, as in Kaseya. Who says Russia isn’t making technological progress? We’ll probably have 3- or 4-tier attacks in a year or so.

3.      Of course, we all know that supply chain and ransomware attacks aren’t a problem that can be “solved” – only made somewhat less bad than they are. So am I expecting there will be a lot of improvement, now that we know how serious the threat is? This may shock you, but…No.

4.      However, there’s one common trait running through the worst of the recent attacks, including Kaseya, Colonial Pipeline (which wasn’t technically a supply chain attack), JBS (also not technically a supply chain attack), and SolarWinds: They all originated in Russia. SolarWinds was a government job, but the other three seem to be attributable to the Russian ransomware-as-a-service gang REvil.
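The force-multiplier arithmetic in point 2 can be sketched in a few lines of Python. To be clear, the fan-out number here is purely hypothetical; the only figures taken from the posts are Kaseya’s roughly 60 direct victims and the ~1,500 total estimate.

```python
# Illustrative sketch of multi-tier supply chain attack fan-out.
# The fan-out of 25 is hypothetical; only the ~60 direct victims and
# ~1,500 total figures come from the Kaseya estimates cited above.

def total_victims(direct: int, fanout: int, tiers: int) -> int:
    """If each compromised organization at every additional tier
    compromises `fanout` of its own customers, the victim count
    grows geometrically with the number of tiers."""
    return direct * fanout ** (tiers - 1)

print(total_victims(60, 25, 1))  # one tier (direct customers only): 60
print(total_victims(60, 25, 2))  # two tiers (Kaseya-style, via MSPs): 1500
print(total_victims(60, 25, 3))  # a hypothetical third tier: 37500
```

Of course, in the real attack only some of the direct victims were MSPs, so treating every victim as passing the compromise downstream is a simplification – but it shows why each added tier multiplies, rather than adds to, the damage.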

My conclusion on Saturday: The problem of Russian cyberattacks is mushrooming. I thought the fact that – according to the FBI and CIA – Russia has planted malware in the US power grid and can cause outages whenever it wants was bad and would prompt some strong response (or at least an investigation, for God’s sake). Then I thought SolarWinds would prompt a strong response. There was a response many months later, but it obviously wasn’t strong enough.

Recently, Biden warned Putin in Geneva that he had to root out REvil. I’m sure Putin nodded and agreed with Biden that he’ll do everything he can to discover and punish such evildoers. But it’s well known that it’s almost impossible to tell where the Russian cybercriminals end and the Russian security services begin, and vice versa. Plus the criminal gangs have provided Putin immense personal help in amassing and protecting the maybe $50 billion he’s managed to scrape together from his modest government salary (I hear he clips newspaper coupons all the time). Expecting Putin to crack down on REvil is about like expecting Donald Trump to give up golf – it just ain’t gonna happen.

Of course, Putin disclaims any responsibility for what private citizens may do; after all, he’s just president, not king. If he can’t find the REvil people, that’s unfortunate. However, Putin seems to do a great job of rooting out evildoers when the “evil” they’re doing is speaking the truth about what’s going on in Russia today. Just ask Alexei Navalny, if you can talk to him when they’re not torturing him.

There’s a good historical precedent for taking strong action against a pirate nation. In the early 1800’s, the US was subject to “ransomware” attacks from the Barbary states of North Africa, whose pirates were attacking US ships and holding their crews for large sums (President Jefferson refused to pay, perhaps because he didn’t have easy access to bitcoin). We fought two wars with them and beat them. The attacks ended.

Am I suggesting that we go to war with Russia over this? No. How about a devastating cyberattack on them, say bringing down their power grid? Again, no. Any attack like that could lead to war, and in any case, we’re not going to conduct an attack that could kill civilians (which shows how ridiculous the idea is that we’re somehow protected against Putin taking down our grid by the fact that we could take down Russia’s grid. We’ll never retaliate in kind for a kinetic cyberattack).

There are lots of things we could do to punish Russia for these attacks. One would be to finally take a step that was talked about before the SolarWinds sanctions in April: Prohibit US citizens and financial institutions from holding any Russian debt, not just from buying newly-issued debt, as was required in April. The April prohibition is ridiculously easy to circumvent. We now need to do something that’s really going to get Putin’s attention.

There’s a lot more that could be done. Perhaps it’s time to freeze all Russian assets in the US or prohibit any financial transactions with Russian citizens or businesses? Or take some sort of action to limit Russia’s internet connections with the rest of the world (although I’m having a hard time thinking of something that couldn’t be easily bypassed)? Of course, these are drastic measures, and will hurt both American and especially Russian citizens. Regarding the former, I agree it’s unfair to them, but it’s also unfair that American companies are paying big money to the Russian ransomware gangs. Once Uncle Vlad takes serious action against those gangs (and agrees to end his own security services’ cyberattacks), we can think about lifting the sanctions.

What’s certain is that these actions will hurt ordinary Russians a lot. That’s unfortunate, but believe it or not, Putin is only in power because he keeps winning elections. Sure, they’re rigged by the fact that he makes certain to keep anyone who might be a serious threat – like Navalny – from running against him. But he does – or at least did before Covid-19 – enjoy a lot of support from the nationalists who like to see him push around the US and Europe (to say nothing of Ukraine and Georgia).

These people need to be made to understand that inflicting suffering on another nation can go both ways. So maybe they’ll think twice before they go into the voting booth next time. Even better, they’ll make it clear that they’re only going to suffer so much in order to see Putin stay in power. It’s time for him to make plans for his exit. And if he doesn’t want to leave, he’ll need to take the steps that are required for Russia to be treated like something other than the pirate state that it is.

And while I’m on the subject of drastic actions, what about the actions Russia took that resulted in a civilian airliner being shot down – by a Russian proxy army – over Ukraine in 2014? Russia has never been held accountable for that, nor – as far as I know – paid a dime to any of the victims’ families, even though the Dutch government (the flight was from Amsterdam to Kuala Lumpur, Malaysia) found in 2018 that the Russians were responsible and is now supposedly pursuing “legal actions”. Those are obviously going nowhere fast.

I said after the plane was shot down – when there was lots of photographic and voice recording evidence that this was Russia’s fault, and a Russian MP had already confirmed that it was – that Russian planes should be barred from all airspace worldwide until the Russian government has paid a fair amount to every victim’s family, and until all costs to Malaysia Airlines, the Dutch and Ukrainian governments, and other parties have been paid in full. Let’s do that now, too. My guess is this might speed the “legal process” up a bit.

Let’s stop pretending that pirates are entitled to some sort of due process, or “fair trial”. If they were interested in fairness, they wouldn’t be treating the rest of the world like they are.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Saturday, July 3, 2021

The looming sofa hack danger – now revealed!

Since last year, I’ve been regularly re-posting almost all of my blog posts on Energy Central. This has been beneficial not just because it brought a lot of new regular readers, but also because I’ve received a lot of very interesting comments, which have led to some great discussions.

This is exactly what happened with this post when I posted it on Energy Central earlier this week. The post contained the line “A transformer or substation could no more be subject to an “Aurora-type” attack than my living room sofa could.” Of course, I chose my living room sofa for that sentence because I was trying to think of something that would never have a microprocessor, and therefore would never be subject to cyberattack.

Sure as shootin’, Bob Meinetz commented “I recently purchased a $2,300 Bluetooth-compatible sofa with adjustable reclining features. Please advise if there's a possibility Russian operatives could cause it to snap shut while I'm kicking back, watching an episode of "Love Island" - leaving loved ones to find naught but a hand sticking out from the cushions and a half-eaten hot wing on the carpet. That is NOT the way I want to go!”

My reply to Bob was “I think you should consider returning that sofa. It sounds like too much of a risk to me.”

But Bob wasn’t just being light-hearted. He went on to comment “Digital security is absolute where there aren't digital components. I think many underestimate the value of "dumb" safety, of avoiding digital controls completely where possible, of simplifying control systems rather than making them more complex. Complexity is often justified by convenience - and convenience always, always, always comes at a cost in security.”

My dead-serious answer to him was “..there's no question that de-digitalization would increase security. And going back to horses would greatly reduce the number of car accidents. Is it likely that either one will happen soon?”

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.