
Monday, September 25, 2023

NERC CIP: Will FedRAMP save us?

One of the most important current questions in the NERC CIP community is how and when it will be “legal” to deploy medium and high impact BES Cyber Systems (BCS) in the cloud. It’s important to note that there is no CIP requirement that explicitly forbids a NERC entity from deploying high or medium impact BCS in the cloud. Rather, the problem is that a cloud service provider (CSP) would never be able to provide the evidence required for the entity to prove their compliance with some of the most important CIP requirements in an audit.

There have been many suggestions on how a CSP could provide such evidence. Perhaps the suggestion that comes up most often is something to the effect of, “Why don’t the auditors just allow the cloud service provider’s FedRAMP (or SOC 2) certification to constitute evidence that they maintain an equivalent or better level of security practices to what is required by the CIP requirements that apply to medium and/or high BCS?”

Of course, there isn’t much dispute about whether the overall level of security maintained by the large CSPs, especially in the portion of their cloud infrastructure that is FedRAMP compliant, is better than the level of security required by the NERC CIP requirements that apply to medium and/or high impact BCS: without much doubt, it is. But, as everyone involved with NERC CIP compliance knows all too well, what matters is whether the entity – or in this case, the CSP acting on their behalf – has complied with the exact wording of each requirement. For prescriptive requirements like CIP-007 R2 (patch management) and CIP-010 R1 (configuration management), it would simply be impossible for a CSP to do that.

However, there are some requirements for which it might be possible for a CSP to provide compliance evidence. These are what I call risk-based requirements, although they don’t all use the word “risk”. Examples include CIP-007 R3 (malware prevention), CIP-010 R4 (Transient Cyber Assets), and CIP-011 R1 (information protection program). If the evidence were designed to show that the NERC entity has developed a plan with the CSP for addressing the risks applicable to the requirement in question, and that the CSP has carried out the plan, then my guess is it might be accepted, without having to change the CIP requirements at all.

But the fact that there are some requirements for which it might be possible to provide appropriate compliance evidence is irrelevant in the big picture. There is simply no way the NERC entity would be found compliant with the prescriptive requirements if they deployed medium and/or high impact BCS in the cloud. And no NERC entity is going to agree to do something that is guaranteed to make them non-compliant with even one requirement.

So is the solution to rewrite all the CIP requirements so they are all risk-based and can easily be “made to fit” with the cloud? This is essentially what the CIP Modifications Standards Drafting Team started to do in 2018 (although they were targeting virtualization at the time, not the cloud) – that is, until some large NERC entities made it clear they didn’t want to have to toss their entire CIP compliance programs – with all of the software, training, etc. they had invested in for CIP compliance – and start over. I described this sad story in this post in 2020. Thus, it’s safe to say that requiring all NERC entities to rewrite their CIP compliance programs is a non-starter.

However, a recent SAR (standards authorization request) that was submitted to the NERC Standards Committee proposed a way around this problem: What if there were two CIP “tracks”? Track 1 would consist of exactly the same requirements that are in place today, but it would be made clear that they only apply to on-premises systems, not to systems deployed in the cloud. NERC entities that have any systems deployed on premises would follow that track for those systems.

Any entity that doesn’t want to place any BCS in the cloud would just follow the first track – so literally nothing would have to change in their CIP compliance program.  However, if an entity deploys any BCS in the cloud, they would follow a second compliance track for those systems (and also follow the first track for their on-premises systems). That track might start with a requirement that the CSP has an appropriate certification. There would then be requirements that aren’t found in the first CIP track because they only apply to CSPs, for example:

1.      The entity needs to demonstrate that the CSP has developed a plan to address the “Paige Thompson problem” – namely, the fact that a technical person who had recently been fired by a CSP was able to use her knowledge of a common customer misconfiguration to penetrate the cloud accounts of at least 30 of that CSP’s customers (by her reckoning), one of which was Capital One.

2.      The entity needs to demonstrate that the CSP adequately verifies the level of cybersecurity of third parties that sell services that run in the CSP’s cloud (the fact that one such access broker didn’t have good security may have led to the Russian attackers being able to penetrate the SolarWinds infrastructure and plant their malware in seven updates of the Orion platform in 2019 and 2020).

There are probably a lot more “uniquely cloud” risks that should be addressed in CIP requirements that apply only to cloud-based BCS. It’s not likely that FedRAMP has requirements that deal with these risks, but if it does, then the FedRAMP audit report might possibly be used as evidence of the entity’s compliance with these requirements. All of these questions would need to be addressed by a standards drafting team tasked with examining the risks of deploying medium and high impact BCS in the cloud and designing appropriate CIP requirements to mitigate those risks. This would be the new second track for NERC CIP.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Tuesday, November 8, 2022

A star is born


For the last six months or so (maybe more), Brandon Lum of Google has sometimes been participating in two or three of the CISA SBOM workgroups, especially the VEX workgroup. His title has something to do with open source software, and since a lot of people in the workgroups are involved with OSS, I wasn’t surprised that he would be attending these meetings.

A little more than a month ago, he started making announcements in some of the meetings about a new Google project called GUAC and asking for people to participate in it. I didn’t pay a lot of attention to the details, but I knew it had something to do with software supply chain security, and with SBOMs in particular. Since open source software depends on volunteers to develop and maintain the projects, this wasn’t unusual, either.

Moreover, there was one reason why I deliberately didn’t pay a lot of attention to what Brandon was saying about GUAC: Last year, Google announced another project aimed at software supply chain security called SLSA, which has been very well received by the developer community. It’s essentially a framework that will allow developers to identify and take steps to prevent attacks on the software build process, which nobody (that I know of, anyway) even thought of before SolarWinds.

(SolarWinds fell victim to an extremely sophisticated 15-month attack conducted by – according to Microsoft’s estimate – about 1,000 people working out of Russia. There’s a really fascinating article on CrowdStrike’s website about SUNSPOT, the malware that the Russians purpose-built for this attack. In fact, they tested it during a three-month proof of concept conducted inside the SolarWinds software build environment, then deployed the malware for five or six months without ever being detected. Along with Stuxnet, this was easily the most sophisticated malware ever developed. If only the Russians would start putting all that great expertise toward a good use, for a change! BTW, don’t try to understand everything in the article. It’s just amazing to see what SUNSPOT was able to do, all without any direct Russian control.)

But I digress. When I heard Google was following a project called SLSA with one called GUAC, I found this a little too cute for my taste (can Google CHIPS be far behind?). So, frankly, I tuned Brandon out when he brought this up.

However, two weeks ago I saw a good article in Dark Reading – which linked to a great Google blog post – about GUAC. I also found out that my good friend Jonathan Meadows of Citi in London – a real software supply chain guru, although very focused on how ordinary schlumps like you and me (OK, maybe not you) can secure our software supply chains without having all of his knowledge and experience – was involved with GUAC from the get-go. These two data points convinced me that I should be paying a lot more attention to GUAC.

So I did. And this is what I found:

  1. The project intends to present to users a “graph database”, which in principle links every software product or intelligent device with all of its components, both hardware and software, at all “levels”. You might think of the database as being based on a gigantic SBOM dependency tree that goes in all directions – i.e., each product is linked with all its upstream dependencies, as well as the downstream products in which it is a component (or a component of a component).
  2. One of the important functions of this database is to provide a fixed location in “GUAC space” (my term) for software products and their components. Artifacts necessary for supply chain analysis, like SBOMs and VEX documents, can be attached to these locations, making it easy for the user of a software product to learn what new artifacts are available for the product (actually, the nodes of the database are versions of products, not the products themselves).
  3. While the database will incorporate any artifacts created in the software supply chain, the three types of artifacts incorporated initially will be SBOMs, Google SLSA attestations, and OSSF Scorecards. The idea is that, ultimately, all the documents required for an organization (either a developer or an end user organization) to assess their software supply chain security will be available at a single internet location (and I don’t think the location would change – just its attributes).
  4. The artifacts can be retrieved by GUAC itself – the supplier of the artifact won’t have to put it in place “manually”. Google says, “From its upstream data sources, GUAC imports data on artifacts, projects, resources, vulnerabilities, repositories, and even developers.”
  5. Artifacts like SBOMs can be contributed and made available for free, but they don’t have to be. The Google blog post says, “Some sources may be open and public (e.g., OSV); some may be first-party (e.g., an organization’s internal repositories); some may be proprietary third-party (e.g., from data vendors).” In other words, a vendor that has prepared documents or artifacts related to security of a product can attach a link to the product in GUAC space. Someone interested in one of those artifacts can follow the link and, if they agree to the price, purchase it from the vendor.
  6. Thus, one function of GUAC can be enabling a huge online marketplace. However, unlike most markets related to software, the user won’t have to search on the product name to find what’s available for it. Instead, the user will just “visit” the fixed location for the product and look through what’s available there.
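As a thought experiment, here is a minimal Python sketch of what such a graph might look like. To be clear, the product names, versions, and artifact filenames below are all made up, and GUAC’s real data model and API are different – this is just to make the structure concrete:

```python
# A toy GUAC-style supply chain graph. Nodes are (name, version) pairs --
# versions, not products, are the nodes -- and artifacts (SBOMs, SLSA
# attestations, etc.) attach to a node's fixed location in the graph.
# All identifiers here are hypothetical examples, not GUAC's actual schema.

from collections import defaultdict

class SupplyChainGraph:
    def __init__(self):
        self.deps = defaultdict(set)        # node -> its upstream components
        self.rdeps = defaultdict(set)       # node -> downstream products using it
        self.artifacts = defaultdict(list)  # node -> attached artifacts

    def add_dependency(self, product, component):
        self.deps[product].add(component)
        self.rdeps[component].add(product)

    def attach(self, node, artifact):
        self.artifacts[node].append(artifact)

    def transitive_deps(self, node):
        # Walk downward through components, components of components, etc.
        seen, stack = set(), [node]
        while stack:
            for dep in self.deps[stack.pop()]:
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

g = SupplyChainGraph()
g.add_dependency(("acme-app", "2.0"), ("log-lib", "1.4"))
g.add_dependency(("log-lib", "1.4"), ("zlib", "1.2.11"))
g.attach(("acme-app", "2.0"), "sbom-2024-01.spdx.json")
g.attach(("acme-app", "2.0"), "slsa-provenance.json")

# Both the direct component and its own component are reachable from one node:
print(g.transitive_deps(("acme-app", "2.0")))
```

The point of the sketch is item 2 above: because each product version has one fixed node, every artifact anyone publishes for it accumulates in one place, rather than being scattered across the internet.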

I can imagine the inspiration for GUAC might have occurred when some Google employee involved in software supply chain security grew frustrated at the number of different internet locations they had to visit to get the artifacts they needed to analyze just a single product. Instead of a person running ragged while searching for the most up-to-date artifacts, how about having the computer do the legwork in advance? The user would just have to go to the right location, to find everything they need in one place and with one search.

In fact, Google has done this before. Those of you who were involved with computers in the late 1990s (once the internet had supposedly arrived, but was often proving to be slower than just doing some things by hand) may remember that the search engines – Yahoo, DEC’s AltaVista, etc. – just searched for character strings. If you were good (plus lucky) and entered a string that would return just the items you were interested in but no others, searching was a pleasant experience.

However, if you weren’t good and lucky, and just searched on a topic like “mountain vacation”, you would get hundreds of pages of results, with no assurance that what you were really looking for wasn’t on the very last of those pages. Google came out with a very intelligent search engine that used all sorts of tricks – like ranking results by their popularity with others – to make it more likely that what you were looking for would be on the first page or two. The rest, as they say, is history.
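For the curious, the “ranking results by their popularity” trick can be sketched as a stripped-down PageRank-style iteration. This is a toy illustration of the idea, not Google’s actual algorithm, and the link graph is invented:

```python
# Toy popularity ranking: a page's score is fed by the pages that link to it,
# iterated until scores settle. A simplified take on the PageRank idea.

def pagerank(links, damping=0.85, iters=50):
    """links: page -> list of pages it links to."""
    pages = set(links) | {p for out in links.values() for p in out}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:
                # A dangling page spreads its score evenly everywhere.
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
        rank = new
    return rank

links = {"a": ["b"], "b": ["c"], "c": ["b"], "d": ["b"]}
r = pagerank(links)
# "b" ends up with the highest score: three other pages link to it,
# so a search matching both "b" and "c" would show "b" first.
```

The effect is exactly the one described above: for a query matching many pages, the ones that others “vote for” with links float to the first page of results.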

I certainly don’t think GUAC will have anywhere near the success that the search engine had. After all, just about everyone in the world can use a general search engine, but only a small fraction of the world’s inhabitants are involved with software supply chain security – although given how I spend my time nowadays and the people I meet online, I’m sometimes tempted to think it’s actually a very large percentage.

My guess is that in five years, anyone involved in software supply chain security will be spending a lot of their time navigating the highways and byways of the GUAC world. For some, the need to do this will become apparent sooner rather than later. For example, if you’re involved with one of the companies for which distribution of SBOMs is an important part of the business model, you should be figuring out how you can incorporate GUAC into that model – although I’m certainly not saying you should abandon whatever you’re doing, or planning to do, now.

Another idea: While I haven’t tried to look at tech specs on the project yet, I’m sure there must be some sort of fixed address for a software product (or a component, which of course is just a product that’s been incorporated into another product) within GUAC world. I can see that address becoming a kind of universal name repository for a software product, which of course can have multiple names over its lifecycle. Currently, if you have an old version of a product whose name has subsequently changed, and you want to learn about vulnerabilities that have recently been reported for the product, you’re out of luck, unless you happen to know the current name of the product. That (and a lot of other things) may change with GUAC.
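Here is a tiny sketch of how such a name repository might work. Every product name, node ID, and CVE below is hypothetical, and GUAC’s actual addressing scheme may be quite different:

```python
# Hypothetical sketch: a fixed node ID acts as a universal name registry,
# so every historical name of a product resolves to the same node.

ALIASES = {
    "widgetsoft-logger": "node:1234",  # the product's original name
    "megacorp-logger": "node:1234",    # its name after an acquisition
}

# Vulnerabilities attach to the fixed node, not to any one name.
VULNS = {"node:1234": ["CVE-2021-0001"]}

def vulns_for(product_name):
    node = ALIASES.get(product_name)
    return VULNS.get(node, [])

# Either name -- old or new -- finds the same vulnerability record:
print(vulns_for("widgetsoft-logger"))  # ['CVE-2021-0001']
print(vulns_for("megacorp-logger"))    # ['CVE-2021-0001']
```

That is the whole idea: the user holding an old version under the old name no longer needs to know the current name to find newly reported vulnerabilities.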

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Wednesday, March 23, 2022

Here’s an idea: Let’s investigate the threats we know are real. We can leave the highly unlikely ones for another day.

On Wednesday, I was quoted in a good article by Robert Walton in Utility Dive, about an FBI bulletin announcing “abnormal scanning” of five electric utilities and 18 other critical infrastructure organizations from Russian IP addresses. Cue the scary music.

My primary reaction to this was, does the FBI think this is news? Since just about every big utility in the country is scanned probably thousands of times an hour (and I’m sure a lot of those scans come from Russia), the fact that five of them are now getting a few extra scans from Russia doesn’t make me want to check my flashlight batteries and lay in a store of MREs for the coming dark days. And given that it would be nearly impossible for the Russians (or anybody else) to cause an outage through a direct attack from the internet, I’m not worried about what would happen, even in the unlikely event that the Russians did break through the firewalls.

But if FBI Director Christopher Wray is worried about the Russians attacking the grid, he might want to go back to something that he and Gina Haspel, then Director of the CIA, said in the Worldwide Threat Assessment briefing to Congress in January 2019: "Russia has the ability to execute cyberattacks in the United States that generate localized, temporary disruptive effects on critical infrastructure — such as disrupting an electrical distribution network for at least a few hours.”

In other words, in 2019, Director Wray implied that the Russians had planted malware in the grid and could cause outages anytime they want to. Yet it seems neither the FBI nor anyone else in the federal government ever investigated these statements, since there were never any reports or briefings (classified or unclassified) to the power sector of any kind (whereas after the first Ukraine grid attack in 2015, there were classified and unclassified briefings across the country, as well as some very good reports).

Which leads me to believe that, if the malware was there in January 2019 (and the former Deputy Director of the NSA told a similar story in May 2019, although with much larger numbers), it’s still there now. Why would the Russians want to knock themselves out trying to break through the firewalls of large utilities, if they can just activate their malware and cause some big outages?

And Director Wray can’t blame the lack of an investigation on his predecessor!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Tuesday, March 1, 2022

The myth of “deterrence”


Last Friday, Politico ran a well-researched article about the Russian threat to the US grid. I was quoted saying that it would be extremely difficult for the Russians to cause a widespread grid outage, in large part because of NERC’s standards. And here I meant not just the NERC CIP cybersecurity standards, but the other NERC standards that focus on measures needed to keep the grid reliable. Events like the ones that led to the 2003 Northeast Blackout, whether or not induced by a cyberattack, simply couldn’t cause anything like that blackout if they occurred today.

I also pointed the reporter to the 2019 Worldwide Threat Assessment, and made the same point to her that I made in this recent post: that if the US is really worried about Putin attacking our grid, it would be a good idea to thoroughly investigate these statements. If it turns out they’re completely wrong, then great…let’s hear about it. But if it turns out they’re right and there is Russian malware planted in grid control centers, we need to find it and remove it. (Control centers would be the best points from which to cause outages. Attacking a substation – even taking one out completely – is highly unlikely to cause any outage at all, and certainly not a cascading one. The 2013 Metcalf substation attack demonstrated this.)

But I was concerned about the article’s discussion of possible US “deterrence” of a Russian cyberattack on the grid by threatening to respond in kind, presumably with a cyberattack of our own (the article repeats previous reports that the US has planted plenty of malware in the Russian grid. I don’t doubt that those reports are true, although I have no way of knowing). This idea is based on flawed logic:

1.      The Russians attack the US grid and cause serious outages that lead to some loss of civilian lives.

2.      We turn around and cause an even more serious outage in Russia, with presumably even more loss of civilian lives.

The fact is that this scenario will never happen. The US is never going to launch any sort of attack directly targeting civilians in another country, unless we’re actually at war with the country. No matter what Russia is doing in Ukraine, we’re simply not going to “launch a devastating cyberattack” on their grid, no matter what sort of cyberattack they launch on ours. Instead, we’ll impose other non-lethal punishment on the country – beyond what we’ve already imposed. Sure, impoverish the people. But don’t kill them.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, January 27, 2022

I hate to be a pest, but…

On Friday the 28th, I’ll be Chris Blask’s guest on his interview show at 2PM EST. Chris is quite an interesting guy, while I’m a relentless scold (see below). So it should be interesting. I don’t know what we’ll talk about, but I think it might have something to do with SBOMs. But knowing Chris, it might have something to do with boats. Or maybe both. If you can’t make it on the 28th, it will be available on YouTube next week; I’ll publish the link when I get it. 

Perhaps you’ve read something about how Vladimir Putin, my favorite dictator/kleptocrat/cybercriminal, is now threatening Ukraine with invasion – although it seems he forgot to bring more than half of the army he will need to conduct a successful invasion. On the other hand, maybe he’s emulating George W. Bush, who forced Army Chief of Staff Eric Shinseki to retire in 2003, after Shinseki predicted that “several hundred thousand troops” would be needed to pacify Iraq if we invaded. Bush invaded with about half that number.

That move didn’t work out very well, so for that reason I think the Ukrainians can sleep fairly peacefully in their beds, knowing that Putin doesn’t intend to invade with the 100,000 troops he’s arrayed now. From the ruthless giant that I (and everyone else in the US, it seems) believed Russia to be up until the Soviet Union fell, Russia has now become The Mouse that Roared. Plus, he’s made it clear that he won’t miss the opening of the Winter Olympics in Beijing in two weeks – hardly a sign that the tanks will be rolling anytime soon.

But just because he won’t invade doesn’t mean that Putin won’t cause a lot of trouble for Europe and the US, using his favorite “hybrid warfare” tactic: hard-hitting cyberattacks, with the power grid being the favorite target. So it might be expected that he’ll turn his attention back to the grid he loves to attack over all others – yes, even over Ukraine’s: that’s the US grid.

Fortunately for Uncle Vlad, he’s been diligently seeding the US grid with the malware he knows will come in handy on a rainy day – and that day may well be coming very soon. How do I know he’s planted this malware? Consider the people who have been saying that:

1.      The directors of the FBI and CIA, in their Worldwide Threat Assessment in January 2019.

2.      Vikram Thakur of Symantec, in the Wall Street Journal in January 2019.

3.      The former deputy director of the NSA, in May 2019.

4.      The WSJ in November 2019.

With all these people waving a red flag, what has been done to investigate these reports of the Russians planting malware in our grid (and likely in control centers, since they were said to be in a position to cause outages)? After all, when the Russians attacked Ukraine’s grid in 2015 and 2016, US investigators were as thick as flies over there – and they came back and gave a whole series of classified and unclassified briefings in cities across the US. Wouldn’t you expect that there would have been a similar investigation here, along with briefings for utilities, to tell them how to remove the malware? After all, isn’t the US grid much more important to us than Ukraine’s?

One would think so. But nothing ever happened. No briefings, classified or unclassified. No high level reports. No red alerts to the industry. No Facebook posts. No ads on milk cartons. Nothing.

So I have to assume either that all of the above people are bald-faced liars, or the Russian malware is still sitting in those control centers, waiting for the Dark Lord in his Dark Tower in Moscow to raise his hand…

Have a good night! And make sure your flashlight has batteries.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, July 8, 2021

Was Kaseya a supply chain attack? Definitely!

I started my previous post with this sentence: “With the Kaseya attacks, we have another blockbuster supply chain attack like SolarWinds.” However, I pointed out that I would discuss the attack in a subsequent post. Here it is.

After I wrote the previous post, I began to question whether this really was a supply chain attack. It certainly wasn’t, if you take the view (which I took until a day or two ago) that a supply chain attack on software had to be the result of a deliberate insertion of malware or a backdoor into a software product, which is of course exactly what happened with SolarWinds.

The fact that the Russian attackers (this time part of the Russian state, not the fast-growing Russian hacking industry, although it’s in fact very hard to tell the difference between the two) were able to plant the malware in the SolarWinds Orion builds means there was some deficiency on the part of SolarWinds that let them do that. And if the supplier might have prevented the attack through their actions (even though it might have been hard to do), that’s a supply chain attack.

By this view, if an attacker simply takes advantage of a vulnerability in a software product after it is installed, that isn’t a supply chain attack – it’s simply a garden-variety attack on software. Those attacks happen all the time. If the supplier has good vulnerability management and patching policies, they can’t prevent new vulnerabilities from emerging – only patch them quickly when they emerge or take other mitigation measures if they can’t be patched quickly. And they can make sure their developers understand secure software development principles, so that new vulnerabilities don’t spring up more than they need to (there’s no way to write software that’s guaranteed never to develop vulnerabilities, as researchers are continually discovering new ways in which seemingly innocuous lines of code actually constitute a vulnerability).

Then why do I say the Kaseya attack was a supply chain attack? It’s because the vulnerability was a zero-day, and the attackers may have learned of it through eavesdropping on Kaseya’s communications with the Dutch firm that discovered the vulnerability and notified them of it. But, if it’s not the case that their communications were breached (and this is just speculation in something I read), how could Kaseya possibly be responsible for the fact that they were subject to a zero-day vulnerability?

And here’s where it gets subtle: There are ways that a software supplier can learn of zero-day vulnerabilities, including maintaining good relationships with the security researcher (i.e. white hat) community and offering bug-bounty programs. Moreover, they can move very quickly to patch any zero-day that they learn about, vs. following the natural inclination to think “This isn’t publicly known yet, so we have at least a little time to deal with this.”

Did Kaseya have any of these policies in place? I don’t know about the relationships with security researchers or bug bounty programs, but I do know that they hadn’t been able to produce a patch for the vulnerability (and still may not have, according to the report I read in the Wall Street Journal today), despite being told about the vulnerability at least a few days before the successful attack. That’s why I say the Kaseya attack was a supply chain attack.

However, there’s another “level” to this attack. The reason that so many organizations (1,500, by the last estimate I read) were compromised by ransomware was because at least some of Kaseya’s own customers were MSPs. The attackers were able to compromise an MSP’s customers because they had compromised the MSP itself. So this was a true two-level supply chain attack, the first I’ve heard of. What’s next?

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, July 5, 2021

Russia has become a pirate state. Let’s treat it like one.

With the Kaseya attacks, we have another blockbuster supply chain attack like SolarWinds (the two best articles I’ve read about it so far are here and here). However, there’s one big “improvement” in this attack. It wasn’t conducted primarily for espionage purposes, like SolarWinds, but rather for good old-fashioned financial gain. In fact, the Kaseya attack combined the two biggest cybersecurity threats today: supply chain attacks and ransomware.

I will have a lot to say about the attack itself in a coming post, but now I want to describe what went through my mind when I first read about the Kaseya attack on Saturday:

1.      Great, now we have supply chain ransomware attacks! That means we have to beef up our defenses for both supply chain and ransomware attacks even more than we’re already beefing them up – after SolarWinds and Colonial Pipeline. Essentially, the Kaseya attack is a SolarWinds-style proliferation of Colonial Pipeline attacks.

2.      Kaseya said that “only” 50-60 of their customers had been affected, but some of them were MSPs – and it seems a lot of the MSPs’ customers were affected as well. So this attack was even more efficient than SolarWinds, which wasn’t a “two-tier” attack like this one. Of course, this is a great force multiplier for supply chain attacks. Each tier you add can result in an exponential increase in the number of victims. And when you’re talking about ransomware, you’re probably talking about some pretty big money, even with “just” two tiers, as in Kaseya. Who says Russia isn’t making technological progress? We’ll probably have 3- or 4-tier attacks in a year or so.

3.      Of course, we all know that supply chain and ransomware attacks aren’t a problem that can be “solved” – only made somewhat less bad than they are. So am I expecting there will be a lot of improvement, now that we know how serious the threat is? This may shock you, but…No.

4.      However, there’s one common trait running through the worst of the recent attacks, including Kaseya, Colonial Pipeline (which wasn’t technically a supply chain attack), JBS (also not technically a supply chain attack), and SolarWinds: They all originated in Russia. SolarWinds was a government job, but the other three seem to be attributable to the Russian ransomware-as-a-service gang REvil.
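The force-multiplier arithmetic in point 2 can be sketched with some back-of-the-envelope Python. All the per-tier fan-out numbers below are my own illustrative guesses, not reported figures; only the rough total of 1,500 victims comes from published estimates:

```python
# Back-of-the-envelope arithmetic for multi-tier supply chain attacks.
# All fan-out numbers are illustrative assumptions, not reported data.

def total_victims(fanout, tiers):
    # One compromised supplier; each added tier multiplies victims
    # by that tier's fan-out -- geometric growth.
    return fanout ** tiers

# A uniform fan-out of 50 per tier grows fast:
for tiers in (1, 2, 3):
    print(tiers, total_victims(50, tiers))  # 1 50 / 2 2500 / 3 125000

# A rough Kaseya-style two-tier scenario: ~55 direct customers, of which
# (hypothetically) 30 are MSPs serving ~50 customers each.
direct = 55
msp_count, customers_per_msp = 30, 50
print(direct + msp_count * customers_per_msp)  # 1555
```

Even with invented numbers, the shape of the result lands in the ballpark of the roughly 1,500 compromised organizations that were reported, which is the whole worry about adding tiers.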

My conclusion on Saturday: The problem of Russian cyberattacks is mushrooming. I thought the fact that – according to the FBI and CIA - Russia has planted malware in the US power grid and can cause outages whenever it wants was bad and would prompt some strong response (or at least an investigation, for God’s sake). Then I thought SolarWinds would prompt a strong response. There was a response many months later, but it obviously wasn’t strong enough.

Recently, Biden warned Putin in Geneva that he had to root out REvil. I’m sure Putin nodded and agreed with Biden that he’d do everything he could to discover and punish such evildoers. But it’s well known that it’s almost impossible to tell where the Russian cybercriminals end and the Russian security services begin, and vice versa. Plus, the criminal gangs have provided Putin immense personal help in amassing and protecting the maybe $50 billion he’s managed to scrape together from his modest government salary (I hear he clips newspaper coupons all the time). Expecting Putin to crack down on REvil is about like expecting Donald Trump to give up golf – it just ain’t gonna happen.

Of course, Putin disclaims any responsibility for what private citizens may do; after all, he’s just president, not king. If he can’t find the REvil people, that’s unfortunate. However, Putin seems to do a great job of rooting out evildoers when the “evil” they’re doing is speaking the truth about what’s going on in Russia today. Just ask Alexei Navalny, if you can talk to him when they’re not torturing him.

There’s a good historical precedent for taking strong action against a pirate nation. In the early 1800’s, the US was subject to “ransomware” attacks from the Barbary states of North Africa, whose pirates were attacking US ships and holding their crews for large sums (President Jefferson refused to pay, perhaps because he didn’t have easy access to bitcoin). We fought two wars with them and beat them. The attacks ended.

Am I suggesting that we go to war with Russia over this? No. How about a devastating cyberattack on them, say bringing down their power grid? Again, no. Any attack like that could lead to war, and in any case, we’re not going to conduct an attack that could kill civilians (which shows how ridiculous the idea is that we’re somehow protected against Putin taking down our grid by the fact that we could take down Russia’s grid. We’ll never retaliate in kind for a kinetic cyberattack).

There are lots of things we could do to punish Russia for these attacks. One would be to finally take a step that was talked about before the SolarWinds sanctions in April: Prohibit US citizens and financial institutions from holding any Russian debt, not just from buying newly-issued debt, as was required in April. The April prohibition is ridiculously easy to circumvent. We now need to do something that’s really going to get Putin’s attention.

There’s a lot more that could be done. Perhaps it’s time to freeze all Russian assets in the US or prohibit any financial transactions with Russian citizens or businesses? Or take some sort of action to limit Russia’s internet connections with the rest of the world (although I’m having a hard time thinking of something that couldn’t be easily bypassed)? Of course, these are drastic measures, and will hurt both American and especially Russian citizens. Regarding the former, I agree it’s unfair to them, but it’s also unfair that American companies are paying big money to the Russian ransomware gangs. Once Uncle Vlad takes serious action against those gangs (and agrees to end his own security services’ cyberattacks), we can think about lifting the sanctions.

What’s certain is that these actions will hurt ordinary Russians a lot. That’s unfortunate, but believe it or not, Putin is only in power because he keeps winning elections. Sure, they’re rigged by the fact that he makes certain to keep anyone who might be a serious threat – like Navalny – from running against him. But he does – or at least did before Covid-19 – enjoy a lot of support from the nationalists who like to see him push around the US and Europe (to say nothing of Ukraine and Georgia).

These people need to be made to understand that inflicting suffering on another nation can go both ways. So maybe they’ll think twice before they go into the voting booth next time. Even better, they’ll make it clear that they’re only going to suffer so much in order to see Putin stay in power. It’s time for him to make plans for his exit. And if he doesn’t want to leave, he’ll need to take the steps that are required for Russia to be treated like something other than the pirate state that it is.

And while I’m on the subject of drastic actions, what about the actions Russia took that resulted in a civilian airliner being shot down – by a Russian proxy army – over the Ukraine in 2014? Russia has never been held accountable for that, or paid – as far as I know – a dime to any of the victims’ families. This is despite the fact that the Dutch government (the flight was from Amsterdam to Kuala Lumpur, Malaysia) found in 2018 that the Russians were responsible and is now supposedly pursuing “legal actions”. Those are obviously going nowhere fast.

I said after the plane was shot down – when there was lots of photographic and voice recording evidence that this was Russia’s fault, and a Russian MP had already confirmed that it was – that Russian planes should be barred from all airspace worldwide until the Russian government has paid a fair amount to every victim’s family, and until all costs to Malaysia Airlines, the Dutch and Ukrainian governments, and other parties have been paid in full. Let’s do that now, too. My guess is this might speed the “legal process” up a bit.

Let’s stop pretending that pirates are entitled to some sort of due process, or “fair trial”. If they were interested in fairness, they wouldn’t be treating the rest of the world like they are.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Saturday, June 5, 2021

Software is developed everywhere. It's developed by everybody.


Many people – including the ones who wrote the recent Executive Order – think that an important component of software supply chain risk is due to provenance, meaning where the software came from. Of course, the idea behind that belief is certainly understandable: there has been a lot of concern about the idea of malicious actors in Russia, China, North Korea, Iran, Cuba, Venezuela – you name it – planting a backdoor or logic bomb in software used by government or private organizations in the US, causing havoc on the scale of the SolarWinds attacks.

This is why many people think it is important that critical infrastructure and government organizations in the US should inventory all of the software they use, as well as all of the components of the software they use (of course, you need SBOMs to do that!), and identify which of these originate in Bad Guy countries – or which might perhaps have been developed by an organization that is controlled by or under the influence of a Bad Guy country. Then they should at least consider removing any software they identify from their environment, or at least they should make sure they don’t buy any more of this Bad Guy software.
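For what it’s worth, the mechanics of that kind of provenance screen are trivial once you have the component data. Here’s a minimal sketch in Python; the SBOM structure and the supplier_country field are my illustrative assumptions, since real SBOM formats like SPDX and CycloneDX record supplier information differently (and often don’t record a country at all):

```python
# Hypothetical provenance screen over a simplified SBOM.
# The dict layout and "supplier_country" field are illustrative
# assumptions, not a real SBOM schema.

FLAGGED_COUNTRIES = {"RU", "KP", "IR"}  # example watch list

def flag_components(sbom, flagged=FLAGGED_COUNTRIES):
    """Return names of components whose declared supplier country is flagged."""
    hits = []
    for component in sbom.get("components", []):
        if component.get("supplier_country") in flagged:
            hits.append(component["name"])
    return hits

sbom = {
    "product": "example-app",
    "components": [
        {"name": "libfoo", "supplier_country": "US"},
        {"name": "libbar", "supplier_country": "RU"},
    ],
}

print(flag_components(sbom))  # ['libbar']
```

Of course, as the rest of this post argues, the hard part isn’t the lookup – it’s that “where the software came from” is rarely a single meaningful country in the first place.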

There’s only one problem with this way of thinking: with one exception, I know of no instance in which a Bad Guy nation-state has actually carried out a software supply chain attack on US interests that could have been short-circuited by conducting such a provenance analysis. That one exception is of course Kaspersky, which at the time of the attack was located in Russia and whose founder had links to the Russian security services. Kaspersky was alleged to be behind the attack on an NSA employee, who had stolen some of the NSA’s most potent malware - including the malware behind NotPetya - and stored it on his home computer, which ran Kaspersky’s antivirus software. Kaspersky swore up and down that this wasn’t the case, but I’m certainly willing to stipulate that the Russians had planted a backdoor in the Kaspersky software, which let them penetrate the NSA employee’s computer and exfiltrate the NSA’s malware weapons.

Of course, I’m sure there have been plenty of cases where a software company in a Bad Guy country has sold software that contains vulnerabilities to US customers. But every software company everywhere does that – there’s simply no way to ship vulnerability-free software. In fact, given that I’m sure most of the software used in the US is developed by American companies, it is quite likely that the biggest source of vulnerability-laden software to American organizations is…American software companies.

What about SolarWinds, you ask? They certainly shipped vulnerability-laden software to many companies and government organizations (18,000, in case you’re keeping score at home), in the US and abroad. And those vulnerabilities were planted by a Russian team of about 1,000 people (using Microsoft’s estimate) that deployed what may be the most sophisticated piece of malware ever. Surely this counts as a state-sponsored software supply chain attack that could have been prevented by provenance controls!

The problem with that narrative is that, the last time I checked, SolarWinds is a US company. Aha! But what about their suppliers? They used contract developers in Eastern Europe, for God’s sake. Surely they had a hand in this nefarious deed? That’s a good question, and one that I raised in one of my many posts after the SolarWinds attacks were discovered.

But as the article linked in the previous paragraph describes, those developers didn’t have their hands anywhere near this attack. It was carried out by servers located in the US that were controlled by the Russians, and the attack was on the SolarWinds Orion build environment, which was physically located in the US. The malware-laden Orion updates all were digitally signed by SolarWinds. How could provenance controls have prevented this attack?

There was another important Bad Guy connection that I read about in the New York Times in early January: SolarWinds was a big user – as are many other large US companies – of a development tool called JetBrains, which is developed and sold by a company headquartered in – get this – Moscow. In this post, I wondered (as the Times had) whether JetBrains might have been the vehicle for the attack on the SolarWinds build environment, although I stopped short of saying that was likely.

However, 12 days later – without any additional evidence – I said in another post “Of course, it was recently learned that the Russians did penetrate a very widely-used development tool called JetBrains. And one of JetBrains’ customers was in fact SolarWinds.” Three days later I received an email from a very polite public relations person for JetBrains in Moscow, asking me to please retract that statement, since JetBrains had just recently put out their own statement firmly denying any role in the attack. Of course I did that and apologized to JetBrains in a new post that day.

In that post, I pointed to a lack of care as the reason for my misstatement. However, four days – and more introspection – later, I confessed to the real reason: I was prejudiced against Russian companies because dontcha know they must all be captives of the Russian state, just like I believed Kaspersky was. And if I’d been in charge of those things, I might have banned all software sold by Russian companies from installation on important networks – which just goes to show that you shouldn’t put me in charge of those things.

The fact is that, if we start banning software from particular countries just because we think the governments of those countries are out to damage the US (and there isn’t a lot of doubt in my mind that the Russian government fits that description), we’ll end up damaging our own companies. JetBrains didn’t get its huge worldwide market share because it’s a so-so product; by all accounts (and I’m certainly not competent to judge this), it’s a very strategic tool for developers like Oracle and Microsoft (as well as SolarWinds, to be sure). Were we to prohibit US companies from buying JetBrains, we would be putting them at a competitive disadvantage vs. companies headquartered in other countries.

But most importantly, the whole idea that software “comes from” a particular country is now obsolete. Nowadays, software – other than perhaps software developed for DoD and the 3-letter agencies - is developed by teams of people from all over the world who collaborate online to develop the product. Sure, they might mostly (or even all) be employees of a company located in Country X, but a large percentage of them will almost undoubtedly not be citizens of X and probably not be located there, either.

Matt Wyckhouse of Finite State, in a presentation last fall, pointed out that Siemens – a huge German company that does a lot of business in the US, especially with critical infrastructure organizations – has 21 R&D hubs in China and employs about 5,000 Chinese staff there. Does this mean that Siemens software poses enough risk that you should consider removing it (along with whatever Siemens hardware it supports, of course) from your environment? After all, there’s a good chance that at least a portion of any Siemens software that you buy was developed in China.

If you’re DoD, the answer might be yes – i.e. DoD might decide (or have decided) not to accept that risk, however small it is – and to pay the undoubtedly high cost of finding and installing equally functional alternatives to whatever Siemens software they’re not buying. For almost any other company, the answer IMHO should be no. There’s simply no justification for subjecting your organization to the time and expense required to find an alternative to Siemens, given the very low provenance risk posed by their software.

So we all need to stop thinking of software supply chain risk as being somehow tied to where the software was developed, or the nationality of the people who developed it. If you’ll promise not to do that anymore, I’ll do the same. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Tuesday, March 9, 2021

The new administration gets it: Software is where the risks are!

Christian Vasquez of E&E News published on Monday a very good article that reviewed Anne Neuberger’s keynote address for the SANS (virtual) ICS Summit last Friday. In it, she said, according to the article, that “Biden is planning to unveil an executive action that aims at ‘building standards for software, particularly software that's used in critical areas.’”

I was quite happy when I read that (I hadn’t attended her keynote for the Summit on Friday). It’s been clear to me for quite a while that the real supply chain security risks – for any industry – are in software (in fact, Robert M. Lee said exactly this in his keynote to the Summit on Thursday). Indeed, I know of no true hardware supply chain attack ever – that is, one in which the attackers altered the microcode in a processor. (And I’m not talking about an attack on firmware, as in the Mirai botnet. Firmware is just software installed on a different medium, and mitigating firmware risk is a lot like mitigating “regular” software risk.)

I’m certainly not saying that there could never be a hardware supply chain attack, but we need to look at the record. On the software side, you have SolarWinds, Equifax, Delta Airlines and other very successful software supply chain attacks. And on the hardware side you have…rumors that the NSA might be altering microcode in network devices being sent to some foreign countries. On which side should you allocate your scarce time and money resources?

Ms. Neuberger mentioned two different concerns that the new order will address. The first is “in response to the massive Russian-linked espionage campaign that has affected nine agencies and around 100 organizations by exploiting a commonly used software product from Austin-based SolarWinds.” Unfortunately, that’s all the information she gave on this topic.

Here's what I’m hoping the EO will do regarding software supply chain security: Scrutinize the controls the supplier has on their software development environment. The appalling thing about SolarWinds is that the Russians penetrated the SolarWinds development network (they were presumably already in the IT network) in September 2019. As documented in a great article by Crowdstrike, they had free rein in that network, and were able to first plant “proof of concept” code (i.e. a non-malicious version of Sunburst) in two or three updates for the flagship Orion product.

Having demonstrated that they could do that undetected, they went on to build Sunspot, perhaps the most sophisticated malware since Stuxnet – or even more sophisticated than the latter – and installed it in the build network to guide the rest of the effort. In March 2020, Sunspot started planting Sunburst in Orion builds. This continued until June, by which time Sunburst was in about seven Orion updates that had been provided to customers. In June, the Russians decided the fun was over, covered their traces and pulled out.

But they didn’t do this because they’d been detected or feared being detected – they did it because they already had been so successful that they knew they weren’t ever going to be able to exploit more than a fraction of the 18,000 organizations that had downloaded a version of Orion containing Sunburst (it seems they actually used Sunburst to attack about 100 organizations). And indeed, SolarWinds never detected them. The world might never have known about Sunburst if some observant person at FireEye hadn’t noticed that an unauthorized device had been added to an account.

Clearly, SolarWinds dropped the ball on this, and there’s no reason to believe they won’t continue to drop the ball from now on. They – and a number of their peers, including the cloud providers – need to be regulated just like any other critical infrastructure. Because that’s what they are.

The other concern Ms. Neuberger mentioned was the need for monitoring of OT networks. Here, the example was the Florida water system that was compromised. The compromise wasn’t discovered by any sort of monitoring system, but only because an operator noticed his cursor was moving on the screen, even though his mouse wasn’t. This is another problem, but it’s probably much easier to solve than the first one.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, February 4, 2021

Don’t worry, Vlad. Your record’s safe.


There have been a number of news stories about the fact that the Chinese were able to exploit a vulnerability in SolarWinds Orion software to attack the US Dept. of Agriculture’s National Finance Center. Most of the articles have made it sound like the attack was déjà vu all over again: another supply chain attack on SolarWinds!

However, only a few articles (like this one) have gotten it right: This wasn’t a supply chain attack. This was just your garden-variety attack on software, in which some nefarious party exploits a vulnerability in software (and there are lots of vulnerabilities out there!) to penetrate an organization. The attack is on software that is already installed.

A supply chain attack on software usually starts long before the software is installed. In fact, in the case of the Russian attack on SolarWinds, it started about 15 months before the attack was discovered, while the software was being developed. The Russians planted the SUNBURST malware in updates to Orion, using the amazing SUNSPOT malware, which I described recently. This was the first stage of the attack.

The SUNBURST malware then opened up a backdoor when the tainted update was installed on a customer’s network. It beaconed to the Russians that it was active, at which point they were able to exploit the backdoor to perform their dirty work. This was the second stage of the attack.

The big difference between the Russian and the Chinese attacks was that the latter was essentially equivalent to just the second stage. Both attacks exploited a vulnerability, but the vulnerability that the Chinese exploited existed in Orion before they attacked it. The vulnerability the Russians exploited was one they had placed there themselves. That’s why the vulnerability was called a backdoor.

If you read my post on SUNSPOT, you know that it was an exquisitely-designed piece of malware that rivaled Stuxnet. The Russians conducted a careful campaign that started with a proof of concept that placed a benign piece of code in a few Orion updates, just to make sure that could be done. Then they developed SUNSPOT and deployed it a few months later. After that, SUNSPOT had to run on its own for months inside the SolarWinds development environment, without any direct Russian intervention. Yet it was completely successful in placing SUNBURST in about seven or eight Orion updates.

Compared to the Russian campaign, the Chinese attack was a skirmish. The Russians penetrated maybe a few hundred targets (some very high value). They could presumably have penetrated another 17,750, since about 18,000 customers downloaded the tainted Orion updates, but they just didn’t have the time. Meanwhile, the Chinese penetrated one Orion customer: an agency inside USDA. Do you see the immense power of a supply chain attack, vs. a garden-variety software attack?

I’ve said at least twice (here and here) that Uncle Vlad is the king of supply chain attacks. The Chinese will never displace him!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, January 31, 2021

We need to regulate these guys – SolarWinds edition


There’s lots of good information coming out about the SolarWinds attacks, as well as others that were very likely carried out by the Unnamed-threat-actor-that-sure-sounds-a-lot-like-it’s-Russia, whoever that may be. All of this, along with information about other attacks (not all by Russia) that was made public long ago, points to the need to start regulating a bunch of tech companies that I hadn’t thought of as being part of the country’s critical infrastructure – until SolarWinds caused the scales to fall from my eyes. I’ll deal with SolarWinds in this post, and the other companies in a post in the near future (although tomorrow's topic is going to be different).

There’s been a lot of discussion of the SUNBURST malware, which was spread by SolarWinds updates and provided the backdoor that the Russians exploited to penetrate a lot of important networks, especially US government networks. Like most malware, SUNBURST used the “spray and pray” method of cyberattacks: a) attempt to get the malware into as many organizations as possible (which ended up being “only” 18,000, as SolarWinds said in their memorable SEC filing the day after the attack was announced); b) wait until the phone home beacons start appearing; c) choose the juiciest targets (hint: the National Nuclear Security Administration is juicier than Joe’s Heating and Cooling Supply); and d) get to work reaping the fruits of their labors – which turned out to be abundant in this case, of course.

In the SolarWinds attacks, the benefits the Russians were looking for seem to have been limited to national security espionage. The Russians could easily have followed their normal pattern of creating as much chaos and destruction as possible – as in the 2016 US elections and especially NotPetya, which was a supply chain attack aimed at all businesses in the Ukraine. NotPetya had no goal other than bricking as many machines as possible as quickly as possible. It succeeded wildly not only in the Ukraine but even more so in other countries like the US and Denmark, where it almost sank (pun intended) Maersk, the largest ocean shipping company in the world.[i]

Fortunately, the Russians – perhaps out of some sense of pity for the American people, or more likely because they suspected a new sheriff might be coming to town, who would be much less forgiving of little foibles like SolarWinds or paying a bounty to kill American soldiers in Afghanistan, than was the previous sheriff – decided not to take the destruction route. Thank God for small favors, as my former boss used to say.

But there has been very little discussion of the SUNSPOT malware, which the Russians wrote to attack only one company: SolarWinds. And this malware wasn’t designed to do anything like what SUNBURST did; its purpose wasn’t to open a backdoor or exfiltrate lots of data from SolarWinds. It was designed to penetrate SolarWinds’ software build process and plant SUNBURST in software updates, so that SolarWinds would then kindly distribute them to their customers. And that attack succeeded brilliantly.

In fact, I nominate SUNSPOT as the second-most-sophisticated piece of malware ever written – after Stuxnet, which was created by the US and Israel to attack Iran’s uranium enrichment program, and also succeeded brilliantly (and it might even be ahead of Stuxnet. I’m certainly not in a position to judge between the two).

You can read the full story of SUNSPOT in this article by CrowdStrike, which worked with SolarWinds to analyze the malware and the attack. But here are some highlights, which relate to the subject of this post:

1.     The Russians first penetrated the SolarWinds software development environment in September 2019 – i.e. 15 months before SolarWinds or anyone else knew there was a problem.

2.      Like any good organization about to make a risky investment, the Russians first executed a test of a “beta” version of SUNSPOT. They wanted to see if they could in principle place a piece of software code in the SolarWinds Orion platform, so that it would be included in subsequent updates shipped to SolarWinds customers. They did this using a completely benign piece of code and the gambit worked perfectly. Just like later on with SUNBURST, SolarWinds had no clue this was happening. In fact, as with SUNBURST, this test code was inserted in multiple releases of the Orion platform, not just one.

3.     With the concept proven, the Russians started to build the final version of SUNSPOT, probably testing it in their own environment. About four months later in February, the Russians compiled and deployed this malware in the SolarWinds software build environment.

4.    The most important feature of SUNSPOT was that, just like Stuxnet, it was built assuming there was no possibility of any real-time intervention by the Russians in the software build process. The software had to act completely autonomously, yet it had to quickly adapt to any change in the environment. This was even harder than it sounds: even though SolarWinds wasn’t specifically looking for signs of an intruder inside the build process, the slightest change in the environment would have set off alarms just in the normal build process – for example, if the hash value of one of the Russians’ files (which were always given names that matched files used in the build process itself) didn’t exactly match the value for the same file the previous day. That would likely have led to SUNSPOT being discovered and the end of this whole affair (and we’d all have written about how astute SolarWinds was in heading off at the pass what could have been a series of devastating attacks on important government agencies).

5.     I won’t go into all of the brilliant features in SUNSPOT, but I’ll mention one: SUNSPOT was designed to surmount what may have been the biggest problem the Russians faced at SolarWinds: The software build process for the next Orion release went on for months, but neither the Russians nor SolarWinds had any idea exactly how long it would take. Moreover, the process was shut down and restarted every morning, and there was no way to know in the morning whether today might be the day that everything in the new build worked perfectly. If that happened, the SolarWinds engineers would probably decide to release the code as it stood at the end of that day. This meant that SUNSPOT needed to wipe away all traces of its activity every evening, once it became clear that the build process was about to shut down for the night without having produced the final product, since the slightest discrepancy – for example in the size of a file – would have alerted SolarWinds that something was wrong.

6.     Yet the next morning, SUNSPOT had to recreate everything that had been wiped away the previous evening, to prepare for the possibility that today would be the day that the code was declared ready to ship. This whole process is described in great detail in the CrowdStrike article. The important point is that all of this had to be done by SUNSPOT alone, without any prompting or guidance from the Russians - who had no real time visibility at all into the software build process. Just as the Americans and Israelis who created Stuxnet couldn't control it in any way once it was inside the Natanz uranium enrichment plant, which was completely air-gapped from the rest of the world (well, not quite. It turns out contractors were allowed to bring USB sticks in to aid their work, and that's how Stuxnet got in in the first place. It was also a classic supply chain attack, although different from SolarWinds).

7.      Despite these challenges, when SUNSPOT was deployed last February it worked perfectly. The SUNBURST malware was planted in about seven releases of the Orion platform, before the Russians decided to remove it from the build environment and cover all of their traces last June. I guess they figured that, with 18,000 infected targets, they just didn’t have enough good cyber resources available to exploit all of them, let alone any new ones that might be added. An embarrassment of riches, it seems.
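Incidentally, the hash mismatches that would have tripped the normal build process point to the kind of cheap integrity control any build pipeline can run on its own behalf. Here is a minimal sketch of such a baseline check, in Python for illustration; the directory layout and daily-snapshot workflow are my assumptions, not anything SolarWinds actually ran:

```python
import hashlib
import os

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    """Map every file under root (relative path) to its hash."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            hashes[os.path.relpath(path, root)] = hash_file(path)
    return hashes

def diff(baseline, current):
    """Return (added, removed, changed) file sets vs. the baseline."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    changed = {p for p in set(baseline) & set(current)
               if baseline[p] != current[p]}
    return added, removed, changed
```

Diffing today’s snapshot of the build environment against yesterday’s would surface exactly the additions, deletions and modifications that SUNSPOT worked so hard to erase every evening.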

What’s the lesson of all this? SolarWinds really dropped the ball. Sure, this was an unprecedented attack that nobody saw coming. But it’s impossible to believe there was no way they could have detected the Russians in their network during the 15 months between when they first entered and when SolarWinds learned about it. Unfortunately, SolarWinds only learned they had been penetrated when the rest of the world learned it: after FireEye discovered they’d been compromised through SolarWinds last month.

It’s also impossible to believe there’s no way SolarWinds could have learned that their development process was compromised, when there were about 5-6 months during which one or the other of the two pieces of Russian code (the benign test code and SUNBURST) was being inserted into every Orion update that left their shop.

Do I know how SolarWinds could have detected – and presumably stopped, since that would have been very easy had their presence been known – the Russians? No I don’t. But there’s one thing that I do know (although only with hindsight, I’ll readily admit): SolarWinds, in accepting license fees from huge companies and important government agencies like DoE, DHS and NSA, should have been paying a lot more attention to cybersecurity than they did. They were clearly fat, dumb and happy and on top of the world – until they weren’t.

Companies like SolarWinds are really critical infrastructure organizations (again, I say this with hindsight. I wouldn’t have said it at all two months ago). The entire public has a stake in their safe, successful operation; that stake goes well beyond the license fees that SolarWinds earns. SolarWinds can’t hide behind the “Well, if you don’t like our product, nobody’s forcing you to buy it” rationale. Just as with public utilities, the public has a big stake in SolarWinds’ safety and stability. The Northeast Blackout of 2003 showed that protection of electric reliability is too important to leave up to the utilities on their own, and the SolarWinds attacks show that protection of network integrity in federal agencies is too important to leave entirely up to providers like SolarWinds. There needs to be regulation to make sure SolarWinds and their ilk do all they can to protect that network integrity, just like there are regulations to make sure electric utilities do all they can to protect electric reliability (including cybersecurity).

As an economics major at the University of Chicago a fairly long (I’ll admit it!) time ago, I was fortunate enough to take two courses on microeconomics[ii] with Dr. Milton Friedman. While there was never a doubt that he was in favor of capitalism and free markets, he always made it clear that in some cases purely market mechanisms can’t adequately protect consumers against common threats. The main case in point at the time was air and water pollution, since the EPA had just been created by – believe it or not – Richard Nixon. Friedman pointed out that there was no way that individual consumers, or even individual companies, could use market power to protect themselves against most pollution threats; there needed to be regulation.

The same reasoning applies today.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] BTW, NotPetya caused $10 billion in losses, but have the Russians ever been approached about paying a penny of that in damages to any of their victims? Certainly not that I know of. This might be surprising, unless you consider the case of flight MH17. Not only have the Russians not paid a penny for that either, but I know of no concerted effort by any government to force them to pay anything at all. Once it became clear the Russians were responsible for this, I would have banned Russian aircraft from all international airspace until the Russian government had paid all just claims – both from individuals and governments – arising from this attack, as well as billions in punitive damages to some agreed-upon international organization. Of course, that might have meant Putin couldn’t build his $1.5 billion palace with an indoor hockey rink, which Mr. Navalny has graciously revealed to the Russian people. But with an estimated net worth of at least $25 billion (and maybe much more), Putin could have paid the entire damages out of his own pocket and still had money on hand to assure a comfortable retirement – although in a just world his retirement would be spent in a government-paid cell deep in some prison somewhere.

[ii] Which at the time was called “price theory” at U of C. What other schools called macroeconomics was “monetary theory”. I don’t know what terms U of C uses now.

Monday, January 25, 2021

Russia on my mind


Last Thursday, I had to issue one of the first apologies I’ve made in the eight years I’ve been writing this blog. I did that because…well, I screwed up. You can read the post, but the upshot is that I mistakenly assumed that JetBrains – a very successful software company with a big following and a great reputation – had been used to breach some of their customers, including SolarWinds.

JetBrains was founded by three Russians and still has a large presence in Russia. Without going back to read the New York Times story about them – or the post I wrote based on that story – I wrote that their software had been used to attack their customers, when in fact the story had just wondered whether that might have happened; it didn’t say it had happened[i].

Here’s a thought exercise: If JetBrains had been founded by, say, three Laotians, would I have been so quick to make that statement? I’ll admit it – I wouldn’t have. I’ve never thought of myself as being prejudiced against Russian companies, but clearly I must be. Of course, the Russian government – and various criminal groups it’s allied with – has done some pretty bad things to us and to other countries around the world, including in the cyber realm. But it’s a big leap from acknowledging that fact to saying that Russian tech companies shouldn’t be trusted, until some evidence appears that a particular Russian company is in fact not trustworthy.

I’ll be much more careful to avoid this mistake in the future. But I’m not actually writing this post to preach an uplifting moral message. Something else important happened last week that relates to this subject of unfairly targeting companies with ties to countries whose governments are untrustworthy: President Biden put the May 1 Executive Order on hold for 90 days. I sincerely hope he will send it to a well-deserved grave after that. And I also hope that DoE’s follow-on Order from December will also be sent to sleep with the fishes.

I don’t usually brag that I was right all along, but I’ll say it now: I was right all along. In a post the day after the EO was published, Kevin Perry and I made it quite clear: “…the order is a huge mistake. It will end up making the BPS much less secure, rather than the other way around.”

Four days later, when Kevin and I had time to think through the EO and what it meant, we put out another post entitled “What exactly is the goal of this Executive Order, anyway?” In that post, we pointed out that, of the 25-odd devices targeted by the EO, only three of them are operated by a microprocessor. Yet according to the EO, all of them are subject to a cyberattack. I pointed out that my $10 steam iron is just as subject to a cyberattack as those devices are – in other words, not at all (and this includes transformers, which were by far the biggest focus in press discussions of the EO, yet which are operated solely by the laws of physics – not by any microprocessor. The last time I checked, the Chinese haven’t yet figured out a way to bypass the laws of physics).

I wrote at least 8-10 other posts pointing out this and other problems with the EO. I once characterized it as “a non-solution to a non-problem”. I stand by that characterization.

However, last week I made the same mistake as the EO, although fortunately on a much smaller scale. I unconsciously assumed that any software that came from Russia is likely to carry malware or a backdoor meant to undermine American industry. The EO assumed that anything that came from, or was associated with, certain countries was by that fact alone likely to be dangerous.

Of course, the one country the EO was aimed at was China. Even though DoE later produced a list of five other countries that the EO would apply to – Iran, North Korea, Russia, Venezuela (!), and Cuba – none of those countries sells grid control systems to the US or is at all likely to do so in the foreseeable future. And, as Kevin and I pointed out in our second post linked above, the only system components that China even assembles (let alone sells) are motherboards for servers and workstations sold by Dell and HP. Since there’s no way the Chinese factories assembling those servers know whether they’re destined for an electric utility in California or a dry cleaner’s in Kansas, they simply can’t be the vector for a supply chain cyberattack on the US grid.

There’s an even bigger reason why the EO made no sense: It assumed that the Chinese government has every incentive in the world to want to launch a supply chain cyberattack that takes out a large portion of the US power grid, and that any patriotic Chinese company would be more than willing to help them accomplish this goal (or would at least not be able to resist the considerable pressure the government could bring to bear on them to cooperate).

But why would a Chinese company that has made great efforts to build up market share in the US throw all of that away by participating in a supply chain attack on the US grid? After such an attack – especially one that involves hardware (the EO mentioned nothing about software) – it would be ridiculously easy to find out what device led to the attack. It’s just about certain that any company found to have been a vector for the attack would be banned from selling anything in the US ever again, and probably in any Western country as well. In other words, it would probably be a death sentence for the company. Yet the EO assumes that any Chinese company would be easily persuaded to participate in a massive supply chain attack on the US grid.

And why would the Chinese government itself be dead set on launching a devastating supply chain attack on the US power grid? Again, unlike non-supply chain cyberattacks, where nowadays the entry point is usually a phishing email that could have come from anywhere, in a supply chain attack (again, especially a hardware one), there’s no question what government might be behind it: the government of the country of origin of the hardware, or perhaps the government of the country where the vendor is located.

Moreover, there’s almost no question that a big grid outage caused by a supply chain attack would be considered an act of war, leading to a military response. And once two nuclear powers start down the military path, even if it’s non-nuclear at first, it’s very possible that someone will make a mistake or get carried away, so that within an hour or two you’ll have lots of dead people in both countries, no matter which side ultimately declares “victory”. If you don’t believe this could happen, consider the late Vasily Arkhipov, the Soviet naval officer whose single decision during the Cuban missile crisis in 1962 probably saved the world (or at least the USSR and the US) from total destruction.

So is it really likely the Chinese government would even entertain the thought of a massive supply chain attack on the US grid? Of course not.

And suppose we were to take action like what was contemplated in the EO (and will be realized to a more limited extent if DoE’s December supply chain order doesn’t get rescinded), and ban all grid hardware and/or software that “originates” in Russia or China (or both) – or is sold by a Russian or Chinese-owned or influenced company? Who would be the real losers in that event?

Of course, one class of losers would be the Russian or Chinese companies that would lose a lot of business due to this decision, in spite of their actual innocence. But the bigger class of losers would be US-based organizations that would normally use the software or hardware that was banned. After all, JetBrains didn’t get a huge market share among software developers by being just one among many options – they did it by offering an excellent product. Preventing US software developers from buying JetBrains because the company was “Russian influenced” would be just about the same as imposing a tax on those developers, equal to the presumably substantial difference in productivity between using JetBrains and using the next best competitor (as well as the transition costs, of course, which would also be considerable).

At this point, I would normally make fun of the people who wrote the EO and even entertained the idea that it was likely that China would be so foolish as to try to launch a supply chain attack on US infrastructure – if I hadn’t done the same thing myself about a week ago, when I wrote the post that included a statement that assumed Russia would be happy to do the same thing.

So I can’t say I’m more virtuous in this regard, but I will say I hope the idea that nation states in themselves pose a serious threat of a supply chain attack – by planting malware or a backdoor in a software or hardware product destined for the US – gets buried along with the EO itself. Sure, it’s good to know where hardware products are made and who sells them (although the idea that software is “developed” anywhere in particular is just about meaningless nowadays), since there are lots of considerations besides cybersecurity for which such information is important. For example, you may be concerned about the legal environment of the country where the supplier is located, in case an issue arises that would require legal action.

But only a fool would base their judgment of a software or hardware product’s level of cybersecurity on where it came from, or on the nationality of the company that made it. Just ask the fools that wrote the EO. Or the fool that had to apologize to JetBrains.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] It doesn’t justify my mistake, but I think even writing the Times article was a mistake. After all, why even speculate about whether software from a Russian company is backdoored? Why not also speculate about whether all vodka coming from Russia is poisoned?