Monday, December 28, 2020

Where did the SolarWinds attack start?


I’ve been thinking of the SolarWinds attack as a “classic” (if anything so recent a concept can be called classic) supply chain cyberattack, i.e.

1.      SolarWinds was somehow penetrated by the Russians;

2.      The Russians worked their way into the development network (which they shouldn’t have been able to do, of course, any more than they should be able to get into your electronic security perimeter);

3.      There, they loaded their malware into a patch, which was of course immediately applied by “only” 18,000 of SolarWinds’ customers; and

4.      The rest is history.

However, I realized as I was writing yesterday’s post that there’s a step before this – or it seems to be. I came to this realization by putting together two facts:

1.      Soon after the attack became public, SolarWinds said they had been compromised through their hosted Office365 software.

2.      As I discussed in yesterday’s post and Saturday’s, articles published Thursday in the Washington Post and New York Times said that about 40 customers of an Office365 access reseller were told last week that their credentials for access to Microsoft Azure had been stolen. One of those customers was CrowdStrike: someone tried to get into Azure using CrowdStrike’s credentials, which had been stolen from the reseller.

In best Sherlock Holmes fashion, I surmised from these two facts that SolarWinds might have been one of those unlucky 40 customers of the Office365 reseller. The attacker might have succeeded in getting into their Office365 instance on Azure (the CrowdStrike attacker was only caught because they tried to access the Office365 Outlook module, which CrowdStrike didn’t use), and from there found a way to penetrate the on-prem SolarWinds network. Of course, if you’re able to root around in all of the Office documents created by a company, as well as probably their email accounts, it likely won’t take a pro too long to find a way to penetrate that company’s network as well. And these guys are pros!

If this is correct – and it’s just a surmise, of course – the SolarWinds attack didn’t start with SolarWinds being penetrated. It started with the Office365 reseller being penetrated, which means it was another two-level supply chain attack, like the one I described in my post yesterday.

So who’s to blame for all of this? Of course, SolarWinds still bears the majority of the blame, since they seem to have lacked some basic security controls that would have prevented the Russians from so easily penetrating their development network, as well as any ability to detect them once they were inside the network.

However – again, if my theory is correct – the Office365 reseller bears some blame, but I also put a good deal of blame on Microsoft. I honestly don’t know what kind of controls they put on their resellers, but clearly they aren’t enough. Here are two paragraphs that I added to yesterday’s post this morning (anyone who read the post in the email would have missed these, since the email went out last night), as well as the two paragraphs leading up to them. They give you an idea of what I’m thinking – still off the top of my head, of course – should happen:

But you can’t blame the Office365 reseller entirely for the fact that CrowdStrike and 40 of their other customers were compromised. In fact, I put most of the blame for this attack on Microsoft. What kind of vetting were they doing of these resellers? They need to make sure they follow certain basic security practices.

And more importantly, they need to make sure both their customers and their resellers understand that securing cloud-based networks isn’t the same thing as securing on-premises networks. This was, in my opinion, the big problem revealed by the Capital One breach, which turned out to be only one of close to 30 breaches of AWS customers by Paige Thompson, a former AWS employee. She had bragged online that she breached all of these customers through one particular AWS service – the metadata service – that the customers were charged with configuring, but which she said none of them understood.

I stated in a post in August 2019 that Amazon should offer free training to all customers, to make sure they understand what they need to do to secure their AWS environment. I'm told both AWS and Azure do that now, and probably did it before I wrote that post. But it's obviously not enough. I think the cloud providers should be required to identify, say, the top 20 security mistakes that customers and resellers make, and do external testing to determine whether any customer or reseller has made any of those mistakes. If they have, they should work with the customer/reseller to fix the problem – and if that party doesn't want to be helped, their access to the cloud should be discontinued.

Of course, the cloud providers also need to require resellers to follow more general security practices, like employee background checks and strong authentication. They’re presumably doing a lot of that already, but clearly what they’re doing now isn’t enough.[i] They should have the right to audit the reseller (of course, they can put that in their contract) and to exercise it whenever they suspect the reseller is falling down on security.
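To make the “top security mistakes” idea a bit more concrete: below is a minimal sketch – my own illustration, not anything Amazon, Microsoft or SolarWinds actually does – of the kind of automated check that could catch one well-known AWS mistake, namely EC2 instances that still allow the older version of the instance metadata service (IMDSv1), the service class implicated in the Capital One breach. It assumes boto3 and configured AWS credentials; the function names are mine.

```python
# A minimal sketch (an illustration, not a provider's actual tooling) of an
# automated check for one common AWS misconfiguration: EC2 instances that
# still allow IMDSv1, the older metadata-service mode that is easier to abuse
# via SSRF. Assumes boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2")


def find_instances_allowing_imdsv1():
    """Return IDs of running instances whose metadata options still permit
    IMDSv1 (HttpTokens != 'required')."""
    risky = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                opts = instance.get("MetadataOptions", {})
                if opts.get("HttpTokens") != "required":
                    risky.append(instance["InstanceId"])
    return risky


def require_imdsv2(instance_id):
    """Flip a single instance to require session tokens (IMDSv2 only)."""
    ec2.modify_instance_metadata_options(
        InstanceId=instance_id,
        HttpTokens="required",   # reject tokenless IMDSv1 requests
        HttpEndpoint="enabled",  # keep the metadata service itself available
    )


if __name__ == "__main__":
    for iid in find_instances_allowing_imdsv1():
        print(f"{iid} still allows IMDSv1 - consider require_imdsv2('{iid}')")
```

The point isn’t this particular check; it’s that checks like it can be run from outside the customer’s environment, which is what I mean by the providers doing external testing for the most common mistakes.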

Again, there needs to be regulation to oversee this. I realize now that Microsoft, Amazon, SolarWinds, Facebook, Google – and probably a number of other companies – are really operators of critical infrastructure. The entire country has a stake in that infrastructure being run securely. All of these companies have demonstrated that – while I don’t attribute bad motives to any of them – they can use a little nudge to do the right thing.

Of course, if “the right thing” were clearly evident now, and had been clearly evident for a while, we wouldn’t be in the mess we’re in today. We’ve all been looking through a glass darkly in this matter, but maybe we can do better than that soon.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] And the courts don’t necessarily seem inclined to buy arguments from the cloud providers that their customers are responsible for breaches. The WaPo article said “When hackers stole more than 100 million credit card applications last year from a major bank’s cloud, which was provided by Amazon Web Services, customers sued the bank and AWS. In September, a federal judge denied Amazon’s motion to dismiss, saying its ‘negligent conduct’ probably ‘made the attack possible.’” In fact, this bank sounds a lot like Capital One.
