The New York Times this morning ran a (mostly) excellent article titled “As Understanding of Russian Hacking Grows, So Does Alarm”. It provides important new information about how the Russians were able to carry out the SolarWinds attacks, along with one great lesson that I recommend you incorporate into your supply chain cybersecurity risk management plan. (If you don’t have one of those, you should, even if you’re not required to have one by NERC CIP-013-1. If you’re not sure where to start, wait a month or two for my upcoming book, “Supply Chain Cybersecurity for Critical Infrastructure”.)
The article starts out by saying:
On Election Day, General Paul M. Nakasone, the nation’s top
cyberwarrior, reported that the battle against Russian interference in the
presidential campaign had posted major successes and exposed the other side’s
online weapons, tools and tradecraft.
“We’ve broadened our operations and feel very good where
we’re at right now,” he told journalists.
Eight weeks later, General Nakasone and other American
officials responsible for cybersecurity are now consumed by what they missed
for at least nine months: a hacking, now believed to have affected upward of
250 federal agencies and businesses, that Russia aimed not at the election
system but at the rest of the United States government and many large American
corporations.
At this point, I need to issue a Spoiler Alert: What I call
the “military-cyber complex” doesn’t come out of this article looking too good.
It seems Uncle Vlad ran rings around them once again. And he did it because the
complex doesn’t seem to think that supply chain cybersecurity is sexy, compared
to say Project Einstein and the other Maginot Lines that they’ve stood up, at
great cost to the taxpayer. But Vlad knows only too well why supply chain security is important.
Will they now wake up to this threat? They might, but if
they just try to throw millions or billions of dollars at different fancy
technological solutions (I can see it now: a $500 million “Supply chain
security cyber operations center” which will try to monitor every order for anything
having to do with critical infrastructure, as well as orders for each of the
components of each of those systems, etc.) they’re bound to fail spectacularly,
once again.
That’s the problem. When you give someone a blank check for
security, and when you believe that the more dollars the recipient spends, the
safer you are, you’ll inevitably get a boondoggle that won’t increase
security much at all. A crafty opponent like Uncle Vlad will just re-engineer their
tactics to get around whatever obstacles you try to put in their way, using your
press releases on these wonderful projects as blueprints for what they have to
avoid.
So how did Vlad evade Project Maginot – I’m sorry, I mean
Project Einstein? Pretty simple. According to the article, the systems that
conducted the Russian attacks were located in the US. Of course, the NSA doesn’t
have any authority to surveil domestic networks,[i] so they weren’t
able to see this activity.
And frankly, would they have caught the activity even if they had been able to look? As the article points out, “‘Early warning’ sensors
placed by Cyber Command and the National Security Agency deep inside foreign
networks to detect brewing attacks clearly failed. There is also no indication
yet that any human intelligence alerted the United States to the hacking.” The article
continues,
The government’s emphasis on election defense, while
critical in 2020, may have diverted resources and attention from long-brewing
problems like protecting the “supply chain” of software. In the
private sector, too, companies that were focused on election security, like
FireEye and Microsoft, are now revealing that they were breached as part of the
larger supply chain attack.
Notice the two most important words in this quote: “supply chain”. Indeed, supply chain attacks are without much doubt the biggest threat nowadays, with ransomware number two. Ransomware still exacts a big toll worldwide every year, but what needs to be done to combat it is well understood; it’s just that a lot of organizations haven’t taken it seriously yet. And software is light years ahead of hardware as a vehicle for supply chain attacks.
So the Russians executed their attacks on SolarWinds
customers from inside the US. But they first had to penetrate the SolarWinds software
development network (which should have been protected like Fort Knox, just like
electric utilities have to provide stringent protections to the OT networks
that actually run the power grid). How did they do that? As far as I’m
concerned, that’s one of the most important lessons that can be obtained from
the SolarWinds attacks.
Before I read this article, I hadn’t heard any suggested answer to this question. However, the article points to a powerful
possible explanation: “SolarWinds moved much of its engineering to satellite
offices in the Czech Republic, Poland and Belarus, where engineers had broad
access to the Orion network management software that Russia’s agents
compromised.” Earlier, the article says that “Russian intelligence operatives are deeply rooted” in those countries, which of course isn’t too surprising given their histories and, in the case of at least two of those three governments, their Russian sympathies.
So here’s an idea: Perhaps a company that has outsourced some
of its software development to Belarus shouldn’t be selling that software to
the NSA, DHS and DoD? Or for that matter to electric utilities? The article mentions
that under Kevin B. Thompson, the SolarWinds CEO for 11 years (and about to retire),
“every part of the business was examined for cost savings and common security
practices were eschewed because of their expense. His approach helped almost triple
SolarWinds’ annual profit margins to more than $453 million in 2019 from $152
million in 2010.” The article adds:
Even with its software installed throughout federal
networks, employees said SolarWinds tacked on security only in 2017, under
threat of penalty from a new European privacy law. Only then, employees say,
did SolarWinds hire its first chief information officer and install a vice
president of “security architecture.”
Yup, that sure worked out well. So how do we address the risk
that software companies will outsource their development to questionable
countries? As you probably know, the May 1 Executive Order instructs the Secretary of Energy to develop a list of hardware products used on the power grid that may be subject to “foreign ownership, control or influence” (FOCI) from six countries. The list includes the usual suspects, but I was surprised to see Venezuela and Cuba on it, since that implies they supply US utilities with grid control systems that could be subject to a cyberattack. I don’t recall ever hearing of a Cuban EMS vendor, for example.
But the EO only applies to hardware. It obviously wouldn’t
have prevented the SolarWinds attacks. Should incoming President Biden, as one of his first acts, expand the previous EO to cover software, while at the same time adding the Czech Republic, Poland and Belarus to the list of FOCI countries?
That might sound like a great idea, until you start thinking
about how software is actually developed nowadays. It’s certainly not the picture
that people have (if they have any picture of software developers at all) of a
lonely genius sitting in his or her basement studio and producing a masterpiece
of code. Software nowadays is almost always a collaboration among many groups
sitting in many countries. Are we going to restrict it to companies, all of
whose developers live and work in the good ol’ USA? And how would we ever
police that? Plus this would eliminate companies like Siemens, which (as Matt Wyckhouse of Finite State pointed out a couple of months ago) has “21 R&D hubs in China and over 5,000 R&D and engineering staff there.”
Even more importantly, just about any software product you
buy includes lots of components that aren’t written by the company you buy the software
from, but are acquired from third parties; one estimate is that the average product
has 135 components. 90% of these components are open source, meaning they’re developed by unpaid teams from all over the world that collaborate to build them and make them available for free. Good luck trying to police all of those people, or even to find out what country they live in.[ii]
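To make the component problem concrete, here’s a minimal sketch of how a buyer could at least enumerate the third-party components in a product from its software bill of materials. This is not any official tool; the SBOM fragment, component names and field layout below are a simplified, hypothetical CycloneDX-style example.

```python
import json

# A minimal, hypothetical CycloneDX-style SBOM fragment. The real format
# defines many more fields; this sketch only uses "components".
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "log4j-core", "version": "2.13.0", "supplier": {"name": "Apache"}},
    {"name": "openssl",    "version": "1.1.1g", "supplier": {"name": "OpenSSL Project"}},
    {"name": "acme-auth",  "version": "3.2.0",  "supplier": {"name": "Acme Corp"}}
  ]
}
"""

def list_components(sbom_text):
    """Return (name, version, supplier) tuples for every component in the SBOM."""
    sbom = json.loads(sbom_text)
    rows = []
    for comp in sbom.get("components", []):
        supplier = comp.get("supplier", {}).get("name", "unknown")
        rows.append((comp["name"], comp["version"], supplier))
    return rows

for name, version, supplier in list_components(sbom_json):
    print(f"{name} {version} (supplier: {supplier})")
```

Even a listing this simple is a starting point for checking components against vulnerability databases, which is exactly what you can’t do without an SBOM in the first place.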
The best way to address risks like these is for
organizations that buy software (i.e. just about every organization on the
planet) to assess their suppliers for supply chain cybersecurity risks that
might apply to them, and then determine for themselves how these should be
mitigated.[iii] For example, the
outcome might have been different if SolarWinds had received a bunch of questionnaires
from their customers that asked a question like “Have you outsourced any of your software development to countries other than the US or Canada? If so, please name the countries and the reason for this outsourcing.”
A question like this wouldn’t in itself have forced any software company to stop outsourcing any of its development to other countries. But it would have at
least given their customers (including federal agencies like DoD and DHS) the
opportunity to decide, if they didn’t like the answers they received, whether or
not to continue buying this software - or perhaps whether to remove it from their
networks (as federal agencies may still have to do for SolarWinds). And just
one or two big customers bringing this issue up to SolarWinds might easily have
made their management decide that the likely financial losses due to losing
large customers would far outweigh the pocket change they were saving by
outsourcing some of their development to Belarus.
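The assess-then-decide process I’m describing can be sketched in a few lines of code. Everything here is hypothetical — the questions, weights, supplier names and threshold are mine, not drawn from any standard questionnaire; the point is simply that scoring answers and flagging suppliers for follow-up is cheap to do.

```python
# A toy sketch of the assess-and-decide step: score each supplier's
# questionnaire answers and flag those needing follow-up. All questions,
# weights, supplier names and the threshold are hypothetical.

QUESTIONS = {
    "dev_outsourced_outside_us_canada": 3,  # weighted highest in this sketch
    "no_dedicated_security_officer": 2,
    "no_sbom_available": 1,
}

def risk_score(answers):
    """Sum the weights of every question answered 'yes' (True)."""
    return sum(weight for q, weight in QUESTIONS.items() if answers.get(q))

def triage(suppliers, threshold=3):
    """Return the suppliers whose score meets or exceeds the threshold."""
    return [name for name, answers in suppliers.items()
            if risk_score(answers) >= threshold]

suppliers = {
    "VendorA": {"dev_outsourced_outside_us_canada": True,
                "no_dedicated_security_officer": True},
    "VendorB": {"no_sbom_available": True},
}

print(triage(suppliers))  # → ['VendorA'] (scores 5; VendorB scores only 1)
```

The flagged suppliers aren’t automatically dropped; they’re the ones whose answers warrant a conversation — which is exactly the leverage a couple of big customers could have applied to SolarWinds.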
So if the feds are serious about addressing supply chain cybersecurity
risks, this is the sort of work they need to be doing – assessing risks and mitigating
them, on a supplier-by-supplier basis. Of course, these projects aren’t
particularly thrilling, and they aren’t hugely expensive. But remember, Mr. Fed (or whoever makes these decisions): something doesn’t have to be hugely expensive to be worth doing.
Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.
[i] And don’t even tell me that we ought to allow NSA to have that authority. If we do that, why don’t we just declare Putin our next president and skip all this nonsense about self-government and federalism? Lord knows these ideas are under enough attack from domestic sources nowadays.
[ii] On the other hand, there’s lots of information you can learn about vulnerabilities that affect components, both proprietary and open source. But there’s no way to get that information without having a software bill of materials. And I want to confirm that the introductory webinar on the SBoM proof of concept for the power industry, under the auspices of the National Telecommunications and Information Administration of the Dept. of Commerce, will actually happen in January, after it had to be postponed for technical reasons the week before Christmas. Stay tuned to this blog for the announcement when it’s rescheduled.
[iii] Of course, this is exactly what CIP-013-1 R1.1 requires.