Sunday, January 31, 2021

We need to regulate these guys – SolarWinds edition


There’s lots of good information coming out about the SolarWinds attacks, as well as others that were very likely carried out by the Unnamed-threat-actor-that-sure-sounds-a-lot-like-it’s-Russia, whoever that may be. All of this, along with information about other attacks (not all by Russia) that was made public long ago, points to the need to start regulating a bunch of tech companies that I hadn’t thought of as being part of the country’s critical infrastructure – until SolarWinds caused the scales to fall from my eyes. I’ll deal with SolarWinds in this post, and the other companies in a post in the near future (although tomorrow's topic is going to be different).

There’s been a lot of discussion of the SUNBURST malware, which was spread by SolarWinds updates and provided the backdoor that the Russians exploited to penetrate a lot of important networks, especially US government networks. Like most malware, SUNBURST used the “spray and pray” method of cyberattacks: a) attempt to get the malware into as many organizations as possible (which ended up being “only” 18,000, as SolarWinds said in their memorable SEC filing the day after the attack was announced); b) wait until the phone-home beacons start appearing; c) choose the juiciest targets (hint: the National Nuclear Security Administration is juicier than Joe’s Heating and Cooling Supply); and d) get to work reaping the fruits of their labors – which turned out to be abundant in this case, of course.

In the SolarWinds attacks, the benefits the Russians were looking for seem to have been limited to national security espionage. The Russians could easily have followed their normal pattern of creating as much chaos and destruction as possible – as in the 2016 US elections and especially NotPetya, which was a supply chain attack aimed at all businesses in Ukraine. NotPetya had no goal other than bricking as many machines as possible as quickly as possible. It succeeded wildly not only in Ukraine but even more so in other countries like the US and Denmark, where it almost sank (pun intended) Maersk, the largest ocean shipping company in the world.[i]

Fortunately, the Russians – perhaps out of some sense of pity for the American people, or more likely because they suspected a new sheriff might be coming to town, one who would be much less forgiving than the previous sheriff of little foibles like SolarWinds or paying a bounty to kill American soldiers in Afghanistan – decided not to take the destruction route. Thank God for small favors, as my former boss used to say.

But there has been very little discussion of the SUNSPOT malware, which the Russians wrote to attack only one company: SolarWinds. And this malware wasn’t designed to do anything like what SUNBURST did: provide a backdoor so the Russians could exfiltrate lots of data from SolarWinds. It was designed to penetrate SolarWinds’ software build process and plant SUNBURST in software updates, so that SolarWinds would then kindly distribute them to their customers. And that attack succeeded brilliantly.

In fact, I nominate SUNSPOT as the second-most-sophisticated piece of malware ever written – after Stuxnet, which was created by the US and Israel to attack Iran’s uranium enrichment program and also succeeded brilliantly (indeed, SUNSPOT might even be ahead of Stuxnet; I’m certainly not in a position to judge between the two).

You can read the full story of SUNSPOT in this article by CrowdStrike, which worked with SolarWinds to analyze the malware and the attack. But here are some highlights, which relate to the subject of this post:

1. The Russians first penetrated the SolarWinds software development environment in September 2019 – i.e., 15 months before SolarWinds or anyone else knew there was a problem.

2. Like any good organization about to make a risky investment, the Russians first executed a test with a “beta” version of SUNSPOT. They wanted to see whether they could, in principle, place a piece of software code in the SolarWinds Orion platform so that it would be included in subsequent updates shipped to SolarWinds customers. They did this using a completely benign piece of code, and the gambit worked perfectly. Just as later with SUNBURST, SolarWinds had no clue this was happening. In fact, as with SUNBURST, this test code was inserted in multiple releases of the Orion platform, not just one.

3. With the concept proven, the Russians started to build the final version of SUNSPOT, probably testing it in their own environment. About four months later, in February 2020, the Russians compiled and deployed the malware in the SolarWinds software build environment.

4. The most important feature of SUNSPOT was that, just like Stuxnet, it was built on the assumption that there was no possibility of real-time intervention by the Russians in the software build process. The software had to act completely autonomously, yet it had to adapt quickly to any change in the environment. This was even harder than it sounds. While SolarWinds wasn’t specifically looking for signs of an intruder inside the build process, the slightest anomaly – such as the hash value of one of the Russians’ files (which were always given names matching files used in the build process itself) not exactly matching the value for the same file the previous day – would have set off alarms in the normal build process. That would likely have led to SUNSPOT being discovered and the end of this whole affair (and we’d all have written about how astute SolarWinds was in heading off at the pass what could have been a series of devastating attacks on important government agencies).
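The kind of integrity check SUNSPOT had to evade can be sketched in a few lines: a build system records a manifest of file hashes and raises an alarm on any file whose hash changes between runs. This is purely illustrative – it is not SolarWinds’ actual tooling, and the file name and contents here are hypothetical:

```python
import hashlib

def check_manifest(files: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Compare today's file contents against yesterday's recorded SHA-256
    hashes; return the names of any files that no longer match."""
    alarms = []
    for name, contents in files.items():
        if manifest.get(name) != hashlib.sha256(contents).hexdigest():
            alarms.append(name)
    return alarms

# Yesterday's manifest, as recorded by the (hypothetical) build system
manifest = {"InventoryManager.cs": hashlib.sha256(b"original source").hexdigest()}

# Today, an intruder has swapped in a tampered copy of that file
todays_files = {"InventoryManager.cs": b"tampered source"}

print(check_manifest(todays_files, manifest))  # → ['InventoryManager.cs']
```

SUNSPOT’s trick was to make sure a check like this never fired: every substituted file had to hash and size out exactly as the build system expected by the time anyone looked.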

5. I won’t go into all of the brilliant features in SUNSPOT, but I’ll mention one: SUNSPOT was designed to surmount what may have been the biggest problem the Russians faced at SolarWinds. The software build process for the next Orion release went on for months, but neither the Russians nor SolarWinds had any idea exactly how long it would take. Moreover, the process was shut down and restarted every morning, and there was no way to know in the morning whether today might be the day that everything in the new build worked perfectly. If that happened, the SolarWinds engineers would probably decide to release the code as it stood at the end of that day. This meant that SUNSPOT needed to wipe away all traces of its activity every evening, once it became clear that the build process was about to shut down for the night without having produced the final product, since the slightest discrepancy – for example, in the size of a file – would have alerted SolarWinds that something was wrong.

6. Yet the next morning, SUNSPOT had to recreate everything that had been wiped away the previous evening, to prepare for the possibility that today would be the day the code was declared ready to ship. This whole process is described in great detail in the CrowdStrike article. The important point is that all of this had to be done by SUNSPOT alone, without any prompting or guidance from the Russians, who had no real-time visibility at all into the software build process – just as the Americans and Israelis who created Stuxnet couldn’t control it in any way once it was inside the Natanz uranium enrichment plant, which was completely air-gapped from the rest of the world (well, not quite: it turns out contractors were allowed to bring USB sticks in to aid their work, and that’s how Stuxnet got in in the first place. It was also a classic supply chain attack, although a different one from SolarWinds).

7. Despite these challenges, when SUNSPOT was deployed last February it worked perfectly. The SUNBURST malware was planted in about seven releases of the Orion platform before the Russians decided to remove it from the build environment and cover all of their traces last June. I guess they figured that, with 18,000 infected targets, they just didn’t have enough good cyber resources available to exploit all of them, let alone any new ones that might be added. An embarrassment of riches, it seems.

What’s the lesson of all this? SolarWinds really dropped the ball. Sure, this was an unprecedented attack that nobody saw coming. But it’s impossible to believe there was no way they could have detected the Russians in their network during the 15 months between when the attackers first entered and when SolarWinds learned about it. Unfortunately, SolarWinds only learned they had been penetrated when the rest of the world learned it: after FireEye discovered last month that they’d been compromised through SolarWinds.

It’s also impossible to believe there’s no way SolarWinds could have learned that their development process was compromised, when there were about 5-6 months during which one or the other of the two pieces of Russian code (the benign test code and SUNBURST) was being inserted into every Orion update that left their shop.

Do I know how SolarWinds could have detected – and presumably stopped, since that would have been very easy had their presence been known – the Russians? No I don’t. But there’s one thing that I do know (although only with hindsight, I’ll readily admit): SolarWinds, in accepting license fees from huge companies and important government agencies like DoE, DHS and NSA, should have been paying a lot more attention to cybersecurity than they did. They were clearly fat, dumb and happy and on top of the world – until they weren’t.

Companies like SolarWinds are really critical infrastructure organizations (again, I say this with hindsight. I wouldn’t have said it at all two months ago). The entire public has a stake in their safe, successful operation; that stake goes well beyond the license fees that SolarWinds earns. SolarWinds can’t hide behind the “Well, if you don’t like our product, nobody’s forcing you to buy it” rationale. Just as with public utilities, the public has a big stake in SolarWinds’ safety and stability. The Northeast Blackout of 2003 showed that protection of electric reliability is too important to leave up to the utilities on their own, and the SolarWinds attacks show that protection of network integrity in federal agencies is too important to leave entirely up to providers like SolarWinds. There needs to be regulation to make sure SolarWinds and their ilk do all they can to protect that network integrity, just like there are regulations to make sure electric utilities do all they can to protect electric reliability (including cybersecurity).

As an economics major at the University of Chicago a fairly long (I’ll admit it!) time ago, I was fortunate enough to take two courses on microeconomics[ii] with Dr. Milton Friedman. While there was never a doubt that he was in favor of capitalism and free markets, he always made it clear that in some cases purely market mechanisms can’t adequately protect consumers against common threats. The main case in point at the time was air and water pollution, since the EPA had just been created by – believe it or not – Richard Nixon. Friedman pointed out that there was no way that individual consumers, or even individual companies, could use market power to protect themselves against most pollution threats; there needed to be regulation.

The same reasoning applies today.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] BTW, NotPetya caused $10 billion in losses, but have the Russians ever been approached about paying a penny of that in damages to any of their victims? Certainly not that I know of. This might be surprising, unless you consider the case of flight MH17. Not only have the Russians not paid a penny for that either, but I know of no concerted effort by any government to force them to pay anything at all. Once it became clear the Russians were responsible for this, I would have banned Russian aircraft from all international airspace until the Russian government had paid all just claims – both from individuals and governments – from this attack, as well as billions in punitive damages to some agreed-upon international organization. Of course, that might have meant Putin couldn’t build his $1.5 billion palace with an indoor hockey rink, which Mr. Navalny has graciously revealed to the Russian people. But with an estimated net worth of at least $25 billion (and maybe much more), Putin could have paid the entire damages out of his own pocket and still had money on hand to assure a comfortable retirement – although in a just world his retirement would be spent in a government-paid cell deep in some prison somewhere.

[ii] Which at the time was called “price theory” at U of C. What other schools called macroeconomics was “monetary theory”. I don’t know what terms U of C uses now.

Thursday, January 28, 2021

SBoM and DNS

The webinar (actually two webinars with substantially the same content) this Tuesday, to introduce use of software bills of materials (SBoMs) to the energy industry, was quite successful. We had good turnout for both webinars, and none of the speakers made a fool of themselves. The speakers were:

·        Dr. Allan Friedman, the leader (fearless, to be sure) of the Software Transparency Initiative (STI) sponsored by the National Telecommunications and Information Administration (NTIA) of the US Department of Commerce, provided a good overall introduction to SBoMs (which I believe was number 7,452 for him) and to the upcoming SBoM Proof of Concept for the electric power industry.

·        Ginger Wright and Andy Bochman of Idaho National Labs discussed their strong interest in seeing SBoMs become widely produced and widely used in the industry, and especially how widely available SBoMs will greatly facilitate the work that INL is conducting on the CyTRICS project.

·        Tom Alrich, who has been participating in the STI meetings and volunteered to help Allan organize the energy PoC, discussed what he considers to be the three most important use cases for SBoMs in the power industry (the subject of a future post), as well as another topic (subject of this post).

·        Jim Jacobson of Siemens Healthineers described the proofs of concept for the healthcare industry that started in 2018 and are still ongoing (and of which he is the co-leader), in ever-more-ambitious iterations.

·        Charlie Hart of Hitachi Automotive described why that industry, in this case led by the Auto-ISAC, has developed such a keen interest in SBoMs (due, truth be told, in no small part to Charlie’s own efforts!), with their own Proof of Concept (PoC) due to start in the near future.

The NTIA will sponsor further informational webinars for the industry, including 1-2 in February. I will publish those notices in my blog, but if you’re interested in receiving all notices directly, you should email Allan at afriedman@ntia.doc.gov so he can put you on the mailing list.

My second topic on Tuesday had to do with NTIA, what it does and most importantly what it doesn’t do. It doesn’t write or enforce regulations, standards, guidelines, Papal encyclicals, fatwas or anything like that. What it does do is find ways to help private industry overcome barriers that are impeding adoption of important new technologies.

Allan described the problem with SBoMs – which led NTIA to start the STI in 2018 – as a chicken-and-egg problem: Software suppliers aren’t producing SBoMs because their customers aren’t asking for them, but their customers aren’t asking for them because they know they’re not currently available. The solution to this problem is to get a small number of software suppliers and software users in a particular industry together to work out – in an antitrust-friendly, NDA-protected manner – processes for both producing and “consuming” SBoMs. Hence the PoCs.
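To make “producing and consuming SBoMs” a little more concrete, here is a minimal, hypothetical SBoM fragment in the CycloneDX JSON style, along with a few lines that “consume” it by listing the components a product contains. The component names and versions are made up, and real SBoMs carry much more detail (suppliers, hashes, dependency relationships):

```python
import json

# A minimal SBoM fragment in the CycloneDX JSON style (contents are hypothetical)
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "openssl", "version": "1.1.1i"},
    {"type": "library", "name": "log4j-core", "version": "2.14.0"}
  ]
}
"""

def list_components(raw: str) -> list[str]:
    """'Consume' an SBoM: return name@version for each listed component."""
    bom = json.loads(raw)
    return [f"{c['name']}@{c['version']}" for c in bom.get("components", [])]

print(list_components(sbom_json))  # → ['openssl@1.1.1i', 'log4j-core@2.14.0']
```

The point of the PoCs is to work out exactly this kind of plumbing – agreeing on formats, and on what a user does with the component list once they have it (for example, matching it against vulnerability databases).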

But this isn’t NTIA’s first rodeo – in fact, they have been doing this sort of work for decades. They have had a number of big successes (although they’re very modest – no word of them on their web site!), but the one both you and I can appreciate most is DNS. NTIA didn’t invent the idea of DNS, but they did put in place the processes needed to administer it.

Although NTIA itself oversaw DNS in the early days, the goal was always to turn its administration over to the private sector. NTIA did this in the late 1990s, when it handed the job – the set of functions known as the Internet Assigned Numbers Authority (IANA) – over to ICANN, which performs them to this day (and currently has a budget of around $100 million).

Did IANA have to compel internet content providers and users to use DNS? No. To this day there are no regulations (that I know of, anyway) that compel anyone to use DNS. And if you really don’t want to use DNS – say, because your religion forbids it – you don’t have to. After all, every website has an IP address, such as the IPv6 address 2001:0db8:85a3:0000:0000:8a2e:0370:7334. As long as you know the address of the site you want to go to – and you can enter it without making a mistake – you will still be able to visit all of your favorite sites (at least the ones that don’t share an address with other sites).
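As an aside, even that long IPv6 address has a shorter canonical form: leading zeros drop, and the longest run of zero groups collapses to “::”. Python’s standard ipaddress module shows the mapping:

```python
import ipaddress

# The full form of the address from the text above...
addr = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

# ...compresses to the canonical short form
print(addr.compressed)  # → 2001:db8:85a3::8a2e:370:7334

# Both strings name the same address
assert ipaddress.ip_address("2001:db8:85a3::8a2e:370:7334") == addr
```

Either form works for the DNS-free lifestyle described above – you just have to type it correctly.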

Similarly, if you want to get people to use your site without using DNS, you just have to give every user – and every potential new user, since the main purpose of most websites is to interest a larger audience in whatever the site provides – your address. And they can live a DNS-free life as well!

Of course, if you want to maintain the million-or-so daily users of your site (which happens to be the approximate number of daily readers of this blog, give or take a million or so), you’ll need to provide this address to each of those people. But there’s one catch: You can’t email it to them, since email uses DNS! You’ll have to call every one of them. A small price to pay to avoid using DNS, to be sure.

I guess you get the idea: anyone who uses the internet uses DNS constantly. Yet nobody is compelled to use it. What it took was someone to help the machine overcome the initial inertia and start moving on its own. That someone was NTIA. Of course, SBoMs will never be a concern of the average person; SBoMs won’t end hunger, stop global warming, or solve the fusion problem.

But if NTIA (and all the others working toward this end, since many other groups worldwide are promoting the concept) has its way, SBoMs will become part of the security landscape in, say, 5-10 years. And security professionals will wonder how they could ever have lived without SBoMs, just as most of us can’t remember[i] a world without the internet and DNS.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I remember the world without the internet and DNS quite well. What I can’t remember is how I could ever have been happy, knowing what I had to go through then to get information or to communicate with people far away in anything close to real time, for anything less than a king’s ransom.

Monday, January 25, 2021

Russia on my mind


Last Thursday, I had to issue one of the first apologies I’ve made in the eight years I’ve been writing this blog. I did that because…well, I screwed up. You can read the post, but the upshot is that I mistakenly assumed that JetBrains – a very successful software company with a big following and a great reputation – had been used to breach some of their customers, including SolarWinds.

JetBrains was founded by three Russians and still has a large presence in Russia. Without going back to read the New York Times story about them – or the post I wrote based on that story – I wrote that their software had been used to attack their customers, when in fact the story had just wondered whether that might have happened; it didn’t say it had happened[i].

Here’s a thought exercise: If JetBrains had been founded by, say, three Laotians, would I have been so quick to make that statement? I’ll admit it – I wouldn’t have. I’ve never thought of myself as being prejudiced against Russian companies, but clearly I must be. Of course, the Russian government – and various criminal groups they’re allied with – has done some pretty bad things to us and to other countries around the world, including in the cyber realm. But it’s a big leap from acknowledging that fact to saying that Russian tech companies shouldn’t be trusted before some evidence appears that a particular Russian company is in fact not trustworthy.

I’ll be much more careful to avoid this mistake in the future. But I’m not actually writing this post to preach an uplifting moral message. Something else important happened last week that relates to this subject of unfairly targeting companies with ties to countries whose governments are untrustworthy: President Biden put the May 1 Executive Order on hold for 90 days. I sincerely hope he will send it to a well-deserved grave after that. And I also hope that DoE’s follow-on Order from December will also be sent to sleep with the fishes.

I don’t usually brag that I was right all along, but I’ll say it now: I was right all along. In a post the day after the EO was published, Kevin Perry and I made it quite clear: “…the order is a huge mistake. It will end up making the BPS much less secure, rather than the other way around.”

Four days later, when Kevin and I had time to think through the EO and what it meant, we put out another post entitled “What exactly is the goal of this Executive Order, anyway?” In that post, we pointed out that, of the 25-odd devices targeted by the EO, only three of them are operated by a microprocessor. Yet according to the EO, all of them are subject to a cyberattack. I pointed out that my $10 steam iron is just as subject to a cyberattack as those devices are – in other words, not at all (and this includes transformers, which were by far the biggest focus in press discussions of the EO, yet which are operated solely by the laws of physics – not by any microprocessor. The last time I checked, the Chinese haven’t yet figured out a way to bypass the laws of physics).

I wrote at least 8-10 other posts pointing out this and other problems with the EO. I once characterized it as “a non-solution to a non-problem”. I stand by that characterization.

However, last week I made the same mistake as the EO, although fortunately on a much smaller scale. I unconsciously assumed that any software that came from Russia is likely to carry malware or a backdoor meant to undermine American industry. The EO assumed that anything that came from, or was associated with, certain countries was by that fact alone likely to be dangerous.

Of course, the one country the EO was aimed at was China. Even though DoE later produced a list of five other countries that the EO would apply to – Iran, North Korea, Russia, Venezuela (!), and Cuba – none of those countries sells grid control systems to the US or is at all likely to do so in the foreseeable future. And, as Kevin and I pointed out in our second post linked above, the only system components that China even assembles (let alone sells) are motherboards for servers and workstations sold by Dell and HP. Since there’s no way the Chinese factories assembling those servers know whether they’re destined for an electric utility in California or a dry cleaner’s in Kansas, they simply can’t be the vector for a supply chain cyberattack on the US grid.

There’s an even bigger reason why the EO made no sense: It assumed that the Chinese government has every incentive in the world to want to launch a supply chain cyberattack that takes out a large portion of the US power grid, and that any patriotic Chinese company would be more than willing to help them accomplish this goal (or would at least not be able to resist the considerable pressure the government could bring to bear on them to cooperate).

But why would a Chinese company that has made great efforts to build up market share in the US throw all of that away by participating in a supply chain attack on the US grid? After such an attack – especially one that involves hardware (the EO mentioned nothing about software) – it would be ridiculously easy to find out what device led to the attack. It’s just about certain that any company found to have been a vector for the attack would be banned from selling anything in the US ever again, and probably in any Western country as well. In other words, it would probably be a death sentence for the company. Yet the EO assumes that any Chinese company would be easily persuaded to participate in a massive supply chain attack on the US grid.

And why would the Chinese government itself be dead set on launching a devastating supply chain attack on the US power grid? Again, unlike non-supply chain cyberattacks, where nowadays the entry point is usually a phishing email that could have come from anywhere, in a supply chain attack (again, especially a hardware one), there’s no question what government might be behind it: the government of the country of origin of the hardware, or perhaps the government of the country where the vendor is located.

Moreover, there’s almost no question that a big grid outage caused by a supply chain attack would be considered an act of war, leading to a military response. And once two nuclear powers start down the military path, even if it’s non-nuclear at first, it’s very possible that someone will make a mistake or get carried away, so that within an hour or two you’ll have lots of dead people in both countries, no matter which side ultimately declares “victory”. If you don’t believe this could happen, just ask the late Vasily Arkhipov, the Soviet naval officer whose single decision during the Cuban missile crisis in 1962 probably saved the world (or at least the USSR and the US) from total destruction.

So is it really likely the Chinese government would even entertain the thought of a massive supply chain attack on the US grid? Of course not.

And suppose we were to take action like what was contemplated in the EO (and will be realized to a more limited extent if DoE’s December supply chain order doesn’t get rescinded), and ban all grid hardware and/or software that “originates” in Russia or China (or both) – or is sold by a Russian or Chinese-owned or influenced company? Who would be the real losers in that event?

Of course, one class of losers would be the Russian or Chinese companies that would lose a lot of business due to this decision, in spite of their actual innocence. But the bigger class of losers would be US-based organizations that would normally use the software or hardware that was banned. After all, JetBrains didn’t get a huge market share among software developers by being just one among many options – they did it by offering an excellent product. Preventing US software developers from buying JetBrains because the company was “Russian influenced” would be just about the same as imposing a tax on those developers, equal to the presumably substantial difference in productivity between using JetBrains and using the next best competitor (as well as the transition costs, of course, which would also be considerable).

At this point, I would normally make fun of the people who wrote the EO and even entertained the idea that it was likely that China would be so foolish as to try to launch a supply chain attack on US infrastructure – if I hadn’t done the same thing myself about a week ago, when I wrote the post that included a statement that assumed Russia would be happy to do the same thing.

So I can’t say I’m more virtuous in this regard, but I will say I hope the idea that nation states are in themselves a serious threat for a supply chain attack caused by planting malware or a backdoor in a software or hardware product destined for the US gets buried along with the EO itself. Sure, it’s good to know where hardware products are made and who sells them (although the idea that software is “developed” anywhere in particular is just about meaningless nowadays) – since there are lots of considerations besides cybersecurity for which such information is important (e.g. you may be concerned about the legal environment of the country where the supplier is located, in case an issue arises that would require legal action).

But only a fool would base their judgment of a software or hardware product’s level of cybersecurity on where it came from, or on the nationality of the company that made it. Just ask the fools that wrote the EO. Or the fool that had to apologize to JetBrains.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] It doesn’t justify my mistake, but I think even writing the Times article was a mistake. After all, why even speculate about whether software from a Russian company is backdoored? Why not also speculate about whether all vodka coming from Russia is poisoned?

Sunday, January 24, 2021

A reminder: SBoM webinar for Energy on Tuesday!

This is a reminder that the National Telecommunications and Information Administration of the US Department of Commerce will present a webinar on software bills of materials on Tuesday (at 9AM and 4PM ET) for the electric power industry. You can learn more about the webinar, including the connection information, here. Note that registration is not required.

The only information I have to add to the previous post is the organizations that will be speaking: NTIA (Dr. Allan Friedman, the leader of the Software Transparency Initiative), Idaho National Labs, Siemens, Hitachi and Tom Alrich LLC. I hope you can attend!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, January 21, 2021

An apology to JetBrains

On Monday, I put up a post entitled “What could have prevented the SolarWinds attacks?” It contained the following paragraph:

…the Russians could have penetrated a software development tool (presumably by planting malware in the tool developer’s network, which would have played the same role that SUNSPOT did with SolarWinds). Then, if SolarWinds used that tool, the Russians wouldn’t have to penetrate SolarWinds’ development network - they would have already been there! This might be the ultimate supply chain attack, for reasons described in this post. Of course, it was recently learned that the Russians did penetrate a very widely-used development tool called JetBrains. And one of JetBrains’ customers was in fact SolarWinds.

The post I linked described a New York Times article on JetBrains entitled “Widely Used Software Company May Be Entry Point for Huge U.S. Hacking”. The second paragraph of the article read:

Officials are investigating whether the company, founded by three Russian engineers in the Czech Republic with research labs in Russia, was breached and used as a pathway for hackers to insert back doors into the software of an untold number of technology companies. Security experts warn that the monthslong intrusion could be the biggest breach of United States networks in history.

The article also stated:

By compromising TeamCity, or exploiting gaps in how customers use the tool, cybersecurity experts say the Russian hackers could have inconspicuously planted back doors in an untold number of JetBrains’ clients. 

and ended with this statement:

“It can allow an adversary to have thousands of SolarWinds-style back doors in all sorts of products in use by victims all over the world,” Mr. Alperovitch added. “This is a very big deal.”

You will notice that the Times article, while clearly expressing a lot of alarm about the possibility that JetBrains might have been compromised (because of its widespread use in software development, including by SolarWinds, and – frankly – because of its ties to Russia), avoided saying that this had actually happened. And the post I wrote about the article on January 6 walked that same fine line.

Unfortunately, the statement in my post on Monday went beyond what both the article and my previous post had said: it stated affirmatively that JetBrains had been compromised. Did I say this because I’d learned some important new information since the previous post? No. I said it because – truth be told – I sometimes assume that, just because I wrote something, my memory of what I wrote must be perfect. In other words, I linked to my previous post without bothering to reread it to make sure I knew what it said.

This morning, I received an email from Yury Molodtsov, a representative of JetBrains, pointing out – quite nicely, I will say – my error. He also provided a link to this statement from JetBrains, in response to the Times article. It stated “First and foremost, JetBrains has not taken part or been involved in this attack in any way”, and also “SolarWinds has not contacted us with any details regarding the breach and the only information we have is what has been made publicly available.” The following day, JetBrains posted another statement pointing out that SolarWinds had said “The Company hasn’t seen any evidence linking the security incident to a compromise of the TeamCity product”. (TeamCity is the JetBrains product used by SolarWinds, as well as by a huge number of other software developers.)

So I owe a big apology to JetBrains. I just hope they’ll continue to produce such a great product, and they’ll continue to keep it as secure as they can.

As for myself, I’m going to be a lot more circumspect about quoting news articles, and I’ll make sure I’m not saying anything more than the article I’m quoting does.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Tuesday, January 19, 2021

What could have prevented the SolarWinds attacks?...the sequel


Yesterday’s post seems to have stirred a lot of interest. Two people I’ve known for a long time, who spent many years on the front lines of the fight to secure the power grid from cyberattacks but who are both now “retired” (a word that has more or less lost its meaning, for a lot of us who might otherwise be considered to be of retirement age), wrote in to comment on the post. I’ll discuss the comments of one of them in this post, and of the other in (hopefully) tomorrow’s post.

The first of those people is newly retired. He’s Jim Ball, former CISO of the Western Area Power Administration (WAPA). He makes some excellent points:

1.      He noted that, while earlier versions of the SNMP protocol had security issues, the current SNMP v3 is considered fairly secure. Of course, devices that are still running an earlier version (and how many organizations update all their UPS’s, building control systems and power distribution units whenever there’s a new version?) will still be vulnerable.

2.      He points out that while there is speculation that SNMP played a part in the disconnection of the Ukrainian UPS that was manipulated during the 2015 Russian attacks on the Ukrainian grid, the main point is that the Russians were able to compromise it because they “completely owned the utility's networks and were operating as trusted insiders – no need for fancy tricks when you have that level of access…The disconnection of the UPS was a supporting event to the main attack, meant to confuse and impede response efforts.”

3.      In discussing the fact that the Sunburst malware (which the Russians pushed out inside SolarWinds Orion updates that they had poisoned) was likely present in most compromised networks for many months, I opined in the post “I would just say that in general, network and server monitoring must have really fallen down – or just not been in place in the first place – in many of the SolarWinds customers who were…attacked…”

4.      Jim pointed out that it would have been extremely difficult for any organization to discover that they were under attack by the Russians because these were insider attacks. The Sunburst malware (that they had planted during development of about six or seven Orion updates) let the Russians operate from “inside” SolarWinds, so any Russian activity would almost inevitably have been dismissed as SolarWinds doing its normal job. After all, network management software is constantly reaching out to network control devices like firewalls and switches – it does no good if it doesn’t do that.

5.      “The hijacking of the SolarWinds accounts was probably just the first stage to lateral movement. A skilled attacker would then harvest credentials from a more plausible identity such as a mail system service account, an endpoint detection and response account, or even an actual admin user.”

6.      Jim says that, in order to detect the Russians after they penetrated your organization through SolarWinds, “You'd have to implement an extraordinary level of scrutiny on account activity.  It’s possible, but it calls for a lot of skill and experience in your cyber engineers and SOC analysts.  That creates its own challenges as it drives costs way up. Those people are hard to find and expensive to acquire and keep.”

7.      I pointed out to Jim that, if organizations are going to protect themselves from compromise due to the SolarWinds attacks and similar attacks that may occur in the future, they need to be able to identify traffic from those attacks. (Of course, a lot of SolarWinds attacks are probably still in the future: since 18,000 customers downloaded, and presumably installed, one of the compromised SolarWinds updates, the Russians might simply have planted malware in the great majority of them, to come back to when they have the time. The latest guess I’ve seen for the number of organizations actually attacked by the Russians was only around 250.) How could they identify that traffic? Jim stated that “user behavior analytics” tools exist, “but they require a lot of tuning to work correctly.  You’re looking for unusual activity on the part of existing authorized accounts.  (They require) a lot of smarts just to define that properly.”

8.      I noted in yesterday’s post that the first organization to discover the SolarWinds attacks was FireEye. Did they run some sophisticated behavioral analytics software that immediately recognized the Russian activity when it happened? They may have run such software, but if so it didn’t discover the breach. For one thing, the Russians were probably in their network since at least June, when the last tainted update went out; yet FireEye didn’t discover them until December. More importantly, FireEye’s big break (in fact, the big break for SolarWinds and all of their customers) in December came through dumb luck: an employee noticed that an unknown device had logged into their account. Had that not happened, we might still not know about the attacks, and the Russians would still be busily packing up files they found on the DoE, DHS, FERC, etc. networks and shipping them off to Moscow or St. Petersburg (of course, they may still be doing this in organizations that don’t yet know they were compromised and haven’t disconnected SolarWinds).

9.      Finally, Jim said “I think the answer (to the problem of attacks like SolarWinds) will be an extension of the DOD CMMC (the new DoD cybersecurity certification for vendors) process to anybody who wants to sell to the government.  It may also result in FERC/NERC getting pushed reluctantly into the same sort of mechanism for the energy sector.” While I’d heard about CMMC, I hadn’t actually looked at what it requires. It’s a maturity model (which is a good thing), not a set of primarily prescriptive requirements (like a certain unnamed set of cybersecurity standards for the electric power industry that some of you may be familiar with).

10.   However, when I looked through the set of “domains” covered by the CMMC model (Access Control, Asset Management, etc.), I noticed that it includes nothing related to software development. If SolarWinds has taught us anything, it’s that the biggest supply chain cybersecurity risks have to do with penetration of a software developer’s own development environment, since that’s by far the most efficient way to penetrate large numbers of organizations with, in effect, a single keystroke. After all, the one attack on SolarWinds led to “only” (as SolarWinds notably characterized the number in their SEC filing the day after they announced the attacks) 18,000 of their customers being compromised – a great ROI. Name me one other way an attacker could compromise 18,000 organizations in a single attack. Whoever suggested this attack should be given a medal for Great Hero of the Russian Kleptocracy.

11.   When I pointed this out to Jim, he replied “My point on CMMC is that it’s the beginning of a clearly spelled out set of hygiene requirements for those desiring to do business.  I suspect that it will indirectly offer some benefit, as DOD is a huge consumer of commodity IT/OT products.”

I’m sure there will be lots of benefits – for DoD as well as all US purchasers of IT and OT products – flowing from implementation of the CMMC; it’s certainly better than what’s in place now, which is mostly voluntary. However, by itself it doesn’t address the most significant areas of supply chain risk: insecure software development practices and inadequate security controls on software development environments. If you added these to CMMC, it would be a good model, IMHO. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, January 18, 2021

What could have prevented the SolarWinds attacks?

In one of Energy Central’s emails today, I saw a post by Joe Weiss that looked interesting; it was entitled “SolarWinds Orion: The Weaponization of a Network Management System”. For those who are not EC members, here’s the link to the same post on Joe’s blog (BTW, for about 4 or 5 months I’ve been putting almost all of my posts on EC, as well as in this blog. They’re the same in both venues. I’m quite happy with the level of attention my posts have received on EC).

I confess that I’ve only written a few posts about something Joe wrote, and none of them have been positive. So I’m happy to say now that I completely agree with everything Joe says in this post, which points to a mistake sometimes made with network management systems (NMS), and more often with the devices an NMS controls (including UPSs, battery management systems, building control systems and power distribution units): they are placed directly on the internet, not even behind a firewall. Of course, this makes them ripe for attack and compromise (especially given the weaknesses of the SNMP protocol used for network monitoring).

However, Joe admits that the SolarWinds NMS that were compromised by the attacks announced in December were almost all (or probably all) behind a firewall. So how could these attacks have been prevented?

If you read any of the 15 or so posts I’ve written about the SolarWinds’ attacks since they were announced in mid-December – or you read some of the huge number of articles and posts that have been written about this subject by others – you’ll probably know the answer to this question: There’s literally nothing a SolarWinds customer could have done to prevent the attack from happening to them in the first place, although they could have lessened the degree of compromise through various measures.

This is because these were pure supply chain attacks. SolarWinds’ development environment(s) was compromised by Russian attackers, who placed an exquisitely designed piece of malware[i] into their software build process. That malware in turn placed the Sunburst malware into the code of the updates themselves. Sunburst contained a backdoor: a deliberately planted vulnerability, as opposed to one that finds its way into a software product because of poor security practices – or just plain bad luck – on the part of the developer. Note that a backdoor can also be planted by the developer itself, to facilitate easy access during development or, even worse, once the product is deployed. The Mirai botnet attack exploited backdoors planted in hundreds of thousands of IoT devices, such as security cameras.

This means that, when customers loaded one of the tainted updates (it appears there were about seven of them), they loaded Sunburst at the same time. The Russians then took advantage of the backdoor to penetrate the customer’s network and do nasty deeds. But don’t worry: Those customers were mostly unimportant ones – the NSA, DHS, DoE, the National Nuclear Security Administration, FERC, etc. Fortunately, the Russians didn’t get into the White House football pool server.

Because the Russians had placed the Sunburst malware into SolarWinds updates while they were being built, the updates were signed by SolarWinds. This means that signature verification – or comparison of hash values – didn’t raise any red flags about the updates. And since Sunburst was a zero-day, it wasn’t picked up by the malware scanners in antivirus software. There was literally nothing an organization could have done to detect these tainted updates, and thus prevent them from being installed.
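To see why integrity checks were powerless here, consider a minimal sketch (in Python, with made-up data) of how hash verification of an update works. The customer compares the downloaded file against the hash the vendor published; but if malware is inserted during the vendor’s own build, the published hash is computed over the already-tainted code, so the check passes:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Simulated "update" as built by the vendor -- the malware was inserted
# *before* this point, so the published hash covers the tainted code.
tainted_update = b"legitimate update code... plus SUNBURST backdoor"
published_hash = sha256_of(tainted_update)  # what the vendor publishes

def verify(update: bytes, expected: str) -> bool:
    """The customer's integrity check: recompute and compare."""
    return sha256_of(update) == expected

print(verify(tainted_update, published_hash))  # True: check passes, backdoor and all

# The check only catches tampering *after* the build, e.g. in transit:
corrupted_in_transit = tainted_update + b"\x00"
print(verify(corrupted_in_transit, published_hash))  # False
```

The same logic applies to code-signing: the signature attests that the file is what the vendor built, not that what the vendor built is clean.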

So what could have at least mitigated the SolarWinds attacks? Joe mentions one measure – not placing the NMS directly on the internet – that I suspect just about every SolarWinds customer already practices. What else could have been done?

Of course, there’s a lot written about that issue (and Fortress Information Security is conducting a webinar on the topic on Thursday, which will most likely be quite interesting). I would just say that in general, network and server monitoring must have really fallen down – or just not been in place in the first place – in many of the SolarWinds customers who were actually attacked (i.e. they not only applied the tainted update, but the Russians exploited the malware to exfiltrate files or in general do bad things on the network. It seems there were “only” 200-300 of those, and perhaps fewer – vs. the 18,000 who downloaded one of the tainted updates).

I say this because the Russians stopped planting Sunburst in Orion updates in June, meaning it’s likely they were inside every compromised network for a number of months. Of course, they were certainly very careful, but they finally slipped up and were detected because someone who worked for FireEye noticed an unknown login to their account. Did they make zero mistakes between (at least) June and December with every other customer besides FireEye? That’s not very likely. It seems all of those other customers weren’t looking very hard for evidence of attacks or compromise.
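As a toy illustration of the kind of check that finally worked at FireEye – flagging a login from a device never before associated with an account – here’s a minimal sketch (hypothetical account and device names; real user behavior analytics tools weigh far more signals and require extensive tuning):

```python
# Toy "new device" detector: flag any login from a device not previously
# associated with the account. Names are invented for illustration.
from collections import defaultdict

known_devices = defaultdict(set)  # account -> set of device IDs seen before

def check_login(user: str, device_id: str) -> str:
    if device_id in known_devices[user]:
        return "ok"
    first_time = not known_devices[user]
    known_devices[user].add(device_id)
    # A brand-new account has no baseline yet; after that, an unseen
    # device is exactly the anomaly the FireEye employee noticed.
    return "baseline" if first_time else "ALERT: unknown device"

print(check_login("analyst1", "laptop-A"))  # baseline
print(check_login("analyst1", "laptop-A"))  # ok
print(check_login("analyst1", "phone-X"))   # ALERT: unknown device
```

The hard part in practice isn’t the check itself but deciding which of thousands of such alerts per day deserve a human’s attention.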

What could have actually prevented the SolarWinds attacks in the first place? Clearly, it has to do with SolarWinds’ controls (or more likely, the lack thereof) over their development network(s). There are two components to this. The first is the technical controls that should have been applied to the development network(s) themselves. Those controls are familiar to most power industry networking people, since they’re very similar to the ones required by the NERC CIP standards to protect the electronic security perimeter and the devices within it (including BES Cyber Systems, of course).

Perhaps the most important of those controls are found in CIP-005-6. These include a) complete separation between the IT network and the ESP/development network, including separate authentication; b) tight control over open ports and services on the ESP firewall, as well as on the network devices themselves; and c) requiring all outside access to devices inside the ESP to be via encrypted VPN, terminated at an Intermediate System located in the DMZ between the IT and OT (ESP) networks. Maybe not every software developer needs all of these controls (e.g. a developer that produces games). However, in hindsight it’s clear that SolarWinds should have done much more to protect its development networks than it did.

There should also have been controls like those in CIP-004-6, CIP-007-6, and CIP-010-3, including background checks on employees with access to the development network, training for them on appropriate security procedures, strict configuration management, logging on all devices on the network, and perhaps multifactor authentication to important devices.

The second component of controls is SolarWinds’ controls on access to their network in general. Even assuming the Russians penetrated the SolarWinds IT network first, how did they do that? We can’t say today what would have prevented the Russians from penetrating that network, since we don’t know how the network was penetrated. However, at least three possibilities have been raised:

1.      It might have been a supply chain attack through a Microsoft Office 365 reseller, as discussed in this post. In this case, this would be the first documented (that I know of) multi-level supply chain attack, where a supply chain attack was used to penetrate a supplier, and from there another supply chain attack was executed against the customers of the supplier.

2.      It also might have had something to do with the fact that SolarWinds had outsourced a lot of its software development work to organizations in Poland, the Czech Republic and Belarus (what could possibly go wrong with that?). A rogue developer could have placed the Sunburst malware in the update code being developed. However, this idea runs up against the fact that the Russians developed and deployed SUNSPOT, a very sophisticated piece of malware that did everything needed remotely; moreover, SUNSPOT painstakingly covered up what it did. In fact, this was almost certainly better than using a human being to plant the Sunburst malware, since a human would inevitably have made a mistake and been detected. SolarWinds never detected SUNSPOT until it was too late.

3.      Finally, the Russians could have penetrated a software development tool (presumably by planting malware in the tool developer’s network, which would have played the same role that SUNSPOT did with SolarWinds). Then, if SolarWinds used that tool, the Russians wouldn’t have to penetrate SolarWinds’ development network - they would have already been there! This might be the ultimate supply chain attack, for reasons described in this post. 

But how could users force SolarWinds and similar software suppliers to implement these controls? Let’s be clear: The only way to force them to do anything is with some kind of regulation. That may well be in order, since I think it’s clear (in retrospect, of course) that SolarWinds is as much of a critical infrastructure provider as any electric utility. The same consideration applies to other organizations like cloud providers. I believe that ultimately there will need to be mandatory controls on these organizations, perhaps structured something like the recently approved IoT Cybersecurity Improvement Act, which requires NIST to develop a framework for IoT suppliers rather than prescribing specific controls. Moreover, while the Act directly applies only to federal purchasing, there’s a high likelihood that it will in fact serve as a standard for all IoT devices.

So, barring regulation, what can we do to get software developers in general to improve their level of development security? The same thing we do regarding anything else we want a supplier to do: nudge them along the path of righteousness. This can include questionnaires; contract language where possible; RFPs; and other ways of asking them to commit to doing something, like – gasp, shudder! – calling them on the phone and asking them point blank. Are you guaranteed to get results using any of these means? Of course not. But what is guaranteed is that you won’t get any results at all if you don’t try.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I hope to write a post about that malware soon. There are lots of lessons to be learned from it!

Friday, January 15, 2021

The SBoM webinar has been rescheduled!


I’m pleased to announce that the informational webinar originally scheduled for December 17, but cancelled two days before that date, has now been rescheduled for January 26. As in December, the same webinar will be given at the beginning and the end of the day: 9 AM and 4 PM Eastern time. (You’re welcome to attend both, since there will undoubtedly be different questions at the two sessions; both will be live.)

I’ve reproduced below an announcement put out today by Dr. Allan Friedman, director of the Software Transparency Initiative of the National Telecommunications and Information Administration (NTIA), which is part of the US Department of Commerce. The announcement includes the connection information for both webinars. There is no requirement to register in advance.

Because it’s been more than a month since I discussed this webinar or why it’s taking place (and since I may have some new readers who didn’t see my previous posts on this subject), I’ll briefly summarize:

1.      Allan is leading a multi-year “multistakeholder process” to promote the production (by software suppliers) and use (by companies that use software, meaning just about every company on the planet) of software bills of materials (SBoMs). Note that the NTIA does not develop regulations, standards, or even guidelines. Its goal is to help private industry smooth the path to widespread use of promising new technologies (a previous huge NTIA success was DNS).

2.      The group – composed of people from a cross-section of industries – has decided the best way to accomplish their mission is to conduct “proofs of concept” in particular industries. In these, suppliers and user organizations work together to test procedures for producing, distributing and “consuming” SBoMs.

3.      The healthcare industry was the pioneer for this. It started its first proof of concept in 2018 and is now in the third “iteration” of its second PoC. In each PoC and iteration, the group has pushed farther down the road toward full exchange and use of SBoMs, although there is no doubt that much remains to be done.

4.      A PoC is now starting in the auto industry, and one will soon start in the energy (including electric power) industry as well. Note that, even though the PoCs are for particular industries, SBoMs have universal application, and both the formats and the procedures are likely to be similar across industries. This means that each PoC can build on what its predecessors have already established, while still allowing participants to work with their own industry’s suppliers and users (in the energy PoC, the users will include electric utilities, industry organizations, and possibly government entities).

5.      If you are interested in the PoC, you can either be a direct participant (in which case your organization will need to sign an NDA) or an observer, meaning you can participate in the public meetings, where problems will be discussed and the results analyzed – as well as a final report drawn up. While participants need to be available for at least 1-2 meetings a week, observers can spend as little or as much time on this as they wish (you may also want to sign up for one or more of the NTIA’s four working groups, all of which have weekly meetings).

6.      The webinar is purely informational – to introduce the industry to SBoMs and talk about the healthcare industry’s experience with their PoC’s. There will be one or more further informational meetings that will go into more detail on technical issues for both suppliers and users of SBoMs. Only after these meetings will we start to plan the actual PoC and ask for participants.
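To make the “nested inventory” idea concrete before the webinar, here’s a minimal, purely hypothetical SBoM-style record and a routine that flattens it (component names invented for illustration; real SBoMs use standard formats such as SPDX or CycloneDX and carry much more detail):

```python
# Hypothetical, minimal SBoM as a nested inventory: each component may
# itself contain components, just like software built from libraries
# that are built from other libraries.
sbom = {
    "product": "ExampleNMS 5.2",
    "components": [
        {"name": "openssl", "version": "1.1.1i", "components": []},
        {"name": "web-ui", "version": "2.0",
         "components": [
             {"name": "jquery", "version": "3.5.1", "components": []},
         ]},
    ],
}

def all_components(node):
    """Flatten the nested inventory into (name, version) pairs."""
    for c in node["components"]:
        yield (c["name"], c["version"])
        yield from all_components(c)

print(sorted(all_components(sbom)))
```

The payoff for a user is exactly this flattening: when a vulnerability is announced in, say, a particular openssl version, you can tell at a glance whether it is buried anywhere inside a product you run.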

I’ll hope to see you there!

Allan’s announcement:

Most modern software is built out of smaller software components. A "software bill of materials" (SBOM) is effectively a list of ingredients or a nested inventory of these components. Visibility of these components is emerging as a necessary step in understanding software and supply chain risk. 

The goal of this info session is to make the case for transparency in the software supply chain, as well as to give an initial overview of the global SBOM work to date. We'll also highlight lessons from how the healthcare sector has come together to learn about and execute SBOM technical and operational details. Participants will emerge prepared to think about how this might impact the electric and energy sector and begin discussions around what a proof-of-concept exercise might look like. These discussions will continue over the coming weeks in greater detail. 

There will be two similar info sessions on January 26 to accommodate schedules. Connection information is below. Feel free to forward. For more information, please contact Allan Friedman: afriedman@ntia.gov

January 26, 9-10am ET

Teams Meeting

Teams Link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_MTUwOGQwYTQtYTY5YS00YmU2LWI5ZGEtMDIyY2MwNDViOTU0%40thread.v2/0?context=%7b%22Tid%22%3a%22d6cff1bd-67dd-4ce8-945d-d07dc775672f%22%2c%22Oid%22%3a%22a62b8f72-7ed2-4d55-9358-cfe7b3e4f3ed%22%7d 

Dial-In: +1 202-886-0111,,760072759#

Global dial-in numbers: https://dialin.teams.microsoft.com/2e8e819f-8605-44d3-a7b9-d176414fe81a?id=760072759 

January 26, 4-5pm ET

Teams Link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_MTUwOGQwYTQtYTY5YS00YmU2LWI5ZGEtMDIyY2MwNDViOTU0%40thread.v2/0?context=%7b%22Tid%22%3a%22d6cff1bd-67dd-4ce8-945d-d07dc775672f%22%2c%22Oid%22%3a%22a62b8f72-7ed2-4d55-9358-cfe7b3e4f3ed%22%7d 

Dial-In: +1 202-886-0111,,760072759#

Global dial-in numbers: https://dialin.teams.microsoft.com/2e8e819f-8605-44d3-a7b9-d176414fe81a?id=760072759 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Thursday, January 14, 2021

NERC’s new CIP-013 FAQ part II


This is the second in a series of posts (at least three and maybe four) on NERC’s new CIP-013 FAQ. I’ve reproduced the first two paragraphs of the first part below, with some modifications. All of the rest is different.

NERC released a new CIP-013 FAQ to the industry in December. It’s based on the CIP-013 Small Group Advisory Sessions that were held in the fall of 2019 BC (Before Covid). This is the second FAQ NERC has released based on those sessions; the first one came out last February. This new FAQ includes all of the questions and answers in the old FAQ, as well as others in an Addendum.

For this post, I continued with the questions in the FAQ, although I skipped a few where I didn’t have much to add to what NERC said. As in the first post, for each question I provided the question, NERC’s response and then my response. Note that almost everything I say in this post is shamelessly stolen from my upcoming book “Supply Chain Cybersecurity for Critical Infrastructure”, which I expect to have published in February, or March at the latest.

What if my vendor cannot adhere to one or more sub-parts (1.2.1-1.2.6) in Part 1.2 for CIP-013-1?

NERC’s answer: The registered entities are still responsible for implementation of Part 1.2 in R1. Registered entities should have documented and implemented controls for Part 1.2 in the absence of vendor adherence. For example, if the registered entity’s vendor is not notifying it of vendor-identified incidents, then it may implement a control that monitors US-CERT, ICS-CERT, E-ISAC, and NERC Alerts.

Tom’s answer: What NERC says is spot on. I would word it more generally: the entity is responsible for mitigating all risks identified in its supply chain cyber risk management plan, whether a risk comes from R1.2 or was identified by the entity under R1.1. There are probably a few risks that can only be mitigated by the vendor (although I have yet to identify one), in which case you will probably have to accept the risk if the vendor simply won’t do their part to mitigate it.

In general, there’s always something your organization can do to mitigate a vendor risk, when the vendor won’t do it themselves. However, it’s true that whatever you do will almost never be as good as what the vendor could do if they had the inclination. But you still need to do what you can.

What additional frameworks did registered entities consider in development of Supply Chain Risk Management Programs? Furthermore, are entities developing one or more risk assessment questionnaires?

NERC’s answer: Entities considered NIST, NAGF guidance, NATF guidance, EEI guidance, SOC2, and ISO 27001 in developing their SCRM programs. In most cases, registered entities used two risk assessment questionnaires, one for vendors and one for products or services.

Tom’s answer: As far as the first question goes, I’m OK with NERC’s answer, except I don’t know what they mean when they say “EEI guidance”. EEI has put out a set of recommended procurement contract terms based on the R1.2 items (although they go beyond what’s stated in R1.2), but I don’t believe they’ve put out supply chain cyber risk management guidance in general, as is found in the other frameworks or white papers that NERC mentions.

For the second question, I’m sure there are some NERC entities that put out two risk assessment questionnaires, but I have no idea who you would send a “product or service” questionnaire to, other than the vendor. But since you have to send it to the vendor, why do you say it’s different from the vendor questionnaire?

This relates to the question of whether product/service risks are different from vendor risks. I agree there are some vendor risks that have nothing to do with one of their products – for example, that the vendor won’t properly control access to your BCSI, and it will end up in the hands of ISIS. But nobody has yet given me an example of a risk that is due only to the product and not to the vendor. After all, any product risk will have to be mitigated by the vendor, right? Why not call it a vendor risk?

This is more than just a question of semantics. I think a lot of what some people call product risks are vulnerabilities, not risks. Suppose that vulnerability X (maybe CVE X) is discovered to be present in a particular software product. Vulnerabilities happen all the time, and they’re usually patched quickly (say within a month or less, with an emergency patch being rushed out when a very serious vulnerability is identified).  These aren’t risks.

The risk is that a vendor won’t quickly provide a patch for a vulnerability, or provide some other mitigation if for some reason they can’t quickly patch it. Your organization needs to ask your main contact at the vendor whether they promise to patch vulnerabilities quickly. If they won’t promise it, then you should seek to include this requirement in contract language, or simply escalate the issue at the vendor – until somebody with the necessary authority gives you the promise you need. (It doesn’t even have to be in writing. Just write a memo to yourself after the phone call, documenting what the vendor said. If a high enough person made the promise, the vendor will live up to it, although you may still have to push them on it.)

Of course, as I’ve pointed out multiple times in blog posts (and I certainly say this multiple times in my book!), a promise alone – even if it’s made in a contract – doesn’t in itself mitigate any risk. It’s only if the vendor keeps their promise that risk is mitigated. It is up to your organization to verify they kept their promise. The best way to do this is to make sure you have a question relating to the promise on the annual BES cybersecurity questionnaire you send to them. If they promised to do X but the next questionnaire reveals they still haven’t done X, then you need to contact them and ask when they will do X. If they give you a date but blow it off again, that’s when you need to ratchet up the pressure, perhaps bringing in your lawyers if need be (but I believe you shouldn’t involve the lawyers until it’s clear you’re not getting anywhere with the people you’re talking to – and if they’re really playing around with you like this, do you really want them as a vendor in the first place?).
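The follow-up process described above – record each vendor promise, check it against the next annual questionnaire, and flag the ones still unmet – is simple enough to sketch in a few lines. This is only an illustrative sketch, not any official tool; the `Promise` structure, field names, and example vendors are all my own invention:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Promise:
    """One commitment a vendor made (in a contract, letter, email, or a documented phone call)."""
    vendor: str
    description: str          # e.g. "patch critical vulnerabilities within 30 days"
    date_made: date
    source: str               # "contract", "email", "memo of phone call", ...
    fulfilled: bool = False   # updated from the vendor's answers on the annual questionnaire

def unkept_promises(promises):
    """Return promises the latest questionnaire shows are still unmet,
    oldest first, so the longest-outstanding ones get chased first."""
    return sorted((p for p in promises if not p.fulfilled),
                  key=lambda p: p.date_made)

# Hypothetical example: two promises, one confirmed kept on the last questionnaire
promises = [
    Promise("Acme SCADA", "notify us of security incidents", date(2020, 3, 1),
            "email", fulfilled=True),
    Promise("Acme SCADA", "patch critical vulnerabilities within 30 days",
            date(2020, 6, 15), "memo of phone call"),
]
for p in unkept_promises(promises):
    print(f"Follow up with {p.vendor}: {p.description} (promised {p.date_made})")
```

The point of keeping even this minimal a record is that it doubles as your compliance evidence: it shows what was promised, when, in what form, and what you did when the questionnaire revealed the promise wasn't kept.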

I’ve gone a little off the subject here, but the point is that all supply chain security risks are due to either the vendor or the customer organization. I don’t see “product risks” as a third category, although if you want to use the word with the understanding that you’re really talking about a certain type of vendor risks, I’m fine with that.

What is the registered entity’s obligation to mitigate an identified risk, if the vendor does not agree under the contract, for example, shipping and delivery?

NERC’s answer: A vendor’s intentional or unintentional ability to adhere to the conditions of an agreement as it relates to CIP-013-1 should be identified and assessed as a risk. As with all of the risks, it is the responsibility of the registered entity to mitigate them accordingly. As an example, the registered entity may address this risk by the implementation of internal controls and processes such as using reputable shippers, tracking shipments, and requiring signatures on delivery.

Tom’s answer: NERC’s answer is fine, but I want to add that I consider shipping and delivery to be a risk that applies to the customer, not the vendor. After all, legal title to the product being shipped usually passes to the customer at shipment; that’s what F.O.B. (free on board) means. If you want to drive to the vendor and pick the product up, that’s OK. But if you want the product shipped a particular way, it’s up to you to make that clear to the vendor, and up to them to accommodate you – although you should expect to pay more if you’re asking for something beyond “normal” shipping, e.g. requiring chain of custody information.

What is sufficient evidence to document cases in which vendors refuse to meet the CIP-013 R1 Part 1.2 Requirement Parts?

NERC’s answer: In this case, the procurement documents (e.g., RFP and vendor response evaluation matrices) used for a specific applicable procurement, along with any contract language connected to the procurement can serve as primary evidence the registered entity pursued its due diligence for the R1 Part 1.2 Requirement Parts, when the vendor failed or refused to comply. As stated in R2, vendor performance and adherence to a contract is beyond the scope of R2, so the responsibility of compliance rests on the registered entity to demonstrate it implemented its Part 1.2 processes as far as it could reasonably go without negating the procurement. Since the registered entity identified risk, it is incumbent on the registered entity to enact mitigating measures that would address the vendor’s refusal to meet the Requirement Parts.

Tom’s answer: NERC’s answer is correct, but they didn’t answer the question. That’s because I believe the person who asked the question was essentially saying “I believe that if I just document that a vendor won’t do something that they’re supposed to do per one of the Requirement Parts of R1.2, then I’ll be off the hook for the Requirement Part itself. If I can just get NERC to tell me what the required evidence is, that will validate my belief and I can start going home at 5:00 from now on.” I don’t think they were looking for NERC to tell them the truth, which is that they’re on the hook to mitigate the risk behind the Part as much as they can, even if the vendor doesn’t cooperate at all.

This means that, if a vendor won’t cooperate, you should focus on showing first that you really tried to get them to do it. You don’t just throw some contract terms at them and, when they refuse to sign them, declare that you’ve done all you can to comply with that Part. You can have a talk with them, explain the importance of the item you’re requesting (e.g. notification of incidents they identify, per R1.2.2), and suggest they give you a letter, an email or even just a verbal agreement (which you then document in a memo to yourself), since contract language isn’t agreeable to them. In fact, whenever there’s a choice, I would try for the letter, email or verbal statement first, and only go to contract language when they refuse to provide one of those others. It will be a lot less expensive to your organization, plus I’d say there’s a good chance they’ll agree to what you want without your putting a legal gun to their head.

It also means that the important documentation in this case is copies of emails, etc. showing that you tried to convince the vendor to agree – using the example of R1.2.2 – to notify you when they identify an incident that could affect the security of one of their products that you have installed in an ESP. It’s not the email they send you when they refuse to sign the particular contract term you stuck under their nose. If you were turned down when you simply asked them to do this (and I would give it 2-3 tries before giving up), you will need to provide a description (and some kind of evidence) of what you did to mitigate the risk, even though it will never be as good as if the vendor had agreed to do it.

For example, in the case of R1.2.2, one way to mitigate the risk that the vendor won’t notify you of an incident with one of their products is to monitor the ICS-CERT emails and other sources, where you might expect to see an announcement of e.g. a new vulnerability in one of the vendor’s products.
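That monitoring step can itself be partly automated. Here is a minimal sketch of scanning advisory headlines (e.g. pulled from the ICS-CERT emails or whatever feed you already collect) for mentions of products installed in an ESP. The product names and sample headlines are hypothetical, and real matching would want to be smarter than a substring check:

```python
# Products you have installed in an ESP (hypothetical names for illustration)
INSTALLED_PRODUCTS = {"Acme SCADA Historian", "Contoso RTU Manager"}

def matching_advisories(headlines, products=INSTALLED_PRODUCTS):
    """Return the headlines that mention any installed product (case-insensitive)."""
    hits = []
    for line in headlines:
        if any(p.lower() in line.lower() for p in products):
            hits.append(line)
    return hits

# Hypothetical advisory headlines, as they might arrive in an ICS-CERT email digest
sample = [
    "ICSA-21-033-01: Acme SCADA Historian buffer overflow",
    "ICSA-21-033-02: SomeOtherVendor PLC hardcoded credentials",
]
for hit in matching_advisories(sample):
    print("Review and contact vendor:", hit)
```

Even a crude filter like this gives you something to show an auditor: evidence that you are watching public sources for the vendor’s products, which partially mitigates the risk that the vendor never notifies you themselves.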

Is it time to review your CIP-013 R1 plan? Remember, you can change it at any time, as long as you document why you did that. If you would like me to give you suggestions on how the plan could be improved, please email me so we can set up a time to talk. Also, if you work for a supplier trying to figure out what you need to do to help your power industry customers comply with CIP-013-1 (without wasting money on unneeded certifications), please contact me.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.