Thursday, September 23, 2021

Is there any doubt that software is the biggest source of supply chain cyber risk?


Sonatype came out with their annual State of the Software Supply Chain report last week. Who’s Sonatype, you ask? I must confess that, until I fell in with a bad crowd at NTIA last year, I hadn’t heard of Sonatype, either – or for that matter, a lot of companies that play an important role in securing our software supply chain. I first learned of them when their 2020 report came out a year ago, and I wrote this post about it.

What struck me most in last year’s report were

1.      The huge number of components found in the average software product: 135, to be exact, although my guess is that number has gone up this year, since it’s been growing steadily (it was 73 in 2017). Of course, many products have thousands of components;

2.      The large percentage of those that are open source (90%);  

3.      The fact that 11% of open source components have at least one vulnerability; and

4.      The fact that in 2020, a survey of 5,000 developers found that 21% had experienced an open source component-related breach in the past 12 months. This was down from 31% in 2018.

My big takeaway from last year’s report was that the biggest source of cyber risk in software is clearly the components that are included in it (as opposed to the small amount of code that the supplier of the software itself wrote), and the lion’s share of that risk is due to the open source components.

Does this mean you should tell your software supplier that they need to remove all open source components from their product, to reduce your risk? If they don’t laugh in your face (which would be dangerous at this time, given the Delta variant), they’ll probably say something like, “Sure we’d be glad to accommodate you. Let’s see…We already know that our product would be about 20-50 times more expensive if we had to replace the open source components with code we wrote ourselves – and if we even could write all of that code on our own.

“We invoiced you $100,000 when you bought our product recently. If you’d like our open source-free version when it’s ready in five years, that will cost you between $2 million and $5 million, which you’ll have to pay up front. Let’s say $5 million, to be safe. Will that be cash or credit card?”

You get the idea: we wouldn’t have anywhere near the volume of software we have today – and even then, we’d pay a much greater portion of our national income for software – if open source components weren’t available to augment suppliers’ paid developers with an unseen worldwide army of unpaid ones. And software suppliers are becoming more and more dependent on open source components all the time.

However, the bad guys have also noticed that open source use is growing rapidly, so they’re continually finding new ways to take advantage of that growth. And this year’s report shows how successful they’ve been in that quest. In a section titled “Software Supply Chain Attacks Increase 650%” (page 10), the report points out that, in the 12 months starting May 2020, attacks increased from below 2,000 to around 12,000.

But this growth is even more amazing when you compare it to what came before: “From February 2015 to June 2019, 216 software supply chain attacks were recorded. Then, from July 2019 to May 2020, the number of attacks increased to 929 attacks.” So we went from 216 attacks in more than four years to 12,000 attacks in the last 12 months.

And what were these attacks? Little piddling attacks you never read about? Not at all. You can see the rogues’ gallery of attacks on page 12, but just in the last nine months, there were SolarWinds and Kaseya (which could count as 1,500 attacks, since 1,500 customers of MSPs that used Kaseya software ended up being compromised with malware; however, the report counts this as one attack).

How did these attacks increase so quickly? The report says (page 10):

Legacy software supply chain “exploits," such as the now infamous 2017 Struts incident at Equifax, prey on publicly disclosed open source vulnerabilities that are left unpatched in the wild. Next-generation software supply chain “attacks” are far more sinister, however, because bad actors are no longer waiting for public vulnerability disclosures to pursue an exploit. Instead, they are taking the initiative and injecting new vulnerabilities into open source projects that feed the global supply chain, and then exploiting those vulnerabilities before they are discovered. By shifting their attacks “upstream," bad actors can gain leverage and the crucial benefit of time that enables malware to propagate throughout the supply chain, enabling far more scalable attacks on “downstream” users.

So it seems that the only thing growing faster than open source software use is open source software attacks. But there’s another really interesting trend. You might wonder if these attacks are happening because suppliers are getting sloppy about security. Au contraire. Later on in the paper (page 17), the authors discuss a metric called mean time to update (MTTU) – which is the mean time that a software supplier takes to update their open source components.

The reason this metric is so important, the paper shows, is that it is strongly inversely predictive of the security level of the software product (i.e. the relative absence of vulnerabilities). What’s really striking is how it’s changed:

2011 average MTTU = 371 days

2014 average MTTU = 302 days

2018 average MTTU = 158 days

2021 average MTTU (as of Aug 1) = 28 days

So at the same time that the number of attacks has gone through the roof, developers have gotten security religion and improved this metric more than ten-fold since 2014 (302 days to 28 days). So why are attacks still soaring? Unfortunately, it seems that the bad guys are learning how to attack software even faster than the developers are learning how to protect it.
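The report doesn’t spell out a formula for MTTU, but the idea is simple enough to sketch: for each component release a project consumes, measure the lag between the release date and the date the project actually adopted it, then average. A minimal illustration (the dates below are made up, not from the report):

```python
from datetime import date

def mean_time_to_update(update_events):
    """Given (release_date, adopted_date) pairs for component releases,
    return the mean number of days the project took to adopt each one."""
    lags = [(adopted - released).days for released, adopted in update_events]
    return sum(lags) / len(lags)

# Hypothetical component-update history for one project
events = [
    (date(2021, 1, 5), date(2021, 1, 25)),   # adopted after 20 days
    (date(2021, 3, 1), date(2021, 4, 2)),    # 32 days
    (date(2021, 6, 10), date(2021, 7, 12)),  # 32 days
]
print(mean_time_to_update(events))  # → 28.0
```

A project with this history would land right at the 2021 industry average of 28 days.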

Now, do you doubt that software supply chain attacks are not only the most important source of supply chain cyber risk, but probably the most important source of cyber risk, period? I certainly think so.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Saturday, September 18, 2021

What’s cooking at the Energy SBOM Proof of Concept?

The Energy SBOM Proof of Concept – sponsored by the Department of Energy and the National Telecommunications and Information Administration – is entering a new education phase. Due to popular demand, we decided to provide an in-depth look at what goes into SBOMs – i.e. what are the ingredients in the software bill of materials cake, anyway?

In fact, we got so carried away with the cooking metaphor that we’re going to have a series of online “cooking classes”, at which noted SBOM “chefs” will demonstrate how they combine the elements of SBOMs and VEXes, as well as other ingredients, to produce software transparency. Truth be told, this is the ultimate goal of the NTIA software component transparency initiative, of which the energy PoC (and the contemporaneous healthcare and autos PoCs) is one part.

The first cooking class is next Wednesday, September 22 at noon ET. Julia Child being unavailable, we were quite happy to engage Steve Springett, who is co-leader of the OWASP group that develops and supports CycloneDX, one of the three major SBOM formats. Steve is also the creator of Dependency-Track, a “continuous component analysis platform” that has been in operation since 2012 and has gained a wide following in its own right. In a few words, Dependency-Track lets you upload SBOMs for the software your organization runs and track vulnerabilities found in components of that software. All for free, of course.
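To show what the “ingredients” actually look like, here’s a hand-assembled sketch of a minimal CycloneDX-style SBOM. The top-level field names follow the CycloneDX JSON layout, but the component itself is hypothetical, and real SBOMs are generated by build tooling rather than written by hand:

```python
import json

# A minimal CycloneDX-style SBOM, built by hand for illustration only.
# The component named here is hypothetical.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.3",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-parser",  # hypothetical component
            "version": "2.4.1",
            "purl": "pkg:maven/com.example/example-parser@2.4.1",
        }
    ],
}
print(json.dumps(sbom, indent=2))
```

A JSON file shaped roughly like this is what you’d upload to a tool like Dependency-Track, which can then watch vulnerability feeds for the components it lists.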

Here’s the webinar information (no registration is required):

Microsoft Teams meeting

Join on your computer or mobile app

Click here to join the meeting

Or call in (audio only)

+1 208-901-7635,,877158748#   United States, Boise

Phone Conference ID: 877 158 748#


At the following biweekly meeting on October 6, we’re pleased to have Kate Stewart, VP of Dependable Embedded Systems (how’s that for a title?) of the Linux Foundation. Kate has been the leader of the team that developed – and continues to enhance and support – the SPDX format, which started 11 years ago. Two weeks ago, SPDX was “recognized as the international open standard for security, license compliance, and other software supply chain artifacts as ISO/IEC 5962:2021.” Quite an achievement!
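To give a flavor of the SPDX format Kate’s team maintains: its classic tag-value form is just “Tag: value” pairs, which makes it trivially machine-readable. Here’s a hand-assembled fragment (the package is hypothetical), with a one-line parse to show how little machinery is needed:

```python
# A minimal SPDX tag-value fragment, assembled by hand for illustration.
# The package shown is hypothetical; real SBOMs come from tooling.
spdx_doc = """\
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-product-sbom
Creator: Tool: hand-rolled-example

PackageName: example-parser
SPDXID: SPDXRef-Package-example-parser
PackageVersion: 2.4.1
PackageDownloadLocation: NOASSERTION
"""

# One "Tag: value" pair per line; split on the first ": " only,
# so values containing colons (like the Creator line) stay intact.
tags = dict(
    line.split(": ", 1)
    for line in spdx_doc.splitlines()
    if ": " in line
)
print(tags["PackageName"], tags["PackageVersion"])  # → example-parser 2.4.1
```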

Both Steve and Kate will do roughly the following, using a “basic” open source project from an OSS repository like GitHub:

1.      If possible, share the URL for the project with our mailing list in advance;

2.      Walk through how to build an SBOM (in CycloneDX or SPDX format, respectively) based on that project – with a lot of emphasis on explanation;

3.      Discuss basic use cases for the SBOM; and

4.      Show how the same method could be used to support a bigger and more complex project.

There will be time for Q&A. We also hope to be able to distribute some tasty samples, if we can overcome the (probably) small technical problem of decomposing them into digital bits and reassembling them into food at your computer. We have some people working on this problem as I write, and I fully expect they’ll have a solution by Wednesday. How hard can it be?

If you’d like to get a good preview of Steve and Kate, they did a great webinar on the “Roots of SBOM” recently, along with Chris Blask of Cybeats. The webinar was sponsored by Cybeats.

The PoC meetings are open to everybody, even if you’re not directly involved with the energy industry. You don’t have to sign up for the webinar, but if you’d like to be on our mailing list, drop an email to SBOMEnergyPOC@inl.gov.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Tuesday, September 14, 2021

Followup to yesterday’s post on VEX


My post yesterday discussed the concept of VEX, a document that can be described as a “negative security advisory”. The format for this document was recently developed by a working group in the NTIA software component transparency initiative, working in conjunction with the OASIS CSAF project. At the end of the post, I suggested that VEX could solve not just the problem it was designed to address - the fact that a large percentage of vulnerabilities found in components included in a software product aren’t in fact exploitable in the product itself - but a much wider problem.

The wider problem is that security advisories today (both positive and negative) are normally delivered in proprietary, non-machine-readable formats. If those advisories were provided in the VEX format, there could be huge improvements in the efficiency of vulnerability management, both in private industry and in government. In yesterday’s post, I suggested that this might happen someday, although I was careful not to say it was coming anytime soon.

After I put up the post, I decided to watch a video that Allan Friedman (leader of the US government effort to promote SBOMs, until recently at NTIA and now at CISA) had provided to the VEX working group that morning – and I realized afterwards that this should change the tone of what I wrote in that post. But since I don’t want my posts to be too long (God knows they’re long enough!), I decided I’d write this followup post today.

The video was of a presentation that Allan made with Jens Wiesner. Jens is with the German Federal Office for Information Security, which Jens explains is “more or less” the German CISA. Although Jens doesn’t say it, it seems he and the people who work for him are responsible for the government’s cyber vulnerability reporting in Germany. One of his staff members, Thomas Schmidt, worked with the NTIA VEX working group to create the VEX format as a profile in CSAF, the new open vulnerability reporting standard that will be finalized soon (CSAF replaces the current CVRF standard, which was developed by the same OASIS group). A lot of large organizations worldwide, including Cisco and Oracle, are committed to using CSAF (and by implication VEX, since a VEX just uses a particular set of fields in CSAF) when it’s approved.

While Allan gave a good introduction to the concept of VEX in the first half of the video, I was most impressed by Jens’ discussion (in the second half) of a big problem that his team is experiencing:

1.      Since security advisories nowadays are in proprietary formats (i.e. Cisco’s advisories don’t look like Microsoft’s, which don’t look like Oracle’s, etc.) and they aren’t machine-readable, his staff members spend lots of time just reading advisories and summarizing them in a standard format.

2.      Once the advisories have been summarized, the data are published in a machine-readable format. Of course, if the advisories themselves were all machine-readable, the data could be transferred directly to the publishing formats – completely eliminating the huge amount of time spent having expert humans read and summarize the advisories.

3.      Jens discussed this at length, and you could hear the pain in his voice about the time wasted doing this (even worse, he’s aware that staff members are bored by this work, which of course means he’s probably always worried that some of them will quit). Ironically, there are probably lots of other governments and private organizations that have the same problem Jens has: they pay security professionals a lot of money to read and summarize security advisories.

4.      Another problem with non-automated advisories is that they have to be emailed out. If you’re looking for a particular advisory from a software company you don’t normally deal with, you’re going to have to do some digging around and perhaps calling to find it, since you’re probably not on the supplier’s mailing list to receive advisories. Machine-readable advisories can simply be made available at a URL and updated in real time. Wouldn’t that be nice?

5.      However, there is something holding up Germany’s officially adopting CSAF as the vulnerability reporting framework for government and industry. Jens said it’s the lack of “asset matching” tools on the user side. While he didn’t elaborate on what he meant, I can guess: There aren’t good tools now that will take the machine-readable vulnerability information reported through CSAF and figure out how this applies to individual devices, so they can be scanned and remediated. IMO, this is also the biggest issue holding up full adoption of SBOMs.

6.      Allan and Jens both pointed to this lack of tools as a business opportunity and called on existing or new companies to move to fill these gaps.

7.      However, Jens also pointed to another possible solution to this problem: Third party service providers could ingest VEX (and SBOM) data and perform the asset matching services needed to map vulnerabilities to individual devices on the user’s network. So the lack of asset matching tools doesn’t mean that VEXes and SBOMs can’t be used in an automated fashion; it just means the automation will usually be operated by a service provider, not by the individual user organization.

8.      Ultimately, I’m sure there will be inexpensive and effective tools that will allow at least larger and better-resourced organizations to directly utilize SBOMs and VEXes for vulnerability management; but this might not happen for a few years. In the meantime, I think third party service providers will provide a “bridge” to SBOMs and VEXes for these larger organizations. And medium- and smaller-sized organizations will perhaps always find it more effective and efficient to use these third parties.

9.      Allan and Jens both agreed that machine-readability is the only good solution for vulnerability management in both the near and longer terms, and that it seems inevitable that most security advisories will be machine-readable in the not-too-distant future.[i]
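The “asset matching” gap Jens described (point 5 above) is easy to picture with a toy sketch: join machine-readable advisory records against an asset inventory and flag the devices that are actually exposed. All product names, CVEs and hosts below are hypothetical:

```python
# Toy sketch of the "asset matching" step: join machine-readable
# advisory records against an asset inventory. All names hypothetical.

advisories = [
    {"product": "ExampleHistorian 3.1", "cve": "CVE-2021-0001", "status": "affected"},
    {"product": "ExampleHistorian 3.1", "cve": "CVE-2021-0002", "status": "not_affected"},
]

inventory = {  # asset name -> products installed on it
    "hmi-01": ["ExampleHistorian 3.1"],
    "eng-ws-02": ["OtherTool 1.0"],
}

def affected_assets(advisories, inventory):
    """Return {asset: [CVEs]} for advisories whose status is 'affected'."""
    hits = {}
    for asset, products in inventory.items():
        for adv in advisories:
            if adv["status"] == "affected" and adv["product"] in products:
                hits.setdefault(asset, []).append(adv["cve"])
    return hits

print(affected_assets(advisories, inventory))  # → {'hmi-01': ['CVE-2021-0001']}
```

Real asset matching is much harder than this, of course – reliably mapping vendor product identifiers to what’s actually installed is exactly the part the missing tools would have to solve.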

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] It’s also likely that human-readable security advisories won’t disappear, simply because a lot of people still like to read the advisories themselves, rather than let an automated process have all the fun. But, especially for larger software and device suppliers, it’s just about certain that all vulnerability advisories will eventually be provided in machine-readable format, whether or not suppliers also provide human-readable versions.

Monday, September 13, 2021

What is VEX? A) What reading this blog does to me, or B) A new advisory format that’s just as important as SBOMs?


More than a year ago, the NTIA Software Component Transparency Initiative came to the realization that there was a need for another new type of document, somewhat related to software bills of materials (SBOMs) but serving a different purpose. The initial name for the document was VEX, an acronym for Vulnerability Exploitability eXchange; this name has stuck. I’ve written two posts about this document, the more readable (and recent) of which is this one.  

I’ll let you read the previous post, but my purpose now is to describe why I’ve come to believe that VEXes might end up having as big an impact on software supply chain security as SBOMs, perhaps even more. The NTIA workgroup that’s been working on VEX has so far finished just one document – a one-pager – that describes VEX. It will be published this week (and will be available here). More documents will follow later, and I’m sure I’ll have more blog posts.

You may be wondering what the h___ the workgroup has been doing for a year, if all they’ve been able to accomplish is developing a one-page document. 90% of our work (I’ve been part of that workgroup) has been developing the format for VEX, and working with the OASIS Common Security Advisory Framework (CSAF) project to incorporate VEX as a “profile” within CSAF 2.0. CSAF 2.0 is in “official draft” status, meaning it’s completed the first of two or three steps required for it to be approved as an international standard. It will replace the existing CVRF format, which was developed by the same group. The German government is very involved with this project and intends to approve it as the official standard for vulnerability reporting in Germany (probably other European governments will do that as well).

That work is now done, so that a VEX will simply be a special case of a CSAF document. The purpose of “piggybacking” off an existing standard was to avoid (whenever possible) creating new formats. The NTIA initiative hasn’t created its own SBOM format, either. Instead, they have identified three existing SBOM formats – SPDX, CycloneDX and SWID – as equally worthy of consideration for a software supplier starting to produce their own SBOMs. In fact, last week SPDX was approved as an ISO standard; yet the three formats are different enough – and equally robust – that a supplier should consider all three before they choose one (and a supplier can certainly produce SBOMs in multiple formats. There’s no need to stick to one).

Now that the format is settled, the VEX workgroup is starting to turn its attention to use cases and “rules of the road” for VEX. It turns out that, as I’ve delved further into these issues, I’ve come to realize that VEX isn’t just an adjunct document for SBOMs; it could come to be a key element of software supply chain security in its own right, as important as SBOMs themselves. Why do I say this? I’m glad you asked. Here’s some of what I’ve come to realize recently, although there’s much more that can be said.

Since the average software product has over 100 components (some have thousands), and since 90% of the average product consists of components, this means that vulnerabilities are far more likely to be identified in components of a software product than they are in the product itself. However, when you search for a product in the NVD, you will just see vulnerabilities that have been reported for the product itself, not vulnerabilities that are found in components of the product - since the NVD doesn't have an SBOM to tell it what those components are. 
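A toy illustration of that gap (all product, component and CVE identifiers below are hypothetical): searching a stand-in “NVD” by product name finds only the product-level CVEs, while an SBOM lets you look up each component as well:

```python
# Toy illustration: a lookup on the product name alone misses component
# vulnerabilities; an SBOM lets you look each component up too.
# All product, component and CVE identifiers are hypothetical.

nvd = {  # stand-in for the NVD: name -> known CVEs
    "ExampleProduct": ["CVE-2021-1111"],
    "libfoo 1.2": ["CVE-2021-2222", "CVE-2021-3333"],
    "libbar 0.9": [],
}

sbom_components = ["libfoo 1.2", "libbar 0.9"]  # from the product's SBOM

product_only = nvd["ExampleProduct"]
with_sbom = product_only + [cve for c in sbom_components for cve in nvd.get(c, [])]

print(len(product_only), len(with_sbom))  # → 1 3
```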

That’s the bad news. The good news is that, for various reasons, the majority – and perhaps the great majority - of these component vulnerabilities aren’t actually exploitable in the product itself. That is a good thing, but it does make it likely that a lot of time will be wasted by suppliers and software users, responding to false positive reports of component vulnerabilities.

The solution to this problem is for suppliers to issue notices to their customers that essentially say “Even though CVE Y is found in component X, and component X is included in our product, CVE Y isn’t in fact exploitable in our product.” This might be because the supplier has already patched that vulnerability, but it could also be for various technical reasons. However, these notices need to be machine-readable, just as SBOMs need to be. That way, they can be fed into automated tools for vulnerability and configuration management, rather than have a burdensome manual process in between the VEX (or the SBOM) and the tool.
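Here’s a deliberately simplified sketch of what such a notice communicates, loosely modeled on the product-status idea in CSAF (the fields are abbreviated, and the CVEs, component and product are all hypothetical). Given a list of component vulnerabilities found via an SBOM, VEX statements let a tool filter out the ones the supplier says aren’t exploitable:

```python
# Simplified sketch of a VEX statement's content (fields abbreviated;
# the CVEs and product are hypothetical, not from any real advisory).

component_cves = ["CVE-2021-1234", "CVE-2021-5678"]  # found via the SBOM

vex_statements = [
    {"cve": "CVE-2021-1234",
     "product": "ExampleProduct 4.0",
     "status": "not_affected",
     "justification": "vulnerable code not reachable in this product"},
]

# Drop every component CVE that a VEX marks as not exploitable here.
not_affected = {v["cve"] for v in vex_statements if v["status"] == "not_affected"}
actionable = [cve for cve in component_cves if cve not in not_affected]

print(actionable)  # → ['CVE-2021-5678']
```

This filtering step is the whole point: without it, users chase every component CVE; with it, only the genuinely exploitable ones remain.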

Every security person is used to receiving notices from software suppliers about vulnerabilities that are found in their products (or “exploitable in their products”); I call these positive software vulnerability notifications. However, the VEX can be thought of as a negative vulnerability notification, since it says that a vulnerability isn’t found in the supplier’s product.

Did VEX invent the idea of negative notifications? Not at all. Suppliers have been notifying users of non-exploitable vulnerabilities for years. The biggest example of this is patch notices: They say that if the user applies a patch to the software they’re running (or they download the current patched version), one or more vulnerabilities that were previously exploitable in the product will no longer be exploitable.

However, it is very likely that the number of VEXes will be overwhelming, far more than the number of patch notices currently put out. For example, since the average software product has 135 components, let’s say that each of these has a 1% chance of developing a vulnerability during any year; let’s also assume that the supplier’s own code in the product (which is actually called the “principal component” by the NTIA initiative) has a 1% chance of developing a vulnerability.

Since only the latter vulnerabilities are likely to appear in the NVD, this means that the NVD could be assumed to list only 1/136 of the total vulnerabilities that are found in the product – if you don’t take into account the fact that the majority of these vulnerabilities are unexploitable. If 90% of component vulnerabilities are unexploitable, this means that, once suppliers are regularly providing SBOMs (which means users will start looking up component vulnerabilities in the NVD), they will want to issue at least 120 VEXes for every vulnerability identified for their product in the NVD. In fact, the number of VEXes per year is likely to be much larger than this, since my guess is that more than one vulnerability is identified every year for the average software product.

However, a software product could easily have thousands of components (and some do, including products used every day by corporate and government users), at which point these numbers become huge. If a product has 5,000 components, the same ratio implies roughly 5,000 component vulnerabilities for every vulnerability identified for the product itself. Since 90% of these won’t be exploitable and will need a VEX to state that fact, the supplier of this product will have to issue at least 4,500 VEXes.
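For what it’s worth, the back-of-envelope arithmetic checks out, taking the report’s 135-component average and my 90% assumption at face value:

```python
# Back-of-envelope check of the VEX-volume arithmetic above.
components = 135        # average product, per the Sonatype report
unexploitable_pct = 90  # my 90% assumption

# For each vulnerability the NVD lists for the product itself, roughly
# `components` component vulnerabilities go unlisted; 90% of those would
# each warrant a VEX saying "not exploitable here".
vexes_per_nvd_vuln = components * unexploitable_pct / 100
print(vexes_per_nvd_vuln)  # → 121.5 (rounded down to "at least 120" above)

# The same ratio for a 5,000-component product:
print(5000 * unexploitable_pct / 100)  # → 4500.0
```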

This is of course a big number, but this explains why VEXes have to be machine readable. A user can receive SBOMs in a human-readable format (usually CSV or XLS files), but trying to keep track of VEXes will probably overwhelm – in a few years, not immediately – any manual effort to do that. However, it will be quite possible to track all VEXes, and to match them to a database of product components, through automated means.

And my feeling is that, once the tooling (or third party services) is developed to process VEXes, other types of negative software vulnerability notifications – especially patch notifications – will also move to the VEX format. Moreover, since VEXes can easily provide notification that a vulnerability is exploitable in a particular product, it seems to me that, in the not-too-distant future, those positive notifications will also move to VEX (right now, they’re mostly not machine-readable, and they’re provided in proprietary formats particular to each supplier. Note that suppliers will still be free to issue those advisories, even when they start providing positive VEX notices of exploitable vulnerabilities, since a lot of users will still want to have an advisory that they can read).

In other words, VEX is coming. And it might turn out to be an oncoming train. You should start thinking about how your organization might jump aboard.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Friday, September 10, 2021

What bothers me most about Joe

This is my 843rd post since I started this blog on the last day of January in 2013. I’ve completely forgotten about a lot of my old posts, but sometimes I see on my dashboard that there’s a cluster of people who have been looking at a post I’d completely forgotten. This happens more often, now that there’s a real search engine on my blog (you have to go to the blog’s main page to use it).

My most recent post, put up last Sunday, was about Joe Weiss. I’ve written about ten posts on him, all quite critical of him. Sunday’s post was no exception, and in it I referred to about four previous posts I’d written about him. As I expected, I saw afterwards that people were reading those posts.

But I was also surprised to see a post from January 2016, that I’d completely forgotten about, get a bunch of hits – in fact, even after reading it now, I have no real recollection of writing it (although it sounds like something I might write, so I have no reason to believe anybody broke into my Blogspot account and wrote it!).

You can read this post, but here’s my CliffsNotes summary of what’s important in it:

1.      In writing the post, I was reacting to a press release by the Foundation for Resilient Societies, which quoted Joe as complaining that “no current or proposed federal regulation requires encryption or other cyber-protection of grid communications with substations.”

2.      As a result, “foreign governments have been able to implant malware” in the grid, presumably by intercepting unencrypted communications between control centers (although he mistakenly calls these “control rooms”, which means something else in the power industry). And why have those governments been able to do this? Because the utilities are using the “public internet” to handle these sensitive communications.

3.      The allegation that communications between substations and control centers had been intercepted by foreign governments was of course a complete fabrication; nobody – and certainly not Joe – has ever introduced evidence that this has happened.

4.      And the allegation that utilities are using the public internet to communicate with substations? I said at the time (and still do), “I know of no electric utility that uses the public internet to communicate with its substations, encrypted or otherwise. The communications channel is always private (whether carrier-owned or utility-owned), often serial or Frame Relay.” I should have added SONET to that list. Again, a 100% fabrication.

So what’s Joe’s solution? Very simple: The NERC CIP standards need to be revised to require encryption of communications between control centers and substations. For the moment, let’s put aside the fact that there’s no need for encryption on purely private channels. What would happen if we did it anyway?

It’s pretty clear what would happen. Substation communications require responses in fractions of a second. The latency that would be induced by encryption would cause a lot of needed commands (especially opening or closing a circuit breaker) to go unexecuted or to be executed too late to do any good, leading to a lot of grid reliability problems. And Joe knew this in 2016, since anybody involved with substation automation would have told him that.

And this is why FERC, when they ordered NERC to develop a standard for encryption of communications between control centers (which is much less sensitive to latency), specifically didn’t extend that requirement to substation communications. That order, Order 822 (which ordered development of CIP version 6, although the encryption requirement for control centers was incorporated into a new standard, CIP-012), came less than a week before I wrote the post.

The bottom line is that we’re lucky that nobody in the power industry took Joe’s statements seriously then (which they might well have done if they’d been supported by a single shred of evidence). And since Joe’s normal modus operandi of totally unproven allegations – nay, not just unproven, but fabricated out of thin air – continues, nobody in the industry takes what Joe says seriously today, either. Instead of substation communications, Joe now fulminates about the imminent danger from the Aurora vulnerability, level 0 attacks, and of course “hardware backdoors” (as in the Great WAPA Transformer Incident). He alleges – always without bothering to provide a shred of evidence - that all of these threats have been realized in successful attacks. But nobody in the industry believes him.

So why do I bother writing about Joe? It’s because, despite nobody in the industry believing what he says, he still has tremendous influence, due to the fact that so many people in DoE and the power industry are afraid of the trouble he – and his legions of woefully misled fans – can bring down on them. When someone brings up Joe’s latest lie (and there seem to be lots of people who are eager to do that. Joe has a bunch of devoted followers), these DoE people nod and scratch their heads and state very solemnly that yes, these are serious questions, and we need to look into them. Even worse, they do look into them, since they feel they have to – despite the fact that they know there’s no truth to them (for example, think of the huge amounts of time invested last year, in response to the EO, in searching for sources of cyber vulnerability in devices that don’t even have a microprocessor. About 20 of the 25 device types listed in the EO fell into this category).

Yet it’s only recently that government and industry have acknowledged that the most important source of cyber threat to the power grid, or almost any other industry, is software vulnerabilities, whether deliberately planted or (usually) due to poor development practices. Tracking down these vulnerabilities, and especially the poor practices that led to them, is much more difficult than going after the easy-to-understand movie plots about bad devices causing catastrophic grid failures that Joe traffics in. This is why the pressure Joe is exerting on people to investigate his fairy stories – and especially DoE employees who fear for their jobs if they stand up to him – is inevitably causing us to shortchange the real threats we’re facing. Hardware backdoors aren’t one of them.

In my last post, I talked about Senator Joe McCarthy’s lies, but I didn’t emphasize what a terribly destructive effect he had on the US government, and especially on the State Department. Because of him, the analysts who could size up a situation and make a rational decision on the best course of action were all pushed out (or worse) and replaced with hard-line anti-Communist ideologues. Those guys (almost all males, to be sure), led us into the tremendously destructive quagmire of the Vietnam War, as well as other foreign policy misadventures of the 1950s and 1960s.

By the same token, every minute a DoE or utility employee spends worrying that he or she will get canned if they don’t treat what Joe says with great respect is a minute they don’t spend acting on the really important threats faced by the grid. This is the real problem with Joe’s clown show. It’s time to call out Joe’s lies for what they are. Repeatedly.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, September 5, 2021

Have you no sense of decency, sir?

Someone forwarded me a blog post by Joe Weiss last week that made me think of an encounter between two other guys named Joe. This encounter occurred on June 9, 1954, the 30th day of the Army-McCarthy Hearings. The hearings were called to investigate a series of conflicting charges between Sen. Joseph McCarthy and the US Army, arising from McCarthy’s year-long (mostly fruitless) investigation into possible Communist influence in the Army. The Army was represented by the eminent lawyer Joseph N. Welch. The hearings were broadcast live to a huge audience on the relatively new medium of television.

At one point in the hearings, McCarthy started attacking a young member of Welch’s law firm who had belonged to a left-leaning organization in college, but who was not involved in the hearings. Because this violated a previous agreement between Welch and McCarthy, Welch interrupted McCarthy with these memorable words: “Let us not assassinate this lad further, Senator; you've done enough. Have you no sense of decency, sir? At long last, have you left no sense of decency?” This 1-2 minute exchange is widely acknowledged to be what finally broke the back of McCarthy’s campaign of character assassination, which destroyed the careers of many people, especially government servants in the Department of State and other agencies.

I think it’s time to ask this same question of another person named Joe: Joe Weiss. Since last spring, he has been pushing an outright lie about a Chinese transformer that was ordered by the Western Area Power Administration (WAPA), which is part of DoE. The transformer was never installed at the substation for which it was intended, the Ault substation outside of Denver. Instead, when it arrived in the US last year, it was diverted to Sandia National Laboratories in Albuquerque, another part of DoE. There, it was presumably pulled apart in a search for…well, something. There has been no announcement of anything having been found, although Joe claims something was found – in fact, he says he knows all about it.

These are the facts:

1.      On May 11, 2020, Joe published a blog post that broke the story of the transformer, and said “When the Chinese transformer was delivered to a US utility, the site acceptance testing identified electronics that should NOT have been part of the transformer – hardware backdoors.” In other words, the transformer was shipped to the site (or at least put in the possession of WAPA), and there something called a “hardware backdoor” was identified in it; the transformer was then shipped to Sandia for analysis.

2.      On May 31, I put up a post pointing out that a Wall Street Journal article had said the transformer was shipped directly from the port of Houston (where it arrived in the US) to Sandia, and was never delivered to WAPA at all. So Joe’s statement that the backdoor was identified when the transformer was being installed was clearly wrong. I discussed a lot more that was wrong with his post, which I’ll let you read.

3.      Some time after that, Joe changed his story to say that the hardware backdoor was discovered at Sandia, not at the Ault substation. But then the question became why the transformer was shipped to Sandia in the first place. Did the government have some reason to believe that the transformer would arrive from the manufacturer already compromised? After all, the transformer had been on order for a year, and every single component installed in it was specified in advance by WAPA engineers – since they wanted to make sure there wouldn’t be any questions about something being planted in it. If there were concerns that a “hardware backdoor” would be installed in the transformer, why wasn’t the order cancelled early on (like in 2019?), saving WAPA a lot of money and their engineers a lot of time?

4.      But not to worry, Joe now has a new story that explains everything! It must have slipped his mind when he wrote the blog post last May, but in fact the backdoor was found when a similar transformer from the same manufacturer was installed at the Ault substation in 2019[i]! He made this assertion in a January article in Forbes, which I wrote about in this post.

5.      However, I regret to say that the January article didn’t convince me of the veracity of Joe’s new claim, since he provided no evidence for this, nor did he explain why he had never mentioned this previously when writing about the WAPA transformer.

6.      But I’m sure Joe wouldn’t have made this statement without evidence, so I was pleased to see that he did provide it during a joint press interview with Mark Weatherford in April. What’s the evidence? Here’s what Joe said (presumably with a straight face, which I certainly wouldn’t have been able to do): “I can read you—I won't even mention the country—an email I got from one of our closest allies. From someone very senior. And it's saying, ‘I am hoping you can help me with something. Regarding the transformer issue you discuss, can you please tell me to what level that information is confirmed?’”

7.      That’s Joe’s evidence! Somebody (high-ranking, of course. Joe never deals with anybody who isn’t high-ranking!) sent him an email asking for evidence on what he was saying about transformers (presumably the WAPA transformer, but maybe not). And Joe points to that email as evidence that a “hardware backdoor” was discovered in a Chinese transformer being installed at the Ault substation in 2019. Generally, when I look for evidence, I look for some sort of statement that indicates the specific event alleged actually happened – not a query from someone outside the country, asking for information on something Joe had told him. But I guess Joe and I don’t agree on what constitutes evidence.

Joe’s new blog post, as is typical for Joe, starts with the assumption that all readers know about this 2019 incident, so he doesn’t even bother to bring out his incontrovertible evidence again. However, this post goes on to implicitly accuse a huge number of federal employees, utility executives, and others (none by name, of course, since Joe has no evidence against anybody in this matter) of what can only be called treason: deliberately ignoring and covering up a massive Chinese threat to the US power grid – a cover-up that continues today, it seems.

·        While the post doesn’t specifically mention executives of WAPA, the clear implication of his story about the 2019 “incident” is that WAPA executives (and their superiors in DoE) were grossly negligent of their obligation to report such a serious event to the appropriate parties – because, as we all know, this never was reported in any way to the power industry. Obviously, the only way such a serious event (which if true should have been considered an act of war, IMO) could not have been reported is if there was a massive coverup. We’re really lucky that Joe is the only person on the planet who knows about this incident. It’s just too bad that Joe himself forgot to mention the event until this year. But I’m certainly not going to accuse Joe of treason!

·        And if Joe’s correct, the White House was seriously negligent in not issuing Executive Order 13920 until May 1, 2020. Joe has been insinuating since his original post last May that the WAPA “incident” led to the EO. So why did it take a year for the White House to act? Was it because the president at the time was a secret Chinese agent? I’m guessing that even Joe wouldn’t say that. But if what he says is true, there obviously was a serious cover-up going on at the White House for an entire year, perhaps not involving the president. Where’s Joe’s outrage about that?

·        Joe points out that the EO was designed to “reduce or eliminate” the use of Chinese-made equipment in the US electric grid, yet he says that 54 more Chinese-made transformers were installed in 2020, and more are on order now. This is of course shocking news! Why haven’t the top executives of every utility that placed one of those orders been hauled into court for violating the EO (at least up to the point that the EO was suspended this January)? In fact, if utilities had really followed the EO, they would have suspended all new investment in the US Bulk Power System (BPS) until they were sure that no product they were buying – and no single component of any product they were buying – was subject to foreign ownership or corrupt influence, pending the Secretary of Energy deciding exactly what that was (the EO gave the Secretary 150 days to do so). Essentially, all investment in the BPS, from May 1, 2020 through when the EO was suspended by President Biden this January, was illegal. Why hasn’t this been prosecuted?

·        I’ll tell you why the utility executives haven’t been hauled into court: DoE held two open virtual meetings for utility executives last May and June. In those meetings, high-level people went out of their way to tell the utilities not to change anything they were doing because of the EO, and especially not to stop any investment they planned on (although in the end, a lot of investment was deterred just because of the FUD surrounding the EO). Offhand, I’d say this doesn’t square with the idea that the EO was a reaction to finding a serious threat to the entire US power grid, does it? But if Joe’s right – and who am I to say he’s not? – then those DoE higher-ups who gave the briefings were engaged in something very close to treason, since they presumably knew all about the WAPA incident, yet they went so far as to tell utility executives to break the law by ignoring the EO. Why didn’t Joe call then for their immediate imprisonment and trial? And even though he’s more than a year late, why isn’t he calling for them to be tried now? After all, we’re still well within the statute of limitations for this alleged crime.

·        Joe also expresses outrage that, not only was the US utility industry not notified about what was found at Sandia[ii], but “this information has not been shared with our closest allies who also have these Chinese-made transformers.” This is terrible! Why isn’t Joe calling for top officials in the State and Energy departments (under the previous administration, of course) to be grilled by Congress on this, or better yet, immediately indicted? After all, those people knew all about a threat to our allies’ national security, yet they said nothing to them about it. Could it be that Joe is getting soft?

·        Joe doesn’t neglect already-installed Chinese equipment, either. He points out in the post that there were at least 150 utilities that installed Chinese equipment in the years 2018-2020, ranging from small to large (he names about 12 of them). Of course, this is much more serious, since it almost certainly means that there are lots of hardware backdoors installed up and down the US grid, just waiting for the command from their all-seeing Master in the Dark Tower in Beijing to send the entire US into darkness (as soon as the all-seeing Master learns what a hardware backdoor is, of course. And when he does, I hope he tells me).

·        Why isn’t Joe calling for all of this equipment to be pulled from service and thoroughly inspected for hardware backdoors (of course, his own company should do this work, since he’s the only person who knows what a hardware backdoor is)? And if the equipment needs to be replaced, it’s the utilities (or rather, their ratepayers) that should foot the bill for replacements, as well as for alternative power service while those replacements are being ordered and installed. This replacement alone might cause serious outages in many parts of the country, but I’m certain Joe will assure us that this is a small price to pay for ridding our grid of the Chinese hardware backdoors – whatever they are – that according to him are all over the place.

·        Joe specifically calls out two US corporations for being heavily involved in the dangerous activity of installing Chinese-made equipment on the US power grid. One is Alstom Grid, a huge equipment supplier in its own right and part of GE, which ordered a lot of Chinese equipment from 2018 to 2020, presumably for jobs on which they were performing an integration function. The other is Double Tree Systems, which Joe says “is associated with JSHP transformers and other Chinese equipment manufacturers connected to the Chinese government. Double Tree Systems continues to provide critical grid equipment and engineering services, including equipment explicitly addressed in EO 13920, to US utilities. Double Tree Systems not only imports and markets Chinese JSHP transformers in the U.S., but sells a variety of critical grid monitoring products and services.” (Joe often uses Big Bold Letters in his posts. This shows that he is Very Serious.)

·        Don’t you think executives of both Alstom Grid (maybe even GE itself) and Double Tree (sorry, Double Tree) should be made to publicly apologize for putting the US grid at such obvious risk, and to pay (preferably out of their own deep pockets) for any and all costs their customers may incur in replacing the Chinese equipment with equipment made in the good ol’ US of A? It seems to me that’s the least they could do.

·        But Joe isn’t done yet. He moves on to EPRI (his former employer, a long time ago), pointing out that in 2014, EPRI announced at Distributech that they were organizing – as Joe says – “a demonstration program of the Chinese-made grid equipment”. Why does he say this? Because one of the companies listed in EPRI’s announcement was – you guessed it – Double Tree Systems. So EPRI should certainly be held to account for their evil deeds, as should such companies as Cisco and Schneider Electric, which had the temerity to be mentioned in EPRI’s announcement, alongside Double Tree Systems. They should have known in 2014 that there would be a serious Chinese-caused grid event in 2019, for which Joe now has incontrovertible email evidence. Or something like that. Who cares about the details?

So it seems there’s no real limit to the organizations that Joe feels should be held to account for the Chinese “penetration” of the US grid that exists mainly in his fevered imagination; and most importantly, there’s also no limit to the individuals in those organizations who will have their careers set back or ended because of this. But who cares about their careers! Let the investigations and purges begin!

In all seriousness, why am I writing yet another post debunking Joe’s claims, when my previous posts and other efforts (including a “Defense Use Case” produced by Rob Lee and Tim Conway of SANS last year) clearly haven’t stopped him from promulgating this set of lies (and let’s be clear, these are 100% fabrications, not based on any real incidents at all)?

I’m doing this because, for some reason I honestly don’t understand, Joe feels it’s important to double down on his lies by implicitly accusing what must have been thousands of people of participating in covering up serious national security threats, and most likely actively working with Chinese interests to further those threats. This includes large numbers of people who work or worked for major electric utilities and utility organizations like EEI and the ESCC (who of course should have sounded the alarm about the 2019 “incident”), the Departments of Energy and State, the White House, Cisco, Schneider, EPRI…and on and on.  

I must admit, I previously considered Joe’s stories to be a big joke – they make him look like an idiot, but I thought they were harmless to anyone else. But the events of January 6 show that fabricated allegations of near-treason can lead to real consequences for completely innocent people – who it seems are usually government employees (fortunately for the people who attack them, government employees are almost never allowed to fight back against either physical or verbal attacks). On Jan. 6, those consequences included death and bodily injury.

The fact that Joe continues to hammer away at his made-up stories, and implicitly accuses these people of endangering the American way of life, makes it increasingly likely that Joe’s followers – and he does have a lot of devoted followers who are convinced he’s the only person who really understands the danger our country is in – will move to start destroying careers, like Joe McCarthy did. In fact, I heard this week of an incident that indicates this may already be starting. The person who reported it to me doesn’t want me to use his name, because, even though Joe’s allegations are easily proved to be false, he’s concerned his career will suffer anyway if it’s understood he’s standing up to Joe. Of course, this is exactly how McCarthy worked – he had every agency in Washington fearing they would be the next one in his sights if they so much as said a word against him.

That is, until Joe Welch said, “Have you no sense of decency, sir? At long last, have you left no sense of decency?”

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I don’t doubt Joe’s assertion that a transformer from the same Chinese manufacturer was installed at Ault in 2019; I haven’t looked it up, but I imagine it’s a matter of public record. It’s his assertion that a “hardware backdoor” – or any kind of backdoor – was found in the transformer that is clearly untrue. 

[ii] Of course, nothing was found when the transformer was examined at Sandia last year, since clearly there have been no warnings or notices of any kind. As evidence that something really was found, Joe points in his post to a video by a Georgia Senatorial candidate (and former Navy Seal, which of course means that what he says must be true) who says over and over again that he’s sure China is very bad. After one or two gun ads in five minutes, I got tired of watching the video, although I’m told that it’s (unintentionally) hilarious. It’s possible that he got around to saying something about the transformer later in the video, but it’s quite hard to see this as any more “evidence” than the email already mentioned.

 

Wednesday, September 1, 2021

Why the IoT Cybersecurity Improvement Act will probably fade away


My last two posts have discussed two mandatory cybersecurity “regulations” promulgated recently, within about six months of each other (under two very different presidents): the IoT Cybersecurity Improvement Act of 2020 and the IoT device “labeling” requirement in the May 12 Executive Order. At first, it might seem that these couldn’t be more different. Here are some of the differences:

1.      The Act is a law, approved by both houses of Congress and by the president. The EO is simply an order that can’t in itself override a law and could be overridden by another law, if Congress were inclined to pass one.

2.      The Act focuses entirely on IoT devices, while the EO is a sprawling attempt to improve cybersecurity in the federal government, on many different fronts.

3.      The Act focuses entirely on federal agencies, requiring them to incorporate cybersecurity concerns into their procurement terms and conditions for IoT devices. Of course, there’s no doubt that the intention of the Act was to have the Feds set a standard for private industry, but there’s not a word in the Act itself about that. On the other hand, the device labeling provision in the EO, while also aiming directly at procurement by federal agencies, repeatedly speaks of “education” and “consumers”. For example, paragraph (s) of section 4 includes the phrase “educate the public on the security capabilities of Internet-of-Things (IoT) devices and software development practices”. And paragraph (t) says that the labeling program should be “compatible with existing labeling schemes that manufacturers use to inform consumers about the security of their products” (you can read both paragraphs in full in my previous post).

4.      The Act seems to be a variation on a fairly familiar theme: require federal contractors to be assessed against a new standard that would be developed by NIST (and has been. More on that in a moment). On the other hand, just the name “device labeling” was a signal that this is a very different type of cybersecurity regulation than has been seen previously in the US – although it has been used to a limited degree in Europe and Southeast Asia.

I had also considered these to be two very different regulations. However, when I recently sat down to look at them carefully, I began to see more similarities than differences. In fact, I’ve decided the two are so similar that I really doubt they’ll both be enforced. Since I see all of the momentum as being behind the EO, and since I haven’t seen any signs at all that anybody’s even preparing to enforce the Act (OMB is in charge of enforcing both regulations, since they’re the regulator/whip-wielder for federal agencies), I don’t think it will be long before the Act either officially or unofficially sleeps with the fishes.

So why do I think the Act and the device labeling requirement are so similar? Two things:

1.      The “meat” behind both the Act and the EO is requiring federal agencies to require contractors to be assessed based on a standard (or framework) to be developed by NIST. Even though the mechanism for this assessment is called a “device label” in the EO, if you dig into it and look into how the “requirements” for the “label” will be developed, they’ll be specified by NIST, just as the framework for the assessments under the Act will be specified by NIST. Since the subject of both the label and the framework is overall cybersecurity of IoT devices and they’re both developed by the same agency, does anyone doubt that they’ll be substantially the same?

2.      But if you do doubt this, you should put that aside now. Another thing I realized as I was studying the two “requirements” was that the framework behind both of them will be the same: NISTIR 8259D. Folks, when you’re talking about two different regulations, they can’t be any more similar than if they’re both based on exactly the same set of “requirements”.[i]

Maybe you now see why I came to believe that both the EO and the Act can’t stand. They require the same organizations (federal agencies) to take the same actions (require suppliers of IoT devices to be assessed for cybersecurity), with respect to the exact same subject matter (NISTIR 8259D). One often hears about redundancy in federal programs (I’ve sometimes mentioned the Department of Redundancy Department), but believe it or not, there are people who do watch for these things and flag them.

So why do I think the Act will fade away? Mainly because I haven’t heard anything about it being enforced, even though it gave OMB a six-month deadline to start the process (which would have probably passed in July, if not June). On the other hand, there’s been a lot of activity on the IoT device labeling requirement. NIST has scheduled a two-day conference (virtual, natch) for September on this and the other device labeling requirement, found in Section 4 paragraph (u) of the EO, on secure development practices for consumer software.

And just as in the case of SBOMs, NIST – this time in conjunction with the Chair of the FTC as well as representatives of other agencies – must make final decisions on the program by early February, 270 days after the date of the EO. Since a lot more decisions are due then, that will be a busy time around NIST. They might not be able to celebrate Groundhog Day next year.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the National Telecommunications and Information Administration’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] BTW, I think that NISTIR 8259D is quite good. I hope to discuss it in more depth one of these months.