Tuesday, November 23, 2021

Patrick Miller gets SBOMs (almost) right


Noted power industry cybersecurity consultant Patrick Miller recently put up a blog post consisting of a recorded interview, along with its transcript, about software bills of materials. He did a good job of articulating the most important argument for having SBOMs: they will let software users find out about dangers lurking in the components of the software they run, rather than being totally blind to those dangers.

However, Patrick gets off track when he discusses how the users will actually learn about those dangers, in this passage:

PATRICK:

Effectively what happens is (the users) take (the SBOM) from the manufacturer, they use a tool to compare the two, and they know what's in there. They can say, 'Yes, all the things we expect are in there and nothing else is in there. They didn't insert garlic into my soup and now I'm going to get sick,' for example.

GAIL:

As part of the critical infrastructure, I ask for the SBOM, the manufacturer produces an SBOM, I run my tool against what they've given me, and it comes back a little different. What happens then?

PATRICK:

Well, that's when you have a risk. You have to have the conversation with the vendor to find out: did you actually get what you expected?

In some cases, there are things like file integrity and certificates that can go a long way to helping this. In some cases, it may just be that there's a mismatch in the tools. But that's an interesting conversation for you to have with your vendor and should be the first thing you do if things don't match up. You need to talk to your vendor right away, because things should line up.

I think Patrick is describing a sequence like this:

1.      The user receives software (either new or an update to software they’re already using); the supplier provides an SBOM with it.

2.      As a check on the supplier, the user uses a binary analysis tool to create an SBOM from the delivered software binaries (this isn’t an easy thing to do for various reasons, but it’s possible for someone with the right software development experience).

3.      The user compares the SBOM they received from the supplier to the one they created and looks for discrepancies (a minimal sketch of that comparison appears after this list).

4.      If they find a discrepancy, the user talks with the supplier to find out whether it was due to an innocent mistake or a deliberate attempt to hide something – perhaps a vulnerable component that the supplier doesn't want its customers to know about.
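
By the way, the comparison in step 3 isn't the hard part. Here's a minimal sketch of what it might look like in Python, assuming (my assumption, purely for illustration) that both SBOMs are in CycloneDX JSON format, that the file names below are stand-ins, and that matching on component name and version is good enough:

```python
import json

def components(sbom_path):
    """Return the (name, version) pairs from a CycloneDX JSON SBOM.
    Assumes the standard top-level 'components' array."""
    with open(sbom_path) as f:
        bom = json.load(f)
    return {(c.get("name"), c.get("version")) for c in bom.get("components", [])}

# Hypothetical file names, purely for illustration
from_supplier = components("supplier_sbom.cdx.json")    # step 1: the SBOM the supplier sent
from_binaries = components("binary_analysis.cdx.json")  # step 2: the SBOM built from the binaries

# Step 3: the discrepancies
missing_from_binaries = from_supplier - from_binaries
missing_from_supplier = from_binaries - from_supplier

print(f"{len(missing_from_binaries)} components listed by the supplier but not found in the binaries")
print(f"{len(missing_from_supplier)} components found in the binaries but not listed by the supplier")
for name, version in sorted(missing_from_supplier, key=str):
    print(f"  unexpected: {name} {version}")
```

As the rest of this post argues, though, that diff will almost never come back empty, for reasons that have nothing to do with a devious supplier.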

However, this sequence is very unlikely to play out that way, for several reasons. First, SBOMs are almost all generated by automated tooling on the supplier side, usually as part of the software build process – and I'm talking about machine-readable SBOMs, which are the only ones that scale. There are single software products, including one that's installed today on millions of corporate desktops across the US, that have upwards of 5,000 components; the average product contains over 100 components. It's kind of hard to create those SBOMs by hand. While it would certainly be possible to edit the JSON or XML that constitutes the SBOM after it's been produced, it wouldn't be easy.

More importantly, it's just about 100% certain that an SBOM produced from the binaries (as in step 2 above) will not match one created during the final software build, even if the supplier obfuscates nothing. There are various reasons for this. One is that there are almost always other files or libraries that aren't part of the software as built, but that are essential for it to run; a binary analysis will capture these files along with those of the "software itself", while an SBOM created during the build process won't include them. Unfortunately, this also means that the SBOM your supplier sends you will almost always leave out these additional files, so you won't learn about vulnerabilities in them. This is an ongoing problem with SBOMs, although it's certainly not a fatal one.

A bigger reason is the one I alluded to in step 2 above: creating an SBOM with a binary analysis tool is always going to be much harder than creating one through the build process, since the tool's output will inevitably miss a lot of components and misidentify others. That's why a lot of judgment is required to clean up the tool's output; and even after applying that judgment, the resulting SBOM will virtually never match one produced from the final build.

Binary analysis is required in order to learn anything at all about the components of legacy software, for which no SBOM was produced at build time (i.e. just about all legacy software today). It’s like a colonoscopy: It has to be done, but it certainly isn’t a lot of fun.

Moreover, if the supplier decides to alter one SBOM (perhaps by renaming a few vulnerable components with the names of non-vulnerable components), they will have to replicate that work in every future SBOM for the same product. A new SBOM needs to be generated with every change to the software – every new version and every patch – so the devious supplier would have to make the same alteration in all of those SBOMs as well.

But the biggest reason why Patrick’s scenario is highly unlikely is this: Why should the supplier go to a lot of trouble to obfuscate vulnerable components, when virtually any SBOM you look at has vulnerable components (i.e. components for which one or more open CVEs are listed in the NVD) – and it’s certain that new vulnerabilities will be identified in those components in the future?

The problem isn’t so much that there are vulnerable components. The real question is how the supplier deals with component vulnerabilities, whenever they appear.

A 2017 study by Veracode stated “A mere 52 percent of companies reported they provide security fixes to components when new security vulnerabilities are discovered.” I would hope that number would be higher nowadays, but one thing is certain: It will definitely be higher, once most software suppliers are distributing SBOMs to their customers. Components in the software will have fewer vulnerabilities, plus suppliers will work to patch those vulnerabilities more quickly than they would have otherwise (if they would have patched them at all).

Is this because the suppliers have all just experienced a Road to Damascus moment, and decided to completely change their former ways? No, it’s because it’s human nature to pay closer attention to doing your work correctly when somebody is looking over your shoulder. That’s why the SBOM effort by NTIA (and now CISA) is called the Software Component Transparency Initiative. It’s all about transparency.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, November 22, 2021

Another good SBOM webinar


Cybeats is going to unleash the fourth in their great series of webinars on software bills of materials on November 30 from 1 to 2 PM Eastern Time. Here's their description of this one:

Software Bills of Materials (SBOMs) are beginning to arrive at critical infrastructure operators' doors. The promises of much more rapid responses to cybersecurity vulnerabilities and other tangible benefits are waiting in the wings to be proven or disproven. What will the impact of this new form of visibility and information sharing actually be?

Tune in on November 30th to learn more on how SBOMs may well prove critical to securing our critical infrastructures with our fourth episode of The State of Cybersecurity Industry: SBOMS impact on critical infrastructures.

The guests are:

·        Dr. Allan Friedman of CISA, who led what was called the Software Component Transparency Initiative when he was at NTIA. The initiative continues, but its new contours will be discussed in December.

·        Ginger Wright of INL, my co-leader of the Energy SBOM Proof of Concept.

·        Tim Roxey, who has appeared in these posts a number of times in the past, and always has quite interesting things to say.

·        Chuck Brooks, whom I met only recently, but who seems to have a good instinct for where the needle is moving in cybersecurity.

The event link is here. You don’t have to register, but they encourage you to; the button for that is on the same page.

See you then!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Friday, November 19, 2021

Where are we going? How will we get there?


When I’m looking for guidance on a decision, I often turn to the great 19th century scholar Charles Dodgson, who wrote on mathematical logic. His two greatest treatises on that subject were written under the pen name Lewis Carroll: Alice in Wonderland and Through the Looking Glass.

Near the beginning of the first treatise, after Alice has fallen down the long rabbit hole and emerged in Wonderland, she has no idea where she is and has the following exchange with the Cheshire Cat:

Alice: ‘Would you tell me, please, which way I ought to go from here?’
The Cheshire Cat: ‘That depends a good deal on where you want to get to.’
Alice: ‘I don't much care where.’
The Cheshire Cat: ‘Then it doesn't much matter which way you go.’
Alice: ‘...So long as I get somewhere.’
The Cheshire Cat: ‘Oh, you're sure to do that, if only you walk long enough.’

What has been known until now as the Software Component Transparency Initiative of the National Telecommunications and Information Administration (NTIA, part of the US Department of Commerce) finds itself currently in somewhat the same position as Alice. The leader of the Initiative, Dr. Allan Friedman, moved a few months ago from the NTIA to CISA (which is of course part of the Department of Homeland Security).

The Initiative is a “multistakeholder process” – a special type of “organization” that the NTIA has deployed in many situations (there is currently a large multistakeholder process going on for 5G – much larger than the one for SBOMs). The idea is to have participants in an industry get together to agree on rules that apply to a new technology, without even mentioning the dreaded word “regulation”. However, CISA does things differently (although they aren’t interested in becoming a regulator any more than NTIA is, as their Director Jen Easterly made clear just last week), so this process can’t continue. And one can argue that the multistakeholder process has now outlived its usefulness, anyway.

There is agreement among the people who have been participating in the Initiative that we would like to continue in some form. It is to discuss what that form will be, as well as to provide general instruction on what SBOMs are and how they can be used, that Allan has scheduled the first (hopefully annual) CISA "SBOM-a-rama" for December 15 and 16, from 12 to 3 PM ET on both days. This will be a two-day event:

1.      Allan describes the first day thusly, “The first session will focus on education, bringing the broader security and software community up to speed with the current understanding of technology and practices, and offer the opportunity for some questions and answers for those relatively new to the issue and technology.”

2.      Here's his description of the second day: "The second day will focus on identifying the needs of the broader community around SBOM, and areas of further work deemed necessary for progress. This could include specific technical issues and solutions, operational considerations, or shared resources to support the easier and cheaper generation and consumption of SBOM and related data." This is where I expect the two questions in the title of this post to be asked. As long as there's agreement on at least the first question, I'll be happy. Discussion beyond that will be exploratory, but it will continue in future meetings, however they're structured.

Who's eligible to attend? The requirements are quite rigorous, I'm afraid:

1.      You must have a working command of the English language.

2.      You must have an interest in SBOMs and how they can help you secure your organization, even if you know very little about them.

3.      You don't need software development experience. If that were a requirement, I couldn't attend either.

I’ll publish the meeting information when it’s available.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, November 17, 2021

Cloud providers are wasting their time pursuing NERC CIP

This week, my friend Maggy Powell of AWS put up a post on LinkedIn that provided a link to their most recent document regarding NERC CIP, described by Maggy as the "AWS User Guide to Support Compliance with NERC CIP Standards". She further stated that "The User Guide describes key concepts for customers considering CIP regulated workloads in the cloud."

Dale Peterson asked me for my comments on the document. Before I downloaded it, I pointed him to this post from last year, where I tried to summarize the problem that prevents NERC entities from deploying Medium or High impact BES Cyber Systems (BCS) in the cloud (they're free to deploy Low impact BCS in the cloud now). Then I reviewed (skimmed, I'll admit) the AWS document to see whether it said anything that would change the situation enough to make it at least possible to put Medium or High BCS in the cloud.

It didn't. Like the document and presentation that Microsoft Azure prepared for the NERC CIPC (remember the CIPC?) in around 2016, AWS seems to think that what needs to be done is just to convince NERC and the utilities that AWS has good security. That has nothing to do with the real problem, as my previous post explains. There's literally nothing that AWS, Microsoft, or anyone else – other than NERC, the Regions, the NERC entities, and FERC – can do to change the situation, absent a wholesale revision of the CIP standards. I replied to Dale:

I skimmed through the AWS document, but it was unfortunately as I expected. It tells you everything you need to know about AWS security, except the one thing that matters for CIP: How AWS could possibly produce the evidence required for the utility to prove compliance with about 25 of the CIP requirements, if they put BCS in the cloud.


And the answer to that question remains what I wrote last fall: There's no way any cloud provider could do that, without breaking their business model.


NERC CIP won't permit BCS in the cloud until it's completely rewritten as a risk-based compliance regime (which involves revising the NERC Rules of Procedure as well). What's also required is for the focus on devices to go away, and for the new focus to be on systems. This is exactly what the CIP Modifications SDT proposed in 2018 (a year or so after Maggy left as chairperson), and it got shot down by the big utilities, because they didn't want to have to make big changes to their procedures, etc.


That's the barrier. Until that's overcome, BCS will never be in the cloud, period. I don't see any movement toward this currently, but I'd be glad to help out the insurrectionists if they materialize.

I’ll close by paraphrasing the ending to my post linked above:

Of course, what I'm suggesting will require a much more fundamental revision of the CIP standards than even CIP version 5 was. It will also require widespread support among NERC entities, and I see no sign of that now. Does that mean BCS will never be allowed in the cloud?

I actually believe it will happen, although I won’t say when, because I don’t know (it definitely won’t be soon). I think the advantages the cloud can provide for NERC entities are so great that they will ultimately outweigh the general resistance to change. But the NERC entities themselves need to be able to change. Until that happens, there will be no BCS in the cloud, period.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, November 14, 2021

It seems DoD is willing to admit it made a mistake. Next, the TSA?

Last week, Mariam Baksh of NextGov wrote quite an interesting article about how the Defense Department is rethinking its Cybersecurity Maturity Model Certification (CMMC) program for certifying the cybersecurity of defense contractors through third-party audits. Of course, rethinking a program isn’t too unusual in the government (or in private industry), but doing so a year after the program was launched is quite unusual.

However, I’m glad they did this, since I now realize – after not having paid much attention to the program before, to be honest – that CMMC would have been a disaster if it had really been implemented as written. I’m glad to see that DoD is going to actually – get this – consult with the contractors being regulated as they revise the program.

This is in contrast with the TSA, which gave the pipeline industry only three days to comment on the cybersecurity order it was developing, and enjoined anyone who had seen the order from revealing anything about what's in it – although I can't particularly blame them for that last part, since what's in the order is pretty embarrassing.

The big problem: The TSA order requires a bunch of mitigations that are impossible to achieve (“Prevent users and devices from accessing malicious websites…”? Piece of cake! All you have to do is identify all of the “malicious websites” in the world, update the list minute-by-minute, and seamlessly block every URL. What could be simpler?). The only mitigation it doesn’t require is what would have actually prevented the Colonial Pipeline attack. It seems that wasn’t even considered (aka “The light is better here”).

Fortunately, I’m certain the order will never be implemented, since some consultation with the pipeline industry would have shown TSA that full compliance with the order would probably be beyond the means of any pipeline company, period; and in the end, no regulation that literally can’t be complied with will be allowed to stand. Which brings me back to the CMMC. That’s based on NIST 800-171. This document has many more requirements than the TSA order, although they’re much more – how can I say it? – sensible than the TSA requirements. However, NIST 800-171 shares with the TSA order the fact that it lists mitigations, not risks.

It also shares with the TSA order the fact that it doesn’t address the most important risk in the domain being addressed. NIST 800-171, which is a supply chain cybersecurity risk management standard, omits any mention of software supply chain cyber risks, which are without doubt the most important supply chain risks today (my guess is 800-171 would be very different if it had been written after SolarWinds and EO 14028).

Cybersecurity is inherently a risk management process, requiring three steps of the organization:

1.      Identification of the high-level risks to be mitigated - i.e. the cybersecurity “domains” being addressed;

2.      Identification of the low-level risks included in each domain that are applicable to the organization and the environment in which it operates; and

3.      Identification of appropriate mitigations for those risks – meaning appropriate for the organization and the environment in which it operates.

Any cybersecurity standard needs either to require that the entities being regulated take these steps, or – if whoever drafts the regulations doesn't trust those entities – take the steps on its own and simply require the entities to implement mitigations for the risks the regulator has identified (i.e. what I and others call the prescriptive approach). The latter is the approach that both the TSA pipeline order and CMMC/NIST 800-171 take. It's also the approach taken by most of the NERC CIP requirements written as part of CIP v5 (e.g. CIP-007 R2 and CIP-010 R1). Fortunately, literally all of the CIP requirements and standards written after CIP v5 are risk-based, since the industry seems to have finally learned its lesson about prescriptive cybersecurity requirements.

The prescriptive approach would work great if

A.     Whoever wrote the requirements had perfect knowledge of all current and future risks in the domain being regulated – e.g. pipeline or Bulk Electric System operations;

B.     Those persons also had perfect knowledge of all current and future mitigations for those risks, and could choose the best ones;

C.      The entities being regulated are similar enough that the risks and mitigations that are appropriate for one will be substantially the same as for another (of course, we know that's true in the power industry, where there's very little difference between, say, ConEd and a co-op in the middle of Nebraska); and finally

D.     The requirements are written so that, taken as a whole, they won’t pose an undue burden on an organization of any size, given that I know of no organization that has an unlimited budget for cybersecurity mitigation.

Needless to say, no person or single organization meets the first two criteria, and few if any groups of regulated entities meet the third. As for the fourth criterion, it would take a group of people with godlike powers of perception to draft such requirements. The problem is that people who aren't gods will inevitably err on the side of over-regulation: they'll list a requirement for everything they can think of that could be important, with no consideration of whether having to meet every requirement might literally bankrupt most of the organizations that have to comply.

Then how should a cybersecurity regulation be written? I’m glad you asked that. Looking at the three steps listed above, I believe the regulation itself should accomplish the first step – that is, the regulation should identify the high-level risks to be mitigated. This at least gives the organizations being regulated a place to start, as opposed to telling them to identify risks starting with a blank piece of paper.

Then the organization being regulated should take the second two steps on their own, although with oversight (and advice) from the regulator:

2.      Identify low-level risks included in each domain, applicable to the organization and the environment in which it operates; and

3.      Identify appropriate mitigations for those risks – meaning appropriate for the organization and the environment in which it operates.

Are there any cybersecurity requirements or standards that require these three steps – and nothing more? I’m sure there are a few, but the closest requirement that I know of is NERC CIP-010 R4, the requirement for cybersecurity of “Transient Cyber Assets” (e.g. laptops) and “Removable Media” (e.g. USB drives) used temporarily at power facilities like substations. This requirement doesn’t actually mention risk at all, but it requires a plan that includes ten sections describing mitigations for specific risks like “Introduction of malicious code”, “Software vulnerabilities” and “Unauthorized use”. These are high-level security domains, for each of which the utility has to develop appropriate mitigations.

The requirement even suggests high-level mitigations in each domain that the utility might decide to implement, and these suggestions almost always include the option of "Other methods…" of the utility's own choosing. However, if the utility decides to implement another method, it needs to convince an auditor that the method it chose does as good a job of mitigating the risk in question as the examples listed in the requirement.

How about NERC CIP-013? That’s definitely a risk management standard, but it doesn’t list high-level risks. It just tells the utilities to identify supply chain cybersecurity risks on their own, without stating that they should consider domains like software security, manufacturing security, software vulnerability management, etc. Therefore, in my opinion, it doesn’t make the cut. CIP-013 also doesn’t require mitigations at all – although that was clearly due to a simple oversight by the drafting team.  

So I’m glad to see that DoD is going to revise the CMMC program, since I simply don’t see how it can be fully implemented as written. And I’m especially glad to see that they’re going to get input from the contractors who will be regulated (some of them, anyway), rather than trying once again to require them to address every cybersecurity requirement that they could think of, with no regard for what’s the best way for contractors to mitigate the most cybersecurity risk possible on a non-infinite budget.

Of course, I can’t blame the folks at DoD for thinking that other organizations have infinite budgets. If DoD says something is needed and asks for it passionately, they’ll get it. And if they keep asking for more and more and that results in higher costs, they’ll get the funds needed to meet those higher costs, too. Plus, if a bunch of nosy reporters ask about the reasons for those higher costs, the answer will be – of course – classified. Just look at the F-35.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, November 11, 2021

Upcoming Fortress webinar: “Patch poisoning”


I’ve said many times in this blog that I think software supply chain security attacks and ransomware attacks are the two biggest cyber threats in the world today. And in terms of growth rate, the former have it all over the latter. Ransomware is now a fairly mature threat, and while it’s definitely growing, it’s not growing at anywhere near the rate of software supply chain attacks.

In this post, I pointed out that Sonatype’s annual State of the Software Supply Chain report in September said that, in the 12 months starting May 2020, software supply chain attacks increased from below 2,000 to around 12,000, or 650%. By contrast, from February 2015 to June 2019, 216 software supply chain attacks were recorded. So we went from 216 attacks in more than four years to 12,000 attacks in the last 12 months. Wouldn’t just about any CEO be willing to sacrifice one or two limbs to achieve that rate of growth? And now ENISA, the EU Agency for Cybersecurity, predicts that there will be four times as many software supply chain attacks in 2022 as in 2021.

Fortress Information Security, as you may know, is in the business of supply chain cybersecurity, and they are quite aware of the importance of software supply chain attacks. That's why they're hosting a webinar on Thursday, November 18 from 11 AM to noon EST titled "Patch Poisoning: Critical Infrastructure's Backdoor". They've put out a white paper to go with it, titled "Patch Poisoning: Software Supply Chain Attack Detection and Prevention", which does an excellent job of explaining a number of serious attacks that you (and I) have barely heard of. No developer credentials are required!

"Patch poisoning" – a wonderfully descriptive term that I'd never heard before – is a great way of characterizing software supply chain attacks. They poison the trust that develops between suppliers and customers of any product, but especially software. Of course, Exhibit A is the 18,000 SolarWinds customers who allowed the seven "poisoned" updates of SolarWinds Orion to be automatically installed on their networks. And why shouldn't they have done so? They'd trusted SolarWinds for years, and there was no reason to even consider blocking these updates. After all, the updates wouldn't even have been allowed to install if they weren't digitally signed by SolarWinds… And if you can't trust an update signed by your supplier, what can you trust?

Good question. Sign up for the webinar (and read the white paper) to learn the answer. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, November 10, 2021

The joy of VEX, part II

 

When I wrote the first part of this series, I was thinking there would be only two parts. However, I now realize there will definitely be more than that, and they will likely be spread out (since I have severe ADD and can never focus on a single subject in a set of consecutive posts).

I also realized that, in order to have a firm foundation for the posts to follow this one, I need to restate most of what I wrote in the first post and focus on exactly how VEX can greatly expand the possible types of vulnerability notifications – as well as why that is a good thing. 

Vulnerability notifications have almost always been of one type: A notification that a particular vulnerability (usually a CVE) is exploitable in a particular software product (or set of products), and in a particular version (or set of versions). This notification is almost always accompanied by an announcement that a patch is available that will fix the vulnerability, in the version specified in the notification.

In 2020, the Software Component Transparency Initiative of the National Telecommunications and Information Administration, which has been conducting a "multistakeholder process" to establish guidelines for the production, distribution and use of software bills of materials (SBOMs), identified the need for a new type of vulnerability notice: a notification that a vulnerability is not exploitable in a particular software product.

At first, it seems strange that one would want to put out an announcement about a vulnerability not being exploitable. After all, previous announcements about a vulnerability being exploitable were always accompanied (sometimes in the same document) by an announcement that a patch is available that will fix the vulnerability. If you know the vulnerability is exploitable, you presumably will have all the motivation you need to apply the patch.

But if a vulnerability isn’t exploitable, then the announcement is essentially saying “Relax, you don’t have to do anything about this.” Do human beings really need motivation to do nothing about a problem? Most of us (maybe some of you are exceptions, but I doubt it) are quite happy to get a pass on action, since there are close to an infinite number of actions that we know we should be taking, but we aren’t – volunteer in a refugee camp in Bangladesh, sell our gas-powered car and take public transportation everywhere, move next door to our parents so they don’t have to fly across the country to see their grandkids, etc. Finally, here’s someone telling us we shouldn’t do anything!

As I’ve explained in various posts (including in part I, but even better in this post), the NTIA initiative (and others) realized that, because a large percentage of vulnerabilities in software components aren’t in fact exploitable in the software product itself, when SBOMs are widely distributed this will lead to two serious problems: 1) suppliers will be overwhelmed with support calls about component vulnerabilities that aren’t exploitable – i.e. they’re false positives – and 2) software users will waste large amounts of time scanning for, and generally worrying about, those same component vulnerabilities.

So the VEX is almost the exact opposite of the “traditional” vulnerability notification described above. Instead of saying “CVE 123 is exploitable in Product A Version X”, the VEX says “CVE 123 is not exploitable in Product A Version X”. In other words, “Don’t worry about CVE 123, at least as far as our product is concerned, even though it does apply to one of the components of our product.”

The message conveyed by a VEX is important, but just as important is the fact that the VEX is machine-readable, like an SBOM. The average software product has 135 components, but a single product can in fact have thousands. Since it's likely that the majority of vulnerabilities identified for those components won't in fact be exploitable in the product itself, there will likely need to be many more VEXes issued than SBOMs.

When an organization receives an SBOM from a software supplier, they will process it through a tool that looks up current vulnerabilities for each of the components (there will also be services available in the near future that will do this for the user). This will likely produce a big list of vulnerabilities, but before anyone panics after seeing it, the VEXes will start coming in and will remove the non-exploitable vulnerabilities from that list, paring it down drastically. So suppliers won't be overwhelmed with support calls for phantom vulnerabilities in their products, and users won't lose sleep over those same phantom vulnerabilities.
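
To make that concrete, here's a rough sketch of the filtering step in Python. The data structures below are simplified stand-ins that I made up for illustration; they are not the actual SBOM or VEX formats, and the CVE numbers and component names are fictitious:

```python
# Hypothetical, simplified data -- not the real SBOM or VEX formats.
# component_vulns: what the tool found by looking up each SBOM component in the NVD.
component_vulns = [
    {"component": "libalpha", "version": "1.2.0", "cve": "CVE-AAAA-0001"},
    {"component": "libalpha", "version": "1.2.0", "cve": "CVE-AAAA-0002"},
    {"component": "libbeta",  "version": "4.7.1", "cve": "CVE-AAAA-0003"},
]

# vex_statements: the supplier's assertions about the *product*, even though
# each CVE was reported against one of the product's components.
vex_statements = [
    {"cve": "CVE-AAAA-0001", "status": "not_exploitable"},
    {"cve": "CVE-AAAA-0003", "status": "exploitable"},
]

ruled_out = {s["cve"] for s in vex_statements if s["status"] == "not_exploitable"}

# What's left is the (much shorter) list the user actually has to act on.
actionable = [v for v in component_vulns if v["cve"] not in ruled_out]

for v in actionable:
    print(f'{v["cve"]} ({v["component"]} {v["version"]}) still needs attention')
# CVE-AAAA-0001 drops off because the supplier says it isn't exploitable;
# CVE-AAAA-0002 stays on the list because no VEX has addressed it yet.
```

The point is simply that, because both documents are machine-readable, this pruning can happen automatically in the user's vulnerability or configuration management tooling, with no human in the loop.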

However, when I wrote this post two months ago, I had begun to realize that the fact that VEXes are machine-readable means there are many more possibilities for messages that can be conveyed. Currently, the only uses that have been discussed for VEX are a) a notice that a vulnerability is exploitable in a particular product and version (i.e. a “traditional” vulnerability notification, although in machine-readable format), and b) a negative notice that a vulnerability isn’t exploitable in a particular product and version. However, there are a number of other notifications that can be sent out using VEXes, which now have to be handled as support calls, or at least by the user going to a website to try to get the answer for themselves.

I illustrated one of these notifications in the first post, although I now realize I could have gone a lot further than I did. Here’s the full scenario:

1.      The “traditional” positive vulnerability notification says for example “CVE 123 is exploitable in product A version 3.2.”

2.      Suppose a user is actually on an earlier version, say 2.8. Normally, they would need to contact the supplier – or at least look at their web site – to answer the set of questions, “What about version 2.8? Is CVE 123 also exploitable in that version? And if it is, have you patched it? Or do I need to upgrade in order to fix the vulnerability?”

3.      Suppose the supplier wants to be proactive in answering these questions, since they can certainly be anticipated. Moreover, they want to answer all of these questions for all of the versions of their product – and they want to do it in a single machine-readable document, not a whole set of documents as well as a bunch of support calls. This way, any user with these questions, who is running any previous version of the product, will simply be able to check in their own configuration or vulnerability management system. They’ll be able to find out immediately whether they need to apply a patch to their version, upgrade the version to fix the vulnerability, or do nothing.

Is it possible to do all of this using a single document?

You may have guessed that the answer is yes. And what’s the name of that document? You guessed right again – it’s the VEX! Here is what the VEX would say, although in more human-readable form:

A.     CVE 123 is exploitable in versions 2.5, 2.8-3.0, and 3.2 of Product A.

B.     It is not exploitable in versions before 2.5, as well as versions 2.6-2.7 and 3.1.

C.      Patch JKL has been issued for the current version 3.2. Please apply this as soon as possible, if you’re on the current version.

D.     If you’re not sure whether patch JKL has already been applied, check the version you’re running now. If it’s 3.25, the patch has already been applied. If it’s 3.2, you need to apply it.

E.      Patch MNO has been issued for versions 2.9 and 3.0. Please apply this as soon as possible, if you have either of those versions.

F.      If you’re not sure whether patch MNO has been applied yet, check the running version number. If it’s 2.95 or 3.05, patch MNO has been applied. If it’s 2.9 or 3.0, it still needs to be applied.

G.     There is no patch for versions 2.5 and 2.8. Please upgrade to version 3.2 as soon as possible.

H.     No action is required for all versions before 2.5.

A supplier could make all of these notifications in a single VEX, which would have fewer than ten statements. In other words, with this one VEX, the supplier would have answered all questions that users might ask about any previous version of the product, with respect to CVE 123. Even better, all of this is done without users having to contact the supplier, and without the supplier having to hire a bunch of new call center workers to handle the calls.
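
Here's a rough sketch of how I picture that single document working, again in Python. The encoding below is something I invented for this post (it is not the actual VEX schema), and the version numbers, patch names and CVE number are just the hypothetical ones from the list above:

```python
# Invented encoding of the statements above -- NOT the actual VEX schema --
# using the hypothetical versions, patches and CVE from the list in this post.
statements = [
    {"versions": ["2.6", "2.7", "3.1"],    "status": "not exploitable", "action": "none"},
    {"versions": ["2.5", "2.8"],           "status": "exploitable",     "action": "no patch available; upgrade to 3.2"},
    {"versions": ["2.9", "3.0"],           "status": "exploitable",     "action": "apply patch MNO"},
    {"versions": ["3.2"],                  "status": "exploitable",     "action": "apply patch JKL"},
    {"versions": ["2.95", "3.05", "3.25"], "status": "not exploitable", "action": "none (patch already applied)"},
]

def advice_for(version: str) -> str:
    """Return the supplier's advice on CVE 123 for the version we're running.
    Versions before 2.5 are a catch-all (statement H); everything else is
    looked up directly. Naive version handling -- this is a toy example."""
    if tuple(int(part) for part in version.split(".")) < (2, 5):
        return "not exploitable: no action required"
    for s in statements:
        if version in s["versions"]:
            return f'{s["status"]}: {s["action"]}'
    return "version not covered; ask the supplier"

print(advice_for("2.9"))   # exploitable: apply patch MNO
print(advice_for("2.3"))   # not exploitable: no action required
```

The particular encoding doesn't matter; what matters is that a user's tooling could answer "what do I do about CVE 123?" for whatever version they happen to be running, without a support call.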

Pretty neat, huh?

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, November 4, 2021

We’re from the government, and we’re here to protect you

In a speech last Friday, CISA Director Jen Easterly said her agency has kicked off an effort to identify "primary systemically important entities" to be protected from threats, often those critical to national continuity: "We are prototyping a variety of different approaches in our National Risk Management Center … to try and start identifying those entities that are in fact systemically important, and we are doing it based on economic centrality, network centrality, and logical dominance in the national critical functions." She specifically pointed to ransomware as the type of attack she's most concerned about. Whether or not she mentioned Colonial Pipeline, you can be sure that was what was first and foremost in people's minds.

Of course, I’m all for protecting “primary systemically important entities”. I’m also all for protecting children and small animals, Mom, the flag and apple pie. However, I’d also like to see the big money she’s evidently planning on spending do some good. And I fear that this looks like just another way for DHS to waste lots of money trying to combat imaginary threats, while the real ones aren’t even considered. A great example of that is the recent TSA pipeline security directive.

As I pointed out in a recent post, that directive requires pipeline companies to spend lots of money addressing a set of threats that seem to have been dreamed up in somebody’s Master’s thesis, but have never been seriously discussed in the real world, let alone been observed to…you know…actually happen. Meanwhile the real cause of the Colonial Pipeline outage – the fact that the loss of the billing system on the IT network required the OT network to be shut down – is nowhere even mentioned. It’s the classic “The light is better here” syndrome.

So what's Ms. Easterly proposing? She's talking about "protection" of critical infrastructure industries – although she didn't use the term "critical infrastructure industry", since so many industries (all except dry cleaners and golf courses, I believe) have been deemed critical in recent years. Now she seems to be talking about "really critical critical industries"; next it will be "really really critical critical industries". That, plus the fact that she's pointing to Russia and China as the sources of the threats, makes me believe she's thinking about more protections against frontal assaults on critical networks. An example of that thinking is Project Einstein, which was put in place to protect government agencies from cyberattacks, especially those coming from abroad.

How did that work? I’d say perfectly. It protected government agencies from frontal assaults on their networks, especially coming from abroad. However, did it protect those agencies from cyberattacks in general? It did that too, if you don’t take into account SolarWinds, which was neither a frontal attack nor launched from abroad. Of course, it was a supply chain attack, so it came in through an unguarded back door, not the front door. And the Russians knew all about Project Einstein, so they launched and controlled the whole attack from US-based cloud providers, not servers in Moscow or St. Petersburg. Our government protectors never saw this one coming, and many of them ended up being among the biggest victims of the attack.

Then there was Kaseya. That was a supply chain attack that launched ransomware. It ultimately compromised 1500 organizations. Once again, there was no frontal assault to defend against. Just as with SolarWinds, because the poison came from a trusted supplier, the victims cheerfully drank it.

So here’s an idea: Why doesn’t CISA start focusing on the real threat of our times, which is supply chain attacks? Sure they’re doing some good work in that area now, but rather than waste their (check that, our) money adding another lock to the 17 that are already on the front door of critical infrastructure industries, why not see what they can do to mitigate (“prevent” is probably out of the question) supply chain attacks, which always come through the back door?

The fact is that the supply chain security problem is a couple orders of magnitude bigger than the standard cybersecurity problem that CISA and other cybersecurity agencies excel at solving. Just think of it: In order to really secure Company A, you have to secure every one of their suppliers; the same goes for Companies B, C and D. Why doesn’t CISA reach out to all the suppliers to critical infrastructure industries, and find out what’s the best way to help them protect themselves from being the vector for the next big supply chain attack? And then help them put in place whatever’s required.

Of course, what’s needed will probably be different for each supplier, so this can’t be accomplished with a single big effort like Project Maginot…excuse me, Project Einstein.

But unlike Project Einstein, this might actually do some good.

 Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.