Friday, December 30, 2022

“Minimum elements”, Bigfoot, and other myths

The NTIA Minimum Elements Report describes a number of practices regarding SBOMs that might qualify as “best practices” in a lot of people’s lexicons. However, it seems both NTIA and CISA are forbidden from using the superlative form of any adjective, so the report’s title uses the less imposing “minimum elements” phrase.

Despite the fact that the Report states, “The minimum elements should not be interpreted to create new federal requirements”, some software security practitioners – and not just those at federal agencies – have glommed onto them as de facto requirements to include in every software contract they write. And the document itself makes this mistake more likely, since the word “must” (underlined, no less!) appears at various places in it.

But setting aside the question of whether the report really imposes “binding requirements”, the more important question is whether it in fact describes best practices that are worth asking for in software contract negotiations. That is, if the Report says a supplier must do something, should you really try to force your supplier to do it?

My answer to that question is, “Sure, if you don’t care whether the supplier tells you to go pound sand and find another supplier that can provide as functional a product, with as good support, at an equivalent price.” I say that because, in requiring your supplier to follow all of the provisions in the Minimum Elements Report, you’re really asking the supplier to commit to performing impossible tasks. Here’s one of the more egregious examples, which I know has caused a lot of suppliers heartburn.

Let’s look at the “minimum data fields” in the table on page 9 of the report. Most software security practitioners would probably assume this phrase means that each of these fields must be included in the SBOM, for every component. For three of the fields (the first, sixth and seventh), compliance is trivial, since they only require one response that applies to the entire SBOM. However, for the second through fifth fields, it’s just about certain that the supplier will be way out of compliance if you take the provision literally and require a response in each field for every component.

Fields 2 to 5 all have to be filled in for every component in the product. Let’s look at the fifth field, Dependency Relationship. Essentially, this refers to a possible “parent/child” relationship between two components in the product (and the product itself is a component, bearing the august title of “primary component”); of course, the “child” component is itself a component of the “parent”. Let’s do some arithmetic regarding this field:

1.      The average software product includes around 150 components, although there are many products that have thousands or even tens of thousands of components.

2.      You may wonder why a supplier wouldn’t be able to provide a response in this field, for any component of their product. After all, isn’t the supplier supposed to know every component in their product? That means they should also know the relationship of every component to every other component – with the three possibilities being child, parent, and not applicable (when neither component is a parent of the other).[i]

3.      Let’s look at “Dependency Relationship”. That sounds simple, right? If a component is included in the software your organization buys, you should not only know how the component is identified (name, version string, etc.), but you should also know what relationship that component has to other components. However, keep in mind that a Software Composition Analysis (SCA) tool will list components in a product, without providing a “dependency graph” that describes their relationships. In order to truly understand all the relationships between components in their software product, a supplier needs to have an SBOM for every component.

4.      To make it easier to describe what we’re working with when it comes to component relationships, people often talk about “tiers” of components within a product. The first tier consists of components that are direct dependencies of the product itself. These are the components the supplier has themselves included in the product. The product is the parent, and each first tier component is a child.

5.      The remaining tiers – second, third, etc. – all consist of components that are children of the components in the previous tier. For example, the second tier consists of components that are children of first tier components. In fact, this isn’t an exact description, since sometimes Component A is a child of Component B, while B is also a child of A – so there’s no good way to define tiers. But, the concept does help in making high-level statements.

6.      In practice, it will usually be extremely difficult to obtain an SBOM for any component that isn’t a direct dependency of the product itself. In other words, while it will be difficult (at least currently) to obtain an SBOM even for components in the “first tier”, it will be well-nigh impossible to obtain one for any component on a tier below the first. The main reason is that the supplier of the product, even though they’re the direct “customer” of every supplier of a first-tier component, has no direct relationship with any of the lower-tier[ii] suppliers.

7.      How do we estimate the number of components in the product? That’s simple: every component can be assumed to contain 150 components itself, since a component is often a standalone product in its own right. Thus, the product contains 150 x 150 components, or 22,500 in total.

8.      Now, how do we estimate the number of dependency relationships among those components, since that’s what this field calls for? That’s also simple: since every component has one “parent” and 150 “child” components, there are 22,500 x 151 relationships, or approximately 3.4 million. However, since the parent/child and the child/parent relationship of the same two components shouldn’t be counted separately, we divide by two, yielding about 1.7 million relationships.

9.      Thus, in order to completely fill in this field as “required” by the Minimum Elements Report, the supplier of a typical software product or intelligent device has to obtain 22,500 SBOMs and use them to identify 1.7 million dependency relationships. If we assume that obtaining each component’s SBOM (or perhaps creating it with a software composition analysis tool) takes just one hour (a wild underestimate, of course), and that identifying each relationship takes just one minute (also not at all realistic), this requires 22,500 plus roughly 28,300 hours – more than 50,000 hours in all, or about 25 person-years. Of course, that’s for one SBOM with 150 components, not one SBOM with 10,000 components; using rough calculations, the latter will take…a lot longer.
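The arithmetic in steps 7 through 9 can be sketched in a few lines of code (the 150-component average and the one-hour/one-minute effort figures are, of course, just the rough assumptions used above, not real measurements):

```python
# Back-of-the-envelope math from the example above: a product with 150 direct
# dependencies, each of which is assumed to contain 150 components itself.
COMPONENTS_PER_PRODUCT = 150

second_tier = COMPONENTS_PER_PRODUCT * COMPONENTS_PER_PRODUCT  # 22,500 components

# Each component has 1 parent and 150 children = 151 relationships; dividing
# by 2 avoids counting each parent/child pair twice.
relationships = second_tier * 151 // 2  # about 1.7 million

# Wildly optimistic effort estimates: one hour per SBOM obtained,
# one minute per relationship identified.
hours = second_tier * 1 + relationships / 60

print(f"{second_tier:,} SBOMs, {relationships:,} relationships, {hours:,.0f} hours")
```

Even with these absurdly favorable assumptions, the total comes to more than 50,000 hours of work for a single 150-component product.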

The moral of this story is that, if a supplier of a typical software product with 150 components commits (perhaps in a purchase agreement) to listing all Dependency Relationships of each component, they’ll be committing to do the impossible. In fact, of six “must” items that I count in the Minimum Elements Report, five of them will almost always be impossible to fulfill completely.

What this means is that, if your organization wishes to start obtaining SBOMs from your software suppliers, you will be shooting yourselves in the foot if you try to require the suppliers to comply with specific rules, including what’s in the Minimum Elements Report.

Instead, if the supplier doesn’t already have a specific program for producing and distributing SBOMs to customers, you need to have a discussion with the supplier about what they know they can do at this point. You can ask questions like,

·        How often can they provide an SBOM? If they can provide one with every major new version, that would at least be a start.

·        Do they have a rough idea of the number of minimum data fields they’ll be able to fill in? Even if they say they can only provide information on a small number of first-tier components, that’s a lot better than not providing any information on any components. Call it a win for now.

·        What format will they use for the SBOM? If they can provide it in one of the two primary machine-readable formats, that is good. However, machine-readable SBOMs require a machine (consisting of hardware and software) to utilize them. There are literally no examples of what I call a “complete” SBOM consumption tool currently available, although there are a few open source products like Dependency-Track and Daggerboard that do at least part of what a complete tool would do. Since there are vendors that provide services that utilize SBOMs for vulnerability management purposes, you should explore what you can get from them. You would just need to have the supplier provide their SBOM (and any accompanying VEX documents) to the service vendor.

·        What fields will the supplier include in their SBOM, besides the minimum ones? An SBOM with just the seven minimum fields won’t be terribly useful; there are a number of fields that will be needed for specific use cases. You and the supplier need to discuss what specific fields they will add to the minimum ones. This white paper from Fortress Information Security (a company for which I consult) contains good suggestions on additional fields you may want to ask for.
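To make “machine-readable” concrete: here is a minimal sketch of an SBOM in the CycloneDX JSON format, one of the two primary formats mentioned above (the product and component names are invented for illustration; a real document would carry many more fields, including all of the minimum ones):

```python
import json

# An illustrative, minimal CycloneDX-style SBOM. The product itself is the
# "primary component" (metadata.component); each entry in "components" is a
# dependency. All names and versions here are made up.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "metadata": {
        "component": {"type": "application", "name": "ExampleProduct", "version": "2.1.0"}
    },
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.17.1"},
        {"type": "library", "name": "openssl", "version": "1.1.1q"},
    ],
}

print(json.dumps(sbom, indent=2))
```

A consumption tool would parse a document like this and look up each component in a vulnerability database – which is exactly the step where the exploitability problem discussed in the next post appears.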

A supplier may tell you they can’t provide SBOMs at all. Especially if you’re part of a federal agency, you shouldn’t accept such a statement. If the supplier is concerned about software security, they should be producing SBOMs for their own use, meaning they can’t plead some sort of technical challenge. And if they aren’t using SBOMs now to learn about vulnerabilities in their products, why is your organization still buying from them? There’s no real question in the software developer community nowadays that it’s essential to track component vulnerabilities using SBOMs.

I do want to point out that a number of suppliers are worried about providing SBOMs, since they think they will give away IP if they do. This probably isn’t a valid objection in most cases, but at this point, the last thing you want to do is try to force the supplier to do something they think will jeopardize their business. You should tell them they don’t need to fill in any field for which their answer might contain IP.

At this point, with few suppliers regularly distributing SBOMs to customers, the important thing is to get your supplier to distribute something, even if it consists mostly of empty fields. We can save perfection for later.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I should point out that neither the CycloneDX nor the SPDX format uses the words “parent/child” to describe component relationships. But since the two formats use different terms to describe relationships, and since most people have an intuitive understanding of “parent/child”, I’ll use those terms here.

[ii] You may notice that there are two completely contradictory metaphors used when discussing component relationships. One is that of ground layers, with the product itself on top and the components in multiple layers below it; in this sense, the tiers farther from the product are referred to as “lower”. The other is the dependency tree, whose roots are in the ground and whose branches spread into the sky. The product is essentially the base of the tree, and the components are on branches which themselves branch off; every branch is a different “layer” or “tier”. In this sense, the more distant tiers are “higher” than the product itself.

Don’t hold me to either one of these metaphors. Just think of distance of a component from the product itself; the farther away it is, the higher the number of its tier.

Monday, December 26, 2022

Rethinking VEX


Recently, my friend Kevin Perry sent me this link to an article in CyberScoop. It gives a fairly good summary of where SBOMs currently stand: end user organizations (i.e. organizations whose primary business isn’t developing software, which even includes the business side of software developers) aren’t using SBOMs in any substantial way at all, while developers are using them heavily to manage vulnerabilities found in components of the software they’re developing. (The author of the article didn’t say how successful developers have been in using them, since that’s not commonly known; in fact, it’s very possible that the majority of software developers still don’t utilize SBOMs to secure their products at all.)

I’ve compared current SBOM utilization to a string which is lying flat on a table. Many developers, seeing how useful SBOMs have been for them, would like to distribute them to their end users, but the end users aren’t asking for them because they have no idea what they’ll do with SBOMs if they receive them. The developers are pushing on their end of the string, but without the end users pulling on their end, the string is going nowhere.

What will get end users to change their minds about SBOMs, so they start seeing some real value in them? It’s not just persuading end users that SBOMs will help them secure their networks; there’s been enough written about that subject – and attention paid to it - that this can no longer be the primary obstacle for end users, or even a major one.

One real obstacle for end users is the lack of easy-to-use, low-cost tools and third-party services to “process” SBOMs for vulnerability management purposes. The small number of tools available today address parts of the software component vulnerability management process, but what I consider the most important part – passing vulnerability data to the vulnerability and configuration management tools that end users have already deployed – is currently being addressed by nobody.

However, this problem isn’t a fundamental one. The tools will come when there’s clarity about how SBOMs can be successfully utilized by end users. But there are still some big obstacles standing in the way of that clarity. The biggest of these obstacles has to do with the fact that, even though there are lots of tools (including open source tools like Dependency-Track) that will look up vulnerabilities for components listed in an SBOM, around 95% of those component vulnerabilities won’t in fact be exploitable in the product itself.

This means that, if the user takes every one of the component vulnerabilities it finds seriously, contacts the supplier’s help desk to find out when they will be patched, and searches for those vulnerabilities in the products where they’re supposedly found, 95% of their time (and the help desk’s time) will be wasted. As Tony Turner pointed out recently, given that security staff are already overwhelmed by the real vulnerabilities their scanners find on their networks, the last thing they need is to waste time tracking down false positives.

Of course, this problem is precisely the one that VEX documents were supposed to solve. But VEXes are currently going nowhere. No supplier is regularly distributing them to users, whereas, if they were to work as intended, oceans of VEX documents would need to be coming out daily. There are specific reasons why this isn’t happening (like a lack of guidance on how to put a VEX together in either the CSAF or CycloneDX VEX formats). Those reasons do need to be addressed, since there are still important use cases for VEX documents.

However, I now realize – after having participated in just about every NTIA and CISA VEX workgroup meeting since they began in mid-2020 – that VEX as currently conceived (and even including my real-time VEX suggestion) will never achieve the primary purpose for which it was intended: shrinking the user’s list of component vulnerabilities, so that only the 3-5% that are actually exploitable remain. Given the huge number of VEX documents that would need to be issued to achieve that goal (perhaps hundreds or even thousands for a single SBOM), and the fact that the user can never be sure there aren’t still some non-exploitable component vulnerabilities for which a VEX statement has yet to be issued, this approach is simply not a good one.[i]

But the current VEX idea, which I now think is mistaken, didn’t spring out of nowhere; there are two assumptions underlying it that I now believe to be wrong. Only by removing those assumptions will we be able to identify an effective solution to the component vulnerability exploitability problem.

The more important of the two assumptions is that this is fundamentally a user problem. By that, I mean the assumption that managing vulnerabilities due to components in a software product is the responsibility of the end user, not the supplier. In this view, it’s up to the users to look up component vulnerabilities after they receive an SBOM, then use the VEX information to winnow them down to just the exploitable vulnerabilities.

However, I no longer believe this is fundamentally a user problem. Users need the information found in SBOMs and VEXes, because suppliers haven’t previously been doing a good job of fixing vulnerabilities due to components in their products. A 2017 study by Veracode said “A mere 52 percent of companies reported they provide security fixes to components when new security vulnerabilities are discovered. Despite an average of 71 vulnerabilities per application introduced through the use of third-party components, only 23 percent reported testing for vulnerabilities in components at every release.” One hopes that this situation has improved in five years, but it’s certainly possible that not even a bare majority of suppliers are doing a good job in this regard now.

Who introduced the components into the products? It wasn’t the users. Yes, the users benefit from lower software costs and improved functionality, achieved through use of third-party components. But the suppliers certainly reap the lion’s share of the benefits of using components. Frankly, they should provide the lion’s share of the effort required to identify and fix exploitable vulnerabilities in those components, both before the software is distributed to customers and afterwards.

Moreover, the supplier’s costs for identifying component vulnerabilities should be close to zero, since they’re already identifying them now. And if they’re not, shame on them – they should be. By instead putting the burden of this on the user, the suppliers are essentially saying, “It’s more efficient to have each of our customers, even if there are thousands of them, perform exactly the same analysis that we’re already performing, than it is for us just to distribute the information we already have.”

Think about it: If done right, each user’s effort to identify component vulnerabilities in a product will yield exactly the same result as would the supplier’s effort, which they should already be performing. Why is it that, if there are 10,000 customers for a product, it’s expected that they should each perform a set of steps that are the same as what the supplier is performing now? Moreover, by intention all those users will at best achieve exactly the same results as the supplier has already achieved on their own. What’s wrong with this picture?

But, as I said earlier, around 95% of the component vulnerabilities that a user or supplier identifies (by searching for the component’s name in a vulnerability database like the NVD) will be false positives. VEX documents are supposed to fix that problem, since they’re intended to let the user (or specifically their component vulnerability management tool, no examples of which exist in complete form at the current time) separate the 5% wheat from the 95% chaff.

However, besides the fact that it’s highly unlikely that VEX documents will ever be produced in the volume required to accomplish that purpose, the bigger question is why there needs to be a document at all. After all, it’s the supplier who knows how their software is put together and who, in most cases, needs to make the judgment whether a particular vulnerability is exploitable in their product. If the supplier takes responsibility for identifying component vulnerabilities in their products, they can also easily take responsibility for removing the 95% of component vulnerabilities that aren’t exploitable, leaving just the exploitable 5%. Since the exploitability information never has to leave the supplier (except to go to a service provider who may be performing this service on the supplier’s behalf), it can be communicated using any secure internal method.

In other words, it would be far more efficient, and immensely more effective, if the supplier looks up the component vulnerabilities themselves (or outsources this effort to a third party services provider) and then removes the 95% that aren’t exploitable. Then, they would provide this machine-readable list of exploitable vulnerabilities to their customers. The customers could search diligently for every vulnerability on the list, knowing that – to the supplier’s current knowledge – every one of them is a real vulnerability in the product.
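A minimal sketch of this supplier-side workflow (the CVEs and the exploitability judgments here are invented; in practice the lookup would run against a database like the NVD, and the judgments would come from the supplier’s own analysis of their code):

```python
# Hypothetical supplier-side workflow: look up all component vulnerabilities,
# then strip out the ones the supplier has judged non-exploitable, and ship
# only the exploitable list to customers.

# In reality, this mapping would be built from an NVD or OSS Index lookup for
# every component in the SBOM, plus the supplier's exploitability analysis.
component_vulnerabilities = {
    "CVE-2021-0001": "not_exploitable",  # vulnerable code path never called
    "CVE-2021-0002": "not_exploitable",  # component compiled out of the build
    "CVE-2021-0003": "exploitable",
    "CVE-2021-0004": "not_exploitable",  # mitigated by the product's configuration
}

# Only the ~5% judged exploitable ever need to leave the supplier.
exploitable = sorted(
    cve for cve, status in component_vulnerabilities.items()
    if status == "exploitable"
)

print(exploitable)  # ['CVE-2021-0003']
```

The point isn’t the code, which is trivial; it’s that this filtering is performed once, by the party that already has the information, rather than 10,000 times by customers who don’t.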

Note 12/26: When I put this post up on LinkedIn today, Steve Springett quickly pointed out that the VDR report described in the revised version of NIST 800-161 also lists exploitable vulnerabilities in a product. So why do we need another document that does that? I realized I’d been thinking the user would naturally assume that, when they receive the document I just described, it means that any other vulnerabilities they find for a component of the product are not exploitable.

However, I now realize that the user probably won’t – and moreover shouldn’t – assume that all other component vulnerabilities they find aren’t exploitable; therefore, the document should list both exploitable and non-exploitable component vulnerabilities. For one thing, the document I describe will be produced at a certain day and time. Even if it were true at that time that all other component vulnerabilities in, for example, the NVD weren’t exploitable, it’s inevitable that more component vulnerabilities will appear later on.

The report will have to be regularly updated, but it’s inevitable that there will be periods of time in which there are more vulnerabilities applicable to components than the ones described. If the non-exploitable vulnerabilities aren’t also listed in the report, the user will be led to believe that any other component vulnerability they discover isn’t exploitable, when in fact that may not be the case.

Of course, anyone who knows anything about cybersecurity will now scream out, “Are you crazy? You want to have each supplier send out documents that list each exploitable component vulnerability in each of their products to every one of their customers? How many nanoseconds do you think it will take before each of these lists shows up on Vladimir Putin’s desk?”

And here’s where the second incorrect assumption appears: It’s the assumption that the only good way to provide this information to customers is to send out a document, even a machine-readable one. Instead, I think this information needs to be provided in a strongly authenticated, machine-to-machine transaction that is initiated by the customer, not the supplier. Moreover, there needs to be strong encryption of the transaction.

And if that isn’t good enough, there are certainly ways to add security to the transaction. For example, the supplier might send – through a different encrypted, customer-initiated transaction, perhaps to a different device on the customer end – a list of the component vulnerabilities that can currently be found in the NVD (or another vulnerability database, like OSS Index) for the version of a product described in a recent SBOM. This list would pair the CVE identifier for each vulnerability with a unique short identifier consisting of randomly selected alphanumerics.

The list of exploitable vulnerabilities would be sent separately from the above list. It wouldn’t show the actual CVEs, but just the short identifiers for them, along with their status (exploitable or not); of course, this list could still be encrypted. And if that isn’t enough security, I’m sure there are more security layers that could be placed on top of that, when required by the nature or use of the software product.
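Here is a rough sketch of that two-channel scheme (all CVEs are invented, and the eight-character identifier length is an arbitrary choice; Python’s `secrets` module is used so the short identifiers aren’t guessable):

```python
import secrets
import string

# Channel 1: sent to one device - maps each CVE found in the NVD for this
# product version to a random, meaningless short identifier.
cves = ["CVE-2022-1111", "CVE-2022-2222", "CVE-2022-3333"]
alphabet = string.ascii_uppercase + string.digits
id_map = {cve: "".join(secrets.choice(alphabet) for _ in range(8)) for cve in cves}

# Channel 2: sent separately (perhaps to a different device) - only the short
# identifiers and their status, never the CVEs themselves.
status_by_id = {
    id_map["CVE-2022-1111"]: "not_exploitable",
    id_map["CVE-2022-2222"]: "exploitable",
    id_map["CVE-2022-3333"]: "not_exploitable",
}

# The customer, holding both lists, joins them back together; an eavesdropper
# who sees only channel 2 learns nothing about which CVEs apply to the product.
exploitable_cves = [cve for cve, short in id_map.items()
                    if status_by_id[short] == "exploitable"]
print(exploitable_cves)  # ['CVE-2022-2222']
```

This is only an illustration of the idea, not a protocol design; as I said, both transactions would also be encrypted and strongly authenticated.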

There’s a lot more that can be said about this proposal, as well as many loose ends that will need to be tied up. I’d appreciate hearing anybody’s comments on this. This idea will be discussed in a more participatory context at some point. In the meantime, I’m not saying we need to drop the idea of VEX documents or of real-time VEX, since there are use cases for both types of notifications.

But I don’t want to play down the magnitude of the problem I’m trying to address. Without at least partial resolution of the vulnerability exploitability problem (whether it’s VEX or something else), SBOMs will never be widely (or probably even narrowly) used.

Period.



[i] I still believe my real-time VEX idea – which will be submitted as part of a standards application to the IETF next year – is the best way to handle VEX as currently conceived. It will be much easier on both suppliers and the user’s consumption tools if “not affected” status notifications can be provided through an API, rather than by preparing and distributing a machine-readable document.

Friday, December 16, 2022

A new cop on the IoT security beat


In October, I posted about the fact that my client Red Alert Labs had become certified as an assessor by the ioXt Alliance, the global standard for IoT security; they’re only the eighth organization to receive this honor. Now, the Alliance has put out a press release to announce this fact:

NEWPORT BEACH, Calif. — Dec. 15, 2022 — The ioXt Alliance, the global standard for IoT security, today announced the addition of Red Alert Labs, a Europe-based Cybersecurity Lab specialized in IoT, to the ioXt Authorized Labs certification program. Authorized labs are the exclusive test providers for the ioXt Alliance and perform all testing required for devices to be certified by ioXt and to bear the ioXt SmartCert label, which provides security assurance to consumers and enterprises.

Red Alert Labs (RAL) is an IoT security provider helping organizations trust IoT solutions throughout their lifecycle. RAL provides comprehensive IoT security by design, risk management, consulting, audit and certification services supported by automated processes. RAL provides assessments and certifications of connected devices based on multiple standards, including IEC 62443, Common Criteria, ETSI 303 645, and NIST 8425. RAL is also involved with the European Union Agency for Cybersecurity (ENISA) to develop the EUCC scheme for ICT products and EUCS scheme for cloud services in the context of the Cybersecurity Act in Europe.

Ayman Khalil, managing partner and COO of Red Alert Labs, said, “Given our experience performing IoT device evaluations and certifications for various standards like ETSI 303 645, we are quite pleased to be working with ioXt Alliance, both for SmartCert certifications and for the upcoming U.S. IoT device security labeling program. IoXt is working closely with NIST, in accordance with the executive order given by the White House, in supporting the development of that program.”

“Authorized labs are important organizations in the ioXt Alliance as they provide ioXt certification testing to ensure devices are secure for consumers and businesses to use,” said Jan Bondoc, vice president of information technology at the ioXt Alliance. “We’re very pleased to welcome Red Alert Labs as an Authorized Labs partner to work with us to advance security in the IoT industry.”

With profile creation by top-tier companies in technology and device manufacturing, the ioXt Alliance is the only industry-led, global IoT device security and certification program in the world. Devices with the ioXt SmartCert label give consumers and retailers greater confidence in a highly connected world.

ioXt certification covers both the security controls implemented in a connected device and the manufacturer’s security practices. An example of the former is whether security updates are applied automatically when possible. An example of the latter is whether the manufacturer has published a policy to notify customers when support will end for their product.

Besides assessing and certifying connected devices and their manufacturers, RAL helps end-user organizations assess the cybersecurity risks they face from devices they are considering for procurement. After procurement, RAL helps those organizations assess and mitigate security issues identified in devices they use. For example, RAL will soon provide services based on the NIST.IR 8425 cybersecurity framework for connected devices, developed by the U.S. National Institute of Standards and Technology (NIST).

About the ioXt Alliance

The ioXt Alliance is the Global Standard for IoT Security. Founded by leading technology and product manufacturing firms, ioXt is the only industry led, global IoT product security and certification program in the world. Products with the ioXt SmartCert give consumers and retailers greater confidence in a highly connected world. Learn more at ioxtalliance.org.

About Red Alert Labs

Red Alert Labs is an IoT security provider helping organizations trust IoT solutions. It is an independent cybersecurity lab with a disruptive business offer that solves the technical and commercial challenges in IoT, and its expertise has been recognized by numerous awards. Red Alert Labs is a valued member of the ioXt Alliance, EUROSMART, IoTSF, CCC, ACN, SYSTEMATIC, CEN-CENELEC, and ECSO.

I’ve been working with Red Alert Labs for a year and a half, and I can attest that they’re a high-quality organization. Note that they work with both IoT device manufacturers and end users. In fact, Isaac Dangana of RAL and I wrote an article, published this summer, on why IoT manufacturers need to follow different practices with respect to SBOMs and software component vulnerability management than suppliers of “stand-alone” software. Intelligent devices (especially medical devices) introduce a lot of unique security concerns. Given the rate at which devices are proliferating, there’s lots of work to be done!


Wednesday, December 14, 2022

Face it: SBOMs will never be regulated into use


There haven’t been a huge number of startup companies providing services and/or tools to the SBOM community (at least, I’ve heard of fewer than I expected to by now, although I’m certainly not in a position to learn about all or even most of them). However, I understand from what many of these entrepreneurs have said that they’re expecting to get a big push from regulations of some sort.

What are these regulations? Some of the ideas I’ve heard are:

1.      Some entrepreneurs think they’re helping software suppliers comply with “NTIA regulations” for SBOMs. However, the NTIA doesn’t do regulations, period. Never has, never will. NTIA’s goal is to get members of an industry together to discuss guidelines that will enable a new technology to take off (one technology they did this for – without writing a single regulation – was DNS, for which NTIA was the first domain registrar. The last time I checked, perhaps five seconds ago, DNS is up and running quite well, thank you). The NTIA Software Component Transparency Initiative ran from 2018 through the end of 2021, and it produced a number of documents of mostly good quality. However, they were produced by different groups at different times, and they focused almost entirely on the needs of software and intelligent device suppliers (which isn’t surprising, given that over 95% of the participants, as far as I could see, were from that side of the industry). The Initiative was quite successful in spurring production and use of SBOMs by suppliers for their own component vulnerability management purposes, but everyone agrees the focus now has to be on the consumer side – i.e. the 99+ percent of organizations (public and private) whose primary business isn’t developing software but…you know, using software to provide insurance, build cars, provide cybersecurity consulting services, etc. That is, the rest of us.

2.      Other entrepreneurs have pointed to the SBOM provisions in Section 4(e) of Executive Order 14028. Those entrepreneurs say they’re helping federal agencies comply with the EO (although they’re all sure that whatever they do for the agencies will end up being demanded by private industry as well). However, all the EO requires is that agencies start asking for an SBOM from each supplier of “critical software”. The supplier could tell them to go pound sand, but the agency would still be in compliance.

3.      The EO called for NIST (really the NTIA) to develop a list of “minimum elements” for an SBOM. The NTIA did that, but most of the statements in that document are written as “may” or “should”. There are some that use “must”, including this one: “…all top-level dependencies must be listed with enough detail to seek out the transitive dependencies recursively…” This means that all “top-level” components in an SBOM need to have detail on each of their own components. But guess what? I doubt any supplier will do this for a while. For example:

a.      There are some products with thousands or tens of thousands of “top-level” components (and BTW, the concept of levels doesn’t have any formal meaning in an SBOM, since Component A can be a dependency of Component B, while at the same time Component B can be a dependency of Component A in the same product. However, there’s a general understanding of what it means). Let’s assume a product contains 1,000 top-level components (dependencies). Each of those will normally have at least 140 sub-components (which is what’s meant by transitive dependencies, although that term applies to all the levels “below” sub-component). This means the supplier of this product will have to contact 140,000 suppliers of sub-components (90% of which will be open source projects, from which the supplier will usually not get any response) to ask for an SBOM, for just one of the supplier’s products. How many of these “second-tier” SBOMs are they likely to get? My guess is they’ll be lucky to get five. So that means our luckless supplier will have 139,995 instances of non-compliance. Talk about a bad day!

b.      For probably at least 95% of the components the supplier lists in their SBOM, the component name originally produced by the tool that created the SBOM won’t be searchable in any major vulnerability database, meaning the supplier will have to use AI, fuzzy logic and other tricks to find it – and in many cases, they may never find the component in a vulnerability database. This is the famous “naming problem”. Thus, just producing a single usable SBOM could be a major project, again for a not-unusual product with hundreds or thousands of just “top-level” components.

4.      The EO required NIST to develop “guidelines” for SBOM production and use by February 6 of this year. NIST did put out a document on that date, but it mainly referred to other documents, produced by NIST and others. There was no set of guidelines in NIST’s document. This wasn’t a surprise to people who understand that NIST doesn’t produce guidelines (let alone regulations) for anything having to do with cybersecurity. NIST – very properly, in my opinion – understands that cybersecurity is a risk management exercise. 

All NIST’s cybersecurity documents provide a framework for managing a particular type of cyber risk, but none of them have ever prescribed certain actions – nor will they ever do that. Sure enough, the documents NIST put out early this year – SP 800-161r1 and SP 800-218 (aka the Secure Software Development Framework) – had nice things to say about how SBOMs fit into their frameworks, but nothing else in terms of prescriptive requirements, or even guidance. I’ve just learned that NIST asked to be removed from EO 14028 compliance responsibilities earlier this year, so perhaps they found this episode a little too traumatizing for their taste.

5.      The most recent dashed regulation hope was the US House version of the National Defense Authorization Act (NDAA) of 2023. A stiff requirement that software suppliers to the US armed services provide SBOMs was removed last week, right before the bill was passed by the House (the bill now needs to be passed by the Senate, which of course is very unlikely to add SBOMs back into it).
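As an aside on the “naming problem” mentioned in item 3b above: fuzzy string matching is one of the tricks suppliers resort to. Here’s a minimal sketch using Python’s standard difflib; the component and database names are invented for illustration, and real matching pipelines are far more sophisticated than this:

```python
import difflib

# Hypothetical component name as emitted by an SBOM generation tool
sbom_name = "apache-log4j-core"

# Hypothetical names as they might appear in a vulnerability database
db_names = ["log4j-core", "log4net", "logback-classic", "commons-logging"]

# get_close_matches returns candidates above a similarity cutoff, best match first
matches = difflib.get_close_matches(sbom_name, db_names, n=3, cutoff=0.5)
print(matches)  # best-guess candidates; a human still has to confirm the match
```

In practice this only narrows the candidates; a person or a purpose-built tool still has to confirm that a candidate really refers to the same component, which is why the naming problem remains unsolved.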

There was one more hope of the regulation buffs: The Office of Management and Budget (OMB), which was charged with implementing the EO, had set August 10, 2022 as the date when federal agencies would need to start requesting SBOMs from their suppliers. That date passed without any formal guidance being issued, but “guidance” did come on September 14, in the form of Memorandum M-22-18. Surely, that document set out some concrete requirements for SBOMs, right?

’Fraid not. The OMB simply said:

A Software Bill of Materials (SBOMs) may be required by the agency in solicitation requirements, based on the criticality of the software as defined in M-21-30, or as determined by the agency. (my emphasis)

 It followed that up by saying:

SBOMs must be generated in one of the data formats defined in the National Telecommunications and Information Administration (NTIA) report “The Minimum Elements for a Software Bill of Materials (SBOM)”. (Since just about every SBOM I’ve seen in the past couple of years is in either SPDX or CycloneDX format, it’s clear the market is already requiring this. No government statement was needed.)
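For what it’s worth, here’s roughly what a minimal SBOM in one of those two formats looks like – a CycloneDX-style skeleton built in Python, with the component details invented for illustration (real SBOMs carry much more metadata than this):

```python
import json

# A minimal CycloneDX-style SBOM skeleton. The component shown is invented
# for illustration; a real SBOM lists every component the product contains.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "log4j-core",
            "version": "2.17.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
        }
    ],
}
document = json.dumps(sbom, indent=2)
print(document)
```

The purl (package URL) field is one of the identifiers tools use to look a component up in a vulnerability database – when it’s present and correct, which is exactly what the naming problem is about.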

In other words, any entrepreneur who believes that existing regulations, or coming regulations, will make them rich in the SBOM “marketplace” had better recalibrate their business plan, so it doesn’t depend on regulations at all.

But guess what? I’m quite optimistic that it won’t be hard for these entrepreneurs to recalibrate their plans, since I think an initial wave of SBOM use – which I’ll admit I thought, until this spring, would occur when Section 4(e) of the EO became mandatory on August 10 – is definitely coming. But I’d say the ETA for this initial wave is 2-4 years from now (although I’m sure that at least a few major suppliers will start producing SBOMs with some regularity next year).

My optimism is mainly due to one piece of evidence: In March, Steve Springett, leader of the team that created and maintains the Dependency-Track open source SBOM analysis tool (which is now ten years old), as well as co-leader of the CycloneDX project, learned to his surprise that DT was being used 202 million times a month to look up the components in an SBOM in the OSS Index open source software vulnerability database. Moreover, in September he found out that number had jumped to 270 million (he estimates it’s at least 300 million now, which I don’t doubt). This amounts to at least a 50% annual growth rate. In case you don’t know, that’s a lot. And these aren’t individual component lookups – each query looks up all the components in one SBOM. If you assume that the average SBOM has 100 components (a low estimate), this means there are now 30 billion monthly component queries to OSS Index, produced by just one software tool – and there are a number of other tools that do what DT does.
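Those growth numbers are easy to sanity-check. A quick back-of-the-envelope calculation, using only the figures cited above:

```python
# Sanity-checking the Dependency-Track numbers cited above (all figures from the text)
march, september = 202_000_000, 270_000_000    # monthly SBOM lookups via OSS Index

six_month_growth = september / march - 1        # about 34% in six months
annualized = (1 + six_month_growth) ** 2 - 1    # compounded over a year: roughly 79%

# 300 million monthly lookups (the current estimate) times ~100 components per SBOM
monthly_component_queries = 300_000_000 * 100   # 30 billion individual component queries

print(f"{annualized:.0%}")             # comfortably above the 50% figure cited
print(f"{monthly_component_queries:,}")
```

So “at least 50% annual growth” is, if anything, an understatement of what the two data points imply.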

Dependency-Track has always been aimed primarily at software developers, and while Steve Springett is sure that at least some non-developer software consuming organizations (i.e., any organization whose primary purpose isn’t developing software) are using it to manage component vulnerabilities in third-party software that they operate, he’s also sure there aren’t many such cases.

However, given the rate at which developers are increasing their use of Dependency-Track, it seems to me inevitable that their customers (i.e. users) will want to get in on the action as well. After all, if the software gets attacked, they’ll be the ones that pay the price (and speaking of price, what do you think the SolarWinds attacks, which resulted in a number of large federal departments and agencies being compromised for many months, cost the US economy? I have no idea either, but it has to be huge – and we probably still don’t know everything that happened during the many months that the agencies were essentially owned by the Russians).

My feeling is that if Dependency-Track, a supplier tool that could fairly easily be turned into an end user tool, is experiencing such heavy – and rapidly-growing – usage now, it’s inevitable that users will soon say, “Hey, I want some of that.” They’ll start asking for SBOMs from their suppliers (which they’re not usually doing now, unless they’re subject to the EO). When some suppliers start providing SBOMs to their customers, those same customers will ask all their software suppliers to do the same. Soon, we’ll have what’s known in the military as a “self-licking ice cream cone”: a process that sustains itself by its very existence.

When that happens, do I think that SBOMs will start flowing out to end users like water on a desert? They will flow out, but mostly not to end users. I don’t know of a single complete end user tool for consuming SBOMs today - and certainly not one that’s free or low-cost. I doubt this situation will change in the next few years, either. Since I’m sure that end users will need an almost completely automated tool to perform all the steps required for software component vulnerability management, I don’t see them being at all interested in performing these steps themselves (or even stringing together a bunch of open source tools), until such an automated tool is available to them.

But this doesn’t mean that software consumers won’t want to learn about exploitable vulnerabilities in the software they depend on, or that they have to wait for a tool to do this. Rather, I think the initial 2-3 years of SBOM utilization by end-user organizations will occur mostly through engagement of third party service providers, who will provide subscription-based services for SBOM/VEX-based software component vulnerability management. Those few providers will be able to spread out the cost of developing these services over a large number of customers, just like Henry Ford was able to spread the cost of building cars over a much larger number of customers than his competitors could, by implementing efficient assembly lines.

As a matter of fact, I really don’t think that end users should even have to pay the service providers’ bills for identifying and tracking exploitable software component vulnerabilities; this should be the suppliers’ responsibility. After all, they’re the primary beneficiaries of the huge increase in the use of software components over the last 20 years or so. Why shouldn’t they also be primarily responsible for mitigating the additional risk introduced by this practice? More importantly, why should every one of a supplier’s, say, 10,000 customers have to perform exactly the same analysis to learn about exploitable vulnerabilities in the software they use, when the supplier should already have performed this analysis and could easily share the results with all of them? And if the supplier hasn’t performed this analysis, shame on them. All the more reason for them to start performing it themselves or paying a third party to do it for them.

There’s another big advantage to having third party service providers perform this analysis: the need for the biggest use case for VEX – telling software end users that component vulnerabilities they’ve identified through having an SBOM aren’t exploitable in the product itself – goes away. The supplier and the service provider can make their own arrangements for communicating whether or not component vulnerabilities are exploitable in the product itself; they don’t need to follow any particular format to do that. Sounds good to me!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Wednesday, December 7, 2022

It seems this might be a much bigger problem...

Today, I was emailing with a reporter about my post on the North Carolina substation attacks, when I saw this article that had been linked in the Utility Dive newsletter (which I normally open as soon as it hits my inbox). It seems that NC might not have been an isolated incident after all. You should read the whole article, but IMHO the executive summary is these two paragraphs:

“Power companies in Oregon and Washington have reported physical attacks on substations using handtools, arson, firearms and metal chains possibly in response to an online call for attacks on critical infrastructure,” the memo states.

The aim, according to the memo, is “violent anti-government criminal activity.”

Another:

The department wrote that attackers would be unlikely to produce widespread, multistate outages without inside help. But its report cautioned that an attack could still do damage and cause injuries.

Of course, we’re not talking about multistate outages. A multi-day, multistate outage might be a catastrophe with loss of life, especially if there were a big city in one of those states (see Ted Koppel’s Lights Out, which very eloquently describes what would happen if there were a multistate outage that lasted more than a few days. What’s unfortunate is that Ted let someone persuade him that he should sell the book as being about the effects of a cyberattack on the grid, when in fact exactly the same results would occur, no matter what the cause. The book is an easy read and still definitely worth it, years after it came out).

But an attack that could “do damage and cause injuries” is a good description of what happened in NC. It certainly caused damage, and people were injured in car crashes, if for no other reason. We may hear later about people on oxygen at home, etc. that were victims as well. An extended power outage is always a big problem.

Finally:

The targets also present an increasing challenge to secure because attackers don’t always have to get as close as they did in North Carolina in order to do damage, Southers said. With the right rifle, skill and line of sight a sniper could take a shot from as far as 1,500 meters (about 4,900 feet) away.

That’s quite interesting. If line of sight is a problem (which it definitely was with the Metcalf attack), then that will require fairly big, expensive fences.

Unfortunately, as I told the reporter today, it will be impossible to prevent attacks like this without huge expenditures (unless there’s a good way to triage substations by degree of risk, which I’m not sure is the case here). One thing I suggested is that, since this is obviously a national problem, the feds should finally step in and pay for the mitigations themselves, rather than dump all the cost on the utilities and especially their ratepayers. Dumping the cost on utilities has for the most part been the practice so far, when it comes to both physical security and cybersecurity, but it’s time to acknowledge that this is a national problem.

P.S. After I wrote the above post, I prepared a summary of my ideas for the reporter, but she never used it. Here is what I wrote:

Physical attacks on power substations are almost impossible to prevent. The biggest reason is that substations are deliberately located as far as possible from concentrations of people like cities and towns, although that can never be completely avoided. They also have to be open to the air, since transformers generate huge quantities of heat that need to be dissipated. It’s certainly possible to have guards, walls, cameras, high-bandwidth communications, etc. at every substation. However, the cost of that would be huge and would have to be borne by the ratepayers. In some cases, like substations that serve military bases or hospitals, the cost may be justified.

What was most disturbing about the North Carolina attacks was that the attackers were able to cause a widespread, prolonged outage. Those should never occur anywhere in the US, although they’re unavoidable in huge events like hurricanes. The grid is supposed to have enough redundancy that, even if one or two substations or generating plants are taken out, nobody will lose power at all – or if they do, it will be brief and/or confined to a relatively small area. That obviously wasn’t the case with these attacks, and that will probably be the subject of the inevitable investigations by state and federal regulators.

The news that just broke about substations in Washington State and Oregon having been attacked in a similar fashion by extremist groups raises the question whether the North Carolina attacks were just the tip of the spear. If this is really a national problem, I think the federal government should step in to help power utilities create an appropriate level of physical hardening in most substations, or at least those above a certain threshold of criticality. In addition, changes may need to be made to the power system itself, to prevent any more successful attacks from causing widespread or prolonged outages.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Monday, December 5, 2022

The North Carolina substation attacks

Yesterday, I was asked by a couple of reporters how the NC attacks differ from the 2013 sniper attack on the Metcalf substation in California, and whether the NERC CIP-014 standard (which was developed as a result of that attack) was applicable to the NC substations – as well as whether it would have prevented the attacks if it was applicable. Here is my take on this situation, acknowledging there still isn’t a lot of information available on the NC attacks:

There’s a big difference between the attacks in NC on Saturday and the 2013 sniper attack on the Metcalf substation near San Jose, California:

1.      Metcalf is an important high-voltage transmission substation. The NC substations appear to be much lower voltage and were primarily for power distribution, not transmission (although a lot of substations combine transmission and distribution functions).

2.      The Metcalf attack was meticulously planned and executed by the team of snipers that carried it out, using military grade weapons. There seems to have been much less planning in the NC attacks, although there’s not enough known yet to say that for certain.

3.      While there were some short local outages after the Metcalf attack, power was quickly restored. However, since the interstate power transmission system (known as the Bulk Power System) has redundancy built into it at all levels, there was no widespread or prolonged outage at all.

4.      On the other hand, the power distribution system is very localized and has much less redundancy built into it. Thus, even though there was probably much less damage to equipment in NC, the fact that the distribution system was damaged led to a widespread and continued outage, since there wasn’t enough redundancy to prevent this (and since it seems multiple substations were attacked, the fact that similar equipment might have been damaged in those substations may have reduced the redundancy that would otherwise have come into play).

5.      After the Metcalf attacks, federal regulators ordered rigorous (and expensive) protections for certain strategic transmission substations, including Metcalf. It’s just about certain that the NC substations were not in scope for that standard, called NERC CIP-014.

6.      However, even if the NC substations had been in scope, it’s doubtful these attacks could have been prevented, although they might have had less impact. NERC CIP-014 is designed to protect against large-scale coordinated attacks, not impulsive ones by individuals who don’t consider risk carefully before going ahead. Probably the reason that there haven’t been any attempts (that have been publicized, anyway) to build on the Metcalf attack template is that whoever planned that attack (and it had all the earmarks of just being a trial run – a proof of concept, if you will) realized that CIP-014 had turned the odds against them in general. However, a couple of average guys, who are perhaps motivated by the desire to make a point on a culture war issue, aren’t likely to carefully balance risks and benefits in this way.

Local outages happen all the time. One of the biggest causes of these is squirrels chewing on the conductors. Another important cause is thieves stealing copper. The main goal with local outages is to minimize their impact and quickly remediate them. The biggest question about the NC attacks is why these measures didn’t work. I’m sure there will be an investigation to answer that question.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Friday, December 2, 2022

The big problem with VEX


Suppose you’re a talent scout for a major league baseball team. Every year, you want to find about 50 of the best college ball players, out of approximately 60,000, to sign contracts and enter your farm team program.

You have two options for identifying those 50 players:

1.      You can go to games, talk to people, read stats, etc. – and use that research to identify the Nifty Fifty; or

2.      Instead of identifying the 50 directly, you can identify the 59,950 that you don’t want to sign a contract with. Once you’ve done that, you’ll sign contracts with the remaining 50.

Either way, you’ll end up with 50 players, and for the sake of argument, let’s assume you’ll end up with the same 50, regardless of how you identify them. Is there any advantage to choosing one of the two methods over the other or are they more or less the same?

If you think that it doesn’t matter whether the scout chooses Door Number 1 or Door Number 2, you should put on your dunce cap and sit in the corner. Door Number 2 will require a huge effort and probably a small army of scouts. On the other hand, Door Number 1 is probably something you and an assistant could handle quite easily in the course of a year. Door Number 1 is really the only choice.

Now, let’s change the field from baseball to software component vulnerabilities. Suppose you know that, on average, over 95% of software component vulnerabilities – which you identify by looking up the components found in an SBOM in a vulnerability database like the NVD or OSS Index – aren’t exploitable in the product itself. Thus, if you spend a lot of time trying to verify every component vulnerability you identify, and perhaps you harass the supplier’s help desk by asking when they’ll fix each vulnerability, on average, 95% of that time will be wasted.

Wouldn’t it be great if you had some way of knowing beforehand which were the 5% of vulnerabilities that were exploitable, so you could just focus your time on those? You would suddenly become 95% more productive. Maybe you’d be able to make it home for dinner with your family more often than you do now.

Of course, this is the problem VEX is intended to solve. So, here’s the big question: Which of the above two methods does VEX use? If you rely on VEXes to narrow down your list of component vulnerabilities to the 5% that are exploitable, are you choosing Door Number 1 or Door Number 2? The answer is…envelope, please…Door Number 2! Yes folks, instead of identifying the 5% of component vulnerabilities that are exploitable, VEX documents – if they work as intended - tell you the 95% that aren’t exploitable. You then need to remove the 95%, so you’re left with the 5%.

Is this efficient? Obviously not! But why can’t a VEX document just tell you the exploitable 5% of vulnerabilities, not the non-exploitable 95%? I doubt I have to tell you this: If a software supplier were to produce a document that lists just the exploitable vulnerabilities and send it to their customers, they might as well skip the middle man and send it directly to Uncle Vlad himself at russia.piratestate.gov. Because a supplier has to assume that any document they send out is going to end up in the wrong hands, no matter how many NDAs and blood oaths they require from their customers. That’s just the nature of documents.

You may wonder why Door Number 2 creates such a problem. If the supplier just prepared a single VEX document that listed all the non-exploitable vulnerabilities in their product and your tool (when one is available) ingested that document and removed all the non-exploitable vulnerabilities from the list for the product in question, would that be so hard? In fact, you could pick up your coffee cup and take a sip while your tool was doing this, and the tool might finish before you did. What’s wrong with that?

The problem is that the supplier isn’t likely to identify all or even most non-exploitable component vulnerabilities at one time. It may take them weeks or months to identify even most of them. Is it likely they’ll wait ‘til they’ve identified a large number of them, then send you one VEX document with all that they’ve discovered so far? Not if they want to keep their customers. The last thing they’ll want to do is delay a week or two to send out a VEX saying that 20 component vulnerabilities aren’t exploitable, then receive a bunch of angry emails from customers, telling them how much time they spent trying to find and patch those 20 vulnerabilities, and telling them where they can put their stupid software from now on.

Suppliers that value their customers’ time (and license fees) will feel compelled to send out a VEX as soon as they learn a component vulnerability isn’t exploitable. And if they’ve sent out three VEXes in the morning and they learn about another non-exploitable vulnerability in the afternoon, they’re not going to wait overnight to see if they learn about a few more the next morning, before they send another VEX. They’ll send the new VEX in the afternoon and more VEXes the next morning, if they learn about more non-exploitable vulnerabilities then.

This is why I said in this post that, once SBOMs start to be distributed widely, there will almost inevitably be “rivers of SBOMs and oceans of VEXes” – perhaps hundreds or thousands of VEXes for every SBOM that comes out. Remember, plenty of SBOMs have thousands or even tens of thousands of components. How many VEXes do you think there will need to be for every one of these SBOMs?

My “real-time VEX” API idea – which I’m happy to relate will be incorporated into a larger standards proposal to the IETF by Steve Springett (co-leader of the OWASP CycloneDX and Dependency-Track projects) next year – is meant to mitigate this problem. It will be hugely more efficient for both suppliers and end users if an SBOM consumption tool can learn in real time whether or not a particular CVE is exploitable in a particular version of a product.
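To be clear, the proposal hasn’t been published yet, so everything in the sketch below – the endpoint, the URL scheme, and the response shape – is my own invention, meant only to illustrate what a consumer-side “real-time VEX” lookup might feel like:

```python
import json
import urllib.request

def vex_url(base_url: str, product: str, version: str, cve: str) -> str:
    """Build the query URL for a hypothetical real-time VEX endpoint.
    The path scheme here is invented for illustration."""
    return f"{base_url}/vex/{product}/{version}/{cve}"

def cve_status(url: str, token: str) -> str:
    """Fetch the exploitability status of one CVE in one product version,
    over an authenticated connection (hypothetical response shape)."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["status"]  # e.g. "affected" or "not_affected"

# Example, against an imaginary supplier endpoint:
# cve_status(vex_url("https://vex.example.com", "widgetd", "4.2.1",
#                    "CVE-2024-0001"), token="...")
```

The point of the design is that authenticated, per-query access sidesteps the document problem: nothing listing all the exploitable vulnerabilities ever has to exist as a file that could fall into the wrong hands.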

However, even with real-time VEX in place, there’s another problem that remains to be solved: Going back to our baseball analogy, suppose the scout – perhaps after having a few too many drinks one night – decides that Door Number 2 is the best way to identify the top 50 players, and they start eliminating players until there are only 50 left. Does that mean those 50 are all deserving of contracts? After all, the year these players were born (probably about 20-22 years before the current year) might have been a bad one for future baseball stars, and maybe there are only two current college players who deserve to sign a contract.

If our intrepid scout goes ahead and signs up the remaining 50 players and 48 of them turn out to be duds, do you think his scouting days might soon be over? Maybe he’ll make a career change to ball boy, if he’s lucky – or else hot dog vendor. By the same token, what will happen if the supplier, after putting up real-time VEX notifications saying that 97% of the component vulnerabilities in a product aren’t exploitable, puts out a new notification saying the remaining 3% of component CVEs (which they’ll list) are exploitable – yet, a month later, has to admit that the real percentage of exploitable vulnerabilities is actually 0.02%, not 3.0%?

Of course, if a customer took a chance and didn’t do anything after the original notification that said 3% of the vulnerabilities were exploitable, they may be happy, since their laziness paid off. However, if you were a customer who took the notification seriously and spent a weekend or two looking for a bunch of vulnerabilities that later turned out to be false positives, how happy would you be? Might you remove the supplier from your holiday card list? Or even worse?

What this problem reveals is it would be great if it were possible for a supplier to provide a truly secure notification to a customer about a set of exploitable component vulnerabilities (which of course the supplier hasn’t been able to patch yet. If they’d patched them, the vulnerabilities wouldn’t be exploitable after all). As I said, I don’t see how this would be possible if the only option for a notification were a document.  But when we’re talking about an API with authenticated access, there may be some ways to do this. In fact, I can think of several already, although it’s worth exploring a lot of options, since some will be less efficient than others.

Steve Springett (there aren’t too many posts these days in which I don’t mention his name at least a couple of times. I see no likelihood that will change very soon, thank goodness. He’s without much doubt the most creative and far-seeing person in the rapidly expanding SBOM world) said that he often gets asked on a podcast, “In ten years, where do you hope SBOMs will be?” He always answers, “I hope they won’t be needed at all.” Meaning, “The only reason we need to exchange machine-readable documents now is that we haven’t been creative enough yet, in figuring out how machines can exchange the information they need without any human intervention at all.”

The same goes for VEXes (in fact, much more so): Just as the best SBOM will be no SBOM, the best VEX document will be no VEX document. Just you wait!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.