Thursday, January 26, 2023

“NOASSERTION”? No problem!

The NTIA Minimum Elements document contains this short paragraph (on page 12, for those keeping score at home):

An SBOM should contain all primary (top level) components, with all their transitive dependencies listed. At a minimum, all top-level dependencies must be listed with enough detail to seek out the transitive dependencies recursively.

Translated into English, this says that an SBOM “must” list all the “first level” components – meaning those that are direct dependencies of the product itself. (Remember, though, that this document is just supposed to contain suggested guidelines[i] for the federal government; exactly what status the word “must” has in that context is debatable.)

However, stepping into the real world for a moment: how many readers are currently receiving SBOMs from even one of their software or intelligent device suppliers that list every one of the “first level” components? (I put that phrase in quotes since, technically, there’s no such thing as levels in an SBOM; in practice, though, everybody talks about them, including me.) And even if the supplier swears on a stack of Bibles that every first-level component is listed, are you sure they’re telling the truth?

I didn’t think I’d see any hands, although I’ll point out that’s a trick question. After all, how many users are going to audit their suppliers to make sure they’re including all the “first-level” components in an SBOM? That’s right, nobody is – nor would it make any sense to do that. The only way you could audit the supplier would be to ask them to produce another SBOM – and then you’d have to audit that one, and then…turtles, all the way down.

But the biggest problem with this suggestion (or whatever it is) is that it’s likely to prevent many suppliers from producing SBOMs for their customers at all. After all, there are some perfectly good reasons why suppliers won’t be able to “comply” with it. Perhaps the three most important are:

First, the contents of the SBOM – including the number and identities of components – will vary depending on when the SBOM is produced: whether it comes from the source code, was created during the build process, was created with the final build, was developed from the actual binary files distributed with the software, etc.

Because components are added, replaced or removed at each stage – and since the components in the SBOM on one day will often differ from the components you’d find the next day, if the software were still being built – the only way the suggestion could have any rigorous (i.e. auditable) meaning would be if it specified exactly the circumstances under which the SBOM was created, and if there were some way to recreate those exact circumstances later during an audit. Of course, there isn’t.

Second, components are notoriously hard to identify with any certainty, because of the naming problem. For vulnerability management purposes, it does a software user no good to learn that a component is found in a product they use if they don’t also have an identifier with which they can find that component in a vulnerability database like the NVD.

If the supplier decides to leave that component out of the SBOM – or, more correctly, to leave it in but enter “NOASSERTION” for each field describing it – that is, in my opinion, perfectly understandable and shouldn’t be held against them. (NOASSERTION is the SPDX term meaning “I have nothing to say about this field for this component”; in CycloneDX, the field is simply left blank.)
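To make the difference between the two conventions concrete, here is a minimal sketch. The component name and field values are hypothetical, and these dictionaries are illustrative fragments only, not complete, valid SBOM documents:

```python
import json

# SPDX convention: a field the supplier can't speak to gets the explicit
# value "NOASSERTION". (Hypothetical component; illustrative fragment only.)
spdx_component = {
    "name": "some-embedded-library",
    "versionInfo": "NOASSERTION",
    "supplier": "NOASSERTION",
    "downloadLocation": "NOASSERTION",
}

# CycloneDX convention: there is no NOASSERTION keyword; a field the
# supplier can't speak to is simply left out of the component entry.
cyclonedx_component = {
    "type": "library",
    "name": "some-embedded-library",
    # no "version", "supplier" or "purl" keys at all
}

print(json.dumps(spdx_component, indent=2))
print(json.dumps(cyclonedx_component, indent=2))
```

Either way, the consumer can tell the component exists; what they can’t do is look it up in a vulnerability database.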

Finally, there are some suppliers (I know auto industry suppliers are a good example of this) who regard the identities of some components (even open source components) in their products as their IP, which would be jeopardized if they revealed it.

Are they correct in holding this belief? Who knows, other than the suppliers themselves? But here’s the punch line: Would you rather receive “incomplete” SBOMs from a few suppliers, or exactly what you’re receiving currently – zero SBOMs from zero suppliers? If you prefer the latter, then I recommend you put ironclad language in your contracts requiring complete SBOMs, as described in the Minimum Elements. And give yourself the title “Director of SBOM Prevention”.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at

[i] From the Executive Summary: “…this report defines the scope of how to think about minimum elements…” This doesn’t sound like “must” to me. How about you?

Monday, January 23, 2023

The news from NERC and CycloneDX

In 2019, the NERC Supply Chain Working Group published six guidelines on supply chain cybersecurity for systems used for the reliable operation of the North American Bulk Electric System (BES). The papers were developed by separate working groups. I led two of those groups, which produced two of the guidelines.

Last year, we updated (or started to update) all six guidelines, and I led the groups that updated both of those documents. I’m pleased to announce that one of the two documents, Supply Chain Cyber Security Risk Management Lifecycle, was just published. The other one, Vendor Risk Management Lifecycle, is finished, but has to wait another three months before it’s officially approved. In the meantime, I will be glad to send anyone who wants to see it the final draft of the document; just email me.

Two other guidelines were just published: Supply Chain Secure Equipment Delivery, led by Wally Magda of WallyDotBiz LLC, and Risk Considerations for Open Source Software, led by George Masters of Schweitzer Engineering Labs. I want to point out that George is a real master of the subject of securing open source software. I know the guidelines I led are applicable to many industries, not just electric power; I’m sure the same applies to the other two documents. So you don’t have to work for, say, Duke Energy to find these helpful. I can assure you a lot of work went into them!

I also want to point out that Steve Springett, Chair of the CycloneDX Core Working Group and one of the most creative (and impactful) people in the world of software supply chain security (including SBOMs, but in no way limited to them!), will be presenting a webinar on February 1 titled Understanding and Using the CycloneDX SBOM Standard. There’s a lot going on with CycloneDX nowadays, including support for both VEX and VDR (Vulnerability Disclosure Report) – with a new version of the format coming out very soon. I’m looking forward to the webinar!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at

Friday, January 13, 2023

SBOMs for patched software versions

One of the fundamental tenets of the SBOM faith is that the developer must produce a new SBOM whenever anything in the code of the product has changed; even if the change is just a patch being applied, there needs to be a new SBOM. I myself have repeated this teaching many times. Of course, it’s also enshrined in the Minimum Elements document.

However, a recent conversation with the Director of Project Security of one of the largest software developers in the US has changed my thinking. There’s something special about patches that makes a new SBOM almost impossible in most cases. The only real solution would be for the developer to do a complete version upgrade whenever a new patch is issued (or, even better, not to issue patches at all and simply require all users to upgrade – but that would mean a lot more work for both the developer and their customers).

Of course, given that nowadays there are very few developers that are issuing a new SBOM with every major version upgrade, let alone with every patch, this isn’t currently a problem. When it becomes a problem, I’ll just count it as an indication that SBOMs have truly arrived – somewhat like the “problem” that too many wealth managers will call you if you just won the lottery. We should all have such problems.

Here is what my friend said:

1.      Typically, a developer will issue multiple patches between version upgrades, and the patches usually aren’t cumulative. This means that, if there have been five patches since the last upgrade, applying the fifth patch won’t also get you the previous four; if you want all five, you need to apply each one. Of course, when the next version upgrade comes out, all five patches will be “baked in” to the upgrade – but deliberately skipping patches and just waiting for the next upgrade is certainly not a good idea!

2.      If the developer wants to issue a new SBOM after, say, the fourth patch, they need to make an assumption about which of the previous three patches the user has applied. Unless those patches were applied automatically, different customers will almost certainly have applied different combinations of them (and some customers may not have applied any at all); moreover, it would be a huge waste of time for the developer to poll their customers, whenever they release a new patch, to find out which of the previous patches they’ve applied.

3.      What should the developer assume when preparing the SBOM? The only thing they can assume is that all possible combinations of the three previous patches have been applied by at least one of their customers. So, completeness requires preparing an SBOM for each possible combination of patches that might have been applied.

4.      The number of SBOMs needed equals 2 raised to the power of the number of independent patches issued since the last upgrade. If there have been three patches, the supplier needs to prepare 8 SBOMs (2 to the third power).

5.      Eight SBOMs might sound doable to some people. But how many SBOMs does the developer need to prepare if there have been five previous patches? That’s 32. How about ten previous patches? That’s 1,024. When you consider that the number of required SBOMs doubles with every patch released, trying to do this would place a major burden on the supplier.
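The arithmetic in steps 1 through 5 can be checked with a few lines of Python (a sketch only; the “patch-1” names are placeholders):

```python
from itertools import combinations

def sbom_count(num_patches: int) -> int:
    """SBOMs needed if any subset of the independent patches
    may have been applied by some customer: 2 ** n."""
    return 2 ** num_patches

def patch_subsets(num_patches: int):
    """Enumerate every possible combination of applied patches,
    including the empty one (no patches applied)."""
    patches = [f"patch-{i}" for i in range(1, num_patches + 1)]
    for r in range(num_patches + 1):
        yield from combinations(patches, r)

print(sbom_count(3), sbom_count(5), sbom_count(10))  # 8 32 1024

# The count of distinct patch combinations matches 2 ** n.
assert sum(1 for _ in patch_subsets(3)) == sbom_count(3)
```

The doubling with each new patch is what makes the burden grow so quickly.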

Clearly, the developer can’t be expected to produce a new SBOM whenever they release a patch, except perhaps an SBOM that assumes all previous patches were applied. So, please don’t try to force your developer to follow SBOM dogma and produce a new SBOM whenever the software changes. To start, count yourself quite lucky if the developer promises to provide one SBOM with every major version upgrade. Just that will be a lot better than the number of SBOMs you receive now, which is most likely zero.

Note: The original version of this post listed much larger numbers of required SBOMs, because of a math mistake. However, even the corrected numbers above are unworkable; the fact that they’re smaller than before doesn’t matter.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at

Wednesday, January 4, 2023

The big problem with securing devices

Since 2017, the FDA has said they intend to require SBOMs for medical devices. Early this year, they released a draft requirement for comment, saying they planned to finalize it and make it mandatory by the end of the year.

However, a few weeks ago, I heard in one of the bi-weekly meetings for the Healthcare SBOM Proof of Concept (which is now conducted under the auspices of the Healthcare ISAC) that the FDA had just announced they were going to postpone enforcement of the requirement until late next year (and moreover, that they weren’t going to allow any changes to the draft requirements, even though they…ahem!...left a lot to be desired). Of course, this was disappointing, since there was a lot of agreement (including among medical device makers) that it’s important to have the SBOM requirement.

But, in the CISA SBOM Onramps and Adoption workgroup meeting this week, Josh Corman announced that the PATCH Act had been passed last week in the year-end rush of legislation. This Act requires the FDA to mandate cyber protections for medical devices; it had initially shown promise of passing, but was withdrawn without a vote earlier this year. The FDA has 30 days to complete the rulemaking (and I hope they’ll change what they proposed last April), which seems to indicate the Act is on a fast track to implementation. While it doesn’t specifically mandate SBOMs, it lists them as one of the measures that might be required, and the word is that they will be.

This is important for two reasons. One is that this will be the first mandatory requirement for SBOMs anywhere in the world (at least as far as I know). Of course, nobody is advocating such a mandatory requirement for every software product and device in every industry, but there are certain cases where mandatory SBOMs are justified. I would say that medical devices, which literally keep people alive in hospitals, are one of the best of those cases.

However, in my opinion the second reason is even more compelling: It’s that intelligent devices in general, not just medical devices, pose a unique cybersecurity challenge; SBOMs can play a key role in addressing that challenge.

I had never stumbled upon that challenge before Isaac Dangana, who works for my client Red Alert Labs (a Paris-based company that focuses on security and compliance for IoT devices), and I published this article in the Journal of Innovation last summer. While the article was primarily intended as an introduction to SBOMs for people in the IoT and IIoT worlds who weren’t familiar with the concept, Isaac and I ended up pointing to a serious issue that affects the ability of a user to secure an IoT or IIoT device, or even to learn about vulnerabilities that may affect it. In fact, we ended up making the case – without intending to – that IoT and IIoT devices may currently be less secure, and are certainly much less transparent to users, than “user-managed” software products (i.e. standard software running on Intel-standard servers).

The problem begins with the fact that IoT and IIoT devices are intentionally sealed. As a result, users can’t identify by themselves the software products installed in the device, and they can’t update or patch any of the individual software packages in it.

Instead, all the software in the device is treated as a single “lump” (to quote a friend who manages cybersecurity for the thousands of medical devices installed at a large hospital chain). The device comes with all the required software installed, and patches or updates to individual software products in the device are held until the next scheduled device update – at which point the entire “lump” is replaced as a unit, often at night via an internet connection.

Of course, for a network administrator who runs themselves ragged trying to keep up with vulnerabilities found in the hodgepodge of software products running on the servers they’re responsible for – as well as trying to apply the never-ending stream of patches to those products – the idea that all responsibility for vulnerability management and patching would be taken out of their hands and assigned to gremlins who expect neither pay nor recognition must seem like heaven on earth.

However, this is only heaven on earth if the device user has complete confidence that the device manufacturer will:

a)      Monitor each software product installed in the device for vulnerabilities and “coordinate with” (i.e. bug the hell out of) the software product’s supplier to make sure they expeditiously patch any serious vulnerabilities in it.

b)     Require the supplier of at least every “top level” software product installed in the device to provide an SBOM for their software and update the SBOM whenever they release an update or patch for that software.

c)      Using the SBOMs, track components in each of the software products installed in the device and identify vulnerabilities applicable to each component.

d)     Require each software supplier to provide VEX notifications, which will let them know about component vulnerabilities that aren’t in fact exploitable in the product and prevent their wasting a lot of time chasing down non-existent vulnerabilities.

e)     Whenever a serious vulnerability can’t be patched immediately, provide a notification to all the customers of the device regarding mitigations they should apply to make it less likely that the vulnerability will be used to compromise the device.

f)       If one of the software products installed in the device has proved to be a “problem child”, with multiple longstanding vulnerabilities and no commitment by the software’s supplier to fix those in anything less than the remaining life of the universe, commit to replacing that software within say three months.

g)      When a software product has reached end-of-life status, commit to replacing it within, say, six months.

h)     When an open source software product hasn’t been patched or updated for, say, six months, admit that the product is effectively at end of life and commit to replacing it within six months – or sooner, if there are one or more serious unpatched vulnerabilities in the product.

i)       Require that the supplier of each software product installed in the device either attest they have followed every provision of the NIST Secure Software Development Framework (SSDF), or identify any provisions they haven’t followed and provide an explanation of why they haven’t done so.

j)       Etc., etc.
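For item d), a “not affected” VEX notification might look roughly like the following CycloneDX-style fragment. This is a sketch only: the CVE identifier is hypothetical, and a real document would carry more context, such as references tying the statement to the affected product.

```python
import json

# Hypothetical, minimal CycloneDX-style VEX statement: the device maker
# tells customers a component vulnerability is not exploitable in the product.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "vulnerabilities": [
        {
            "id": "CVE-2023-00000",  # hypothetical CVE identifier
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "The vulnerable function is never called by this product.",
            },
        }
    ],
}
print(json.dumps(vex, indent=2))
```

The point of a statement like this is purely to save the customer time: it tells them which component vulnerabilities they can safely stop chasing.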

Of course, the device manufacturer is likely to complain lustily that they can’t stay in business if they’re really forced to do all of the above, with respect to every software or firmware product that’s installed in their device. And believe it or not, I agree that this is too much to ask of them.

However, by the same token, I say it’s too much to ask of their customers that they stay in Fat, Dumb and Happy mode, in total ignorance of any security measures the device manufacturer took – or didn’t take – to secure the software within the device. If the manufacturer isn’t willing to do literally everything possible to secure the device (i.e. all of the items listed above), they should at least give their customers the information they need to learn about the measures they have taken, and express willingness to have a discussion with any customer who thinks there are measures the manufacturer should take but hasn’t.

What information should the end user have about the device? First, they need at least a “one-level” SBOM, showing every software or firmware product installed directly in the device, along with whatever identifiers are required to look the software up in the NVD or another vulnerability database like OSS Index. If the customer has that much, they’ll be at the same point as most users of “user-managed” software with respect to the software installed on the servers they operate: they’ll know what software products are directly installed on each device (or each server, in the case of user-managed software), and they’ll be able to investigate vulnerabilities found in those products.

But is that enough? After all, any server administrator today can identify every software product installed on their servers using a tool like PowerShell or even Windows™ Explorer, then learn about vulnerabilities in those products by looking them up in the NVD or another vulnerability database. But they won’t learn about vulnerabilities caused by components of those products unless they also have an SBOM for each product; with an SBOM, they can look up each component in the NVD to learn about its vulnerabilities.

How will the server administrator get an SBOM for one of the user-managed software products? They’ll probably ask the software’s supplier for it, and they’re likely to get it, too (at least one SBOM, anyway). Why is the product’s supplier likely to give it to them? Because they’re a customer, of course.

However, if an IoT or IIoT device manufacturer gives a customer a first-level SBOM for the device, the customer will just know the names of the software products installed in it – the equivalent of what they could learn by running Windows Explorer on a standalone server. The IoT customer won’t know anything about the components of those products unless they have an SBOM from the supplier of each software product installed in the device.

At least, with a first-level SBOM, the device customer will know the names of the software products installed in the device and of their suppliers. What will they hear when they contact those suppliers to ask for an SBOM? Most likely nothing at all, or perhaps “Get lost”. The problem is that the device customer isn’t a customer of the suppliers of the software installed in the device; the device manufacturer is. It’s very likely that only the manufacturer will be able to get the SBOMs, and perhaps not even them.

Thus, it isn’t likely that IoT or IIoT device customers will be able to obtain SBOMs for software products installed in intelligent devices they own or utilize. Of course, they can beg the device’s manufacturer to please, pretty please get the SBOMs for them – but I doubt that will be a very productive exercise.

But more generally, why should analysis of cyber risks found in intelligent devices be exclusively the responsibility of end users? After all, the manufacturer chose the “first level” software and firmware products that are installed in the device. It’s their job to make sure they’re secure; if they’re not, it’s the manufacturer’s responsibility either to get the supplier to fix vulnerabilities in their software, or to replace the software with a more reliable product.

But how can the device customer even “audit” the manufacturer’s performance in vulnerability management? If the customer receives an SBOM that lists the first-level software and firmware products installed in the device (which will hopefully start happening soon with medical devices), that at least puts the device customer on the same level as the server administrator that identifies each software product installed on their server using Windows Explorer: The device customer will usually be able to learn about vulnerabilities in those products, although they won’t learn about vulnerabilities found in the immediate components (dependencies) of those products.

However, why should the device customer even have to look up vulnerabilities in each of the “first-level” software and firmware products installed in the device? Each customer presumably has exactly the same set of software products installed in their device as any other customer does. If there are say 10,000 customers of a device, why should every one of them have to perform exactly the same set of steps to learn about vulnerabilities in the device, when the manufacturer can perform those steps for all 10,000 customers at once?

And not only can the manufacturer perform those steps themselves, but they should be performing them already – assuming, that is, that they care about securing the devices they sell. For example, if there are, say, 15 software and firmware products directly installed in the device (there may be many more, of course), why can’t the manufacturer share with their customers every exploitable vulnerability associated with those products?

Besides reporting each vulnerability to their customers, the manufacturer also needs to register the device itself in the National Vulnerability Database (NVD) and report all exploitable vulnerabilities they identify in the device. This includes both vulnerabilities due to code written by the manufacturer and vulnerabilities due to code written by the suppliers of the third party software and firmware products installed in the device.

When device manufacturers start doing this, vulnerability management for intelligent devices, including medical devices, will be much easier. The customer will just have to search on the device’s CPE name in the NVD, in order to learn about all exploitable vulnerabilities in the device. This is what software users are supposed to be able to do now. Why shouldn’t device users be able to do the same thing?
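Assuming the device had a registered CPE name, the customer’s lookup could be as simple as one query against the NVD’s CVE API. This is a sketch: the CPE name below is hypothetical, and fetching and parsing the response is left out.

```python
from urllib.parse import urlencode

NVD_CVE_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_query_url(cpe_name: str) -> str:
    """Build an NVD CVE API 2.0 query URL for a given CPE name."""
    return f"{NVD_CVE_API}?{urlencode({'cpeName': cpe_name})}"

# Hypothetical CPE for a device that the manufacturer has registered.
url = nvd_query_url("cpe:2.3:h:example_vendor:example_device:1.0:*:*:*:*:*:*:*")
print(url)
# The JSON response (fetchable with urllib.request.urlopen) would list every
# CVE the NVD associates with that CPE name.
```

One query per device, instead of one query per software product per device, is the whole payoff of having the manufacturer report at the device level.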

I hope the FDA includes this in their device security requirements, along with requiring an SBOM for the device.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at

Friday, December 30, 2022

“Minimum elements”, Bigfoot, and other myths

The NTIA Minimum Elements Report describes a number of practices regarding SBOMs that might qualify as “best practices" in a lot of people’s lexicons. However, it seems both NTIA and CISA are forbidden from using the superlative form of any adjective, so the report’s title uses the less imposing “minimum elements” phrase.

Despite the fact that the Report states, “The minimum elements should not be interpreted to create new federal requirements”, some software security practitioners – and not just those at federal agencies – have glommed onto them as de facto requirements to include in every software contract they write. And indeed, the document makes this mistake more likely by using the word “must” (underlined, no less!) in various places.

But, setting aside the question of whether the report really provides “binding requirements”, the more important question is whether it in fact describes best practices that are worth asking for in software contract negotiations. That is, if the Report says a supplier must do something, should you really try to force your supplier to do it?

My answer to that question is, “Sure, if you don’t care whether the supplier tells you to go pound sand and find another supplier that can provide as functional a product, with as good support, at an equivalent price.” I say that because, in requiring your supplier to follow all of the provisions in the Minimum Elements Report, you’re really asking the supplier to commit to performing impossible tasks. Here’s one of the more egregious examples, which I know has caused a lot of suppliers heartburn.

Let’s look at the “minimum data fields” in the table on page 9 of the report. Most software security practitioners would probably assume this phrase means that each of these fields must be included in the SBOM, for every component. For three of the fields (the first, sixth and seventh), compliance is trivial, since they only require one response that applies to the entire SBOM. However, for the second through fifth fields, it’s just about certain that the supplier will be way out of compliance if you take the provision literally and require them to provide a response for every field.

Fields 2 to 5 all have to be filled in for every component in the product. Let’s look at the fifth field, Dependency Relationship. Essentially, this refers to a possible “parent/child” relationship between two components in the product (and the product itself is a component, bearing the august title of “primary component”); of course, the “child” component is itself a component of the “parent”. Let’s do some arithmetic regarding this field:

1.      The average software product includes around 150 components, although there are many products that have thousands or even tens of thousands of components.

2.      You may wonder why a supplier wouldn’t be able to provide a response in this field, for any component of their product. After all, isn’t the supplier supposed to know every component in their product? That means they should also know the relationship of every component to every other component – with the three possibilities being child, parent, and not applicable (when neither component is a parent of the other).[i]

3.      If a component is included in the software your organization buys, you should not only know how the component is identified (name, version string, etc.), but also what relationship it has to other components. However, keep in mind that a Software Composition Analysis (SCA) tool will typically list the components in a product without providing a “dependency graph” describing their relationships. To truly understand all the relationships between components in their product, a supplier needs an SBOM for every component.

4.      To make it easier to describe what we’re working with when it comes to component relationships, people often talk about “tiers” of components within a product. The first tier consists of components that are direct dependencies of the product itself. These are the components the supplier has themselves included in the product. The product is the parent, and each first tier component is a child.

5.      The remaining tiers – second, third, etc. – all consist of components that are children of the components in the previous tier. For example, the second tier consists of components that are children of first tier components. In fact, this isn’t an exact description, since sometimes Component A is a child of Component B, while B is also a child of A – so there’s no good way to define tiers. But, the concept does help in making high-level statements.

6.      In practice, it will usually be extremely difficult to obtain an SBOM even for a component that is a direct dependency of the product itself. And while obtaining an SBOM for a “first tier” component will (at least currently) merely be difficult, obtaining one for a component on any lower tier will be well-nigh impossible. The main reason is that the supplier of the product, even though they’re the direct “customer” of every supplier of a first-tier component, has no direct relationship with any of the lower-tier[ii] suppliers.

7.      How do we estimate the number of components in the product? That’s simple: assume every component itself has 150 components, since a component can often be a standalone product in its own right. Thus the product contains roughly 150 x 150 = 22,500 components.

8.      Now, how do we estimate the number of dependency relationships among those components, since that’s what this field calls for? Also simple: since every component has one “parent” and 150 “child” components, there are 22,500 x 151 relationships, or about 3.4 million. However, the parent/child and child/parent views of the same two components are one relationship, not two, so we divide by two, yielding about 1.7 million relationships.

9.      Thus, in order to completely fill in this field as “required” by the Minimum Elements Report, the supplier of a typical software product or intelligent device would have to obtain 22,500 SBOMs and use them to identify about 1.7 million dependency relationships. If we assume that obtaining each component’s SBOM (or perhaps creating it with a software composition analysis tool) takes just one hour (a wild underestimate, of course), and that identifying each relationship takes just one minute (also not at all realistic), this requires 22,500 hours plus roughly 28,300 hours – about 50,800 hours in all, or close to six years of around-the-clock work. And that’s for one SBOM with 150 components, not one with 10,000 components; the latter would take…a lot longer.
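The arithmetic in steps 7 through 9 is easy to reproduce, using the same rough assumptions as above: 150 components per product (and per component), one hour per component SBOM, one minute per relationship.

```python
AVG_COMPONENTS = 150           # assumed average number of components (step 1)
HOURS_PER_SBOM = 1             # assumed time to obtain one component SBOM
MINUTES_PER_RELATIONSHIP = 1   # assumed time to identify one relationship

components = AVG_COMPONENTS * AVG_COMPONENTS              # 150 x 150 = 22,500
# Each component has 1 parent and 150 children; divide by two so the
# parent/child and child/parent views of one relationship aren't both counted.
relationships = components * (AVG_COMPONENTS + 1) // 2    # 1,698,750

total_hours = (components * HOURS_PER_SBOM
               + relationships * MINUTES_PER_RELATIONSHIP / 60)
print(components, relationships, total_hours)   # 22500 1698750 50812.5
print(f"{total_hours / (24 * 365):.1f} years of nonstop work")  # 5.8 years
```

Even with these wildly optimistic per-item times, the total is hopeless; realistic times only make it worse.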

The moral of this story is that, if a supplier of a typical software product with 150 components commits (perhaps in a purchase agreement) to listing all Dependency Relationships of each component, they’ll be committing to do the impossible. In fact, of six “must” items that I count in the Minimum Elements Report, five of them will almost always be impossible to fulfill completely.

What this means is that, if your organization wishes to start obtaining SBOMs from your software suppliers, you will be shooting yourselves in the foot if you try to require the suppliers to comply with specific rules, including what’s in the Minimum Elements Report.

Instead, if the supplier doesn’t already have a specific program for producing and distributing SBOMs to customers, you need to have a discussion with them about what they know they can do at this point. You can ask questions like:

·        How often can they provide an SBOM? If they can provide one with every major new version, that would at least be a start.

·        Do they have a rough idea of how many of the minimum data fields they’ll be able to fill in? Even if they say they can only provide information on a small number of first-tier components, that’s a lot better than providing no information on any components. Call it a win for now.

·        What format will they use for the SBOM? If they can provide it in one of the two primary machine-readable formats, that is good. However, machine-readable SBOMs require a machine (consisting of hardware and software) to utilize them. There are currently no examples of what I call a “complete” SBOM consumption tool, although a few open source products like Dependency-Track and Daggerboard do at least part of what a complete tool would do. Since there are vendors that provide services that utilize SBOMs for vulnerability management purposes, you should explore what you can get from them. You would just need to have the supplier provide their SBOM (and any accompanying VEX documents) to the service vendor.

·        What fields will the supplier include in their SBOM, besides the minimum ones? An SBOM with just the seven minimum fields won’t be terribly useful; there are a number of fields that will be needed for specific use cases. You and the supplier need to discuss what specific fields they will add to the minimum ones. This white paper from Fortress Information Security (a company for which I consult) contains good suggestions on additional fields you may want to ask for.
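For concreteness, here is a rough sketch of what the seven minimum fields can look like in practice, loosely following the CycloneDX JSON layout. The supplier, component, version, and purl values are invented for illustration, and a real CycloneDX document has additional required structure:

```python
import json
from datetime import datetime, timezone

# Illustrative only: the NTIA minimum fields (supplier name, component
# name, version, other unique identifier, dependency relationship,
# author of SBOM data, timestamp) in a CycloneDX-flavored sketch.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "metadata": {
        "timestamp": datetime.now(timezone.utc).isoformat(),   # timestamp
        "authors": [{"name": "Example Supplier Tooling"}],     # SBOM author
    },
    "components": [{
        "type": "library",
        "supplier": {"name": "Example Component Co."},         # supplier name
        "name": "examplelib",                                  # component name
        "version": "2.1.0",                                    # version
        "purl": "pkg:pypi/examplelib@2.1.0",                   # unique identifier
    }],
    # Dependency relationship: the product depends on examplelib.
    "dependencies": [
        {"ref": "product", "dependsOn": ["pkg:pypi/examplelib@2.1.0"]}
    ],
}
print(json.dumps(sbom, indent=2))
```

Even a skeleton like this is enough for a consumer to start looking up component vulnerabilities; the additional use-case-specific fields can be layered on as the supplier’s program matures.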

A supplier may tell you they can’t provide SBOMs at all. Especially if you’re part of a federal agency, you shouldn’t accept such a statement. If the supplier is concerned about software security, they should be producing SBOMs for their own use, meaning they can’t plead some sort of technical challenge. And if they aren’t using SBOMs now to learn about vulnerabilities in their products, why is your organization still buying from them? There’s no real question in the software developer community nowadays that it’s essential to track component vulnerabilities using SBOMs.

I do want to point out that a number of suppliers are worried about providing SBOMs, since they think they will give away IP if they do. This probably isn’t a valid objection in most cases, but at this point, the last thing you want to do is try to force the supplier to do something they think will jeopardize their business. You should tell them they don’t need to fill in any field for which their answer might contain IP.

At this point, with few suppliers regularly distributing SBOMs to customers, the important thing is to get your supplier to distribute something, even if it consists mostly of empty fields. We can save perfection for later.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at

[i] I should point out that neither the CycloneDX nor the SPDX format uses the words “parent/child” to describe component relationships. But since the two formats use different terms for these relationships, and since most people have an intuitive understanding of “parent/child”, I’ll use those terms here.

[ii] You may notice that two completely contradictory metaphors are used when discussing component relationships. One is that of layers in the ground, with the product itself on top and the components in multiple layers below it; in this sense, the tiers farther from the product are referred to as “lower”. The other is the dependency tree, whose roots are in the ground and whose branches spread into the sky. The product is essentially the base of the tree, and the components sit on branches that themselves branch off; every branch is a different “layer” or “tier”. In this sense, the more distant tiers are “higher” than the product itself.

Don’t hold me to either one of these metaphors. Just think of distance of a component from the product itself; the farther away it is, the higher the number of its tier.

Monday, December 26, 2022

Rethinking VEX

Recently, my friend Kevin Perry sent me this link to an article in CyberScoop. It gives a fairly good summary of where SBOMs currently stand: end user organizations (i.e., organizations whose primary business isn’t developing software, which even includes the business side of software developers) aren’t using SBOMs in any substantial way at all, while developers are using them heavily to manage vulnerabilities found in components of the software they’re developing. (The author of the article didn’t say how successful developers have been in using them, since that isn’t commonly known; indeed, it’s very possible that the majority of software developers still don’t utilize SBOMs to secure their products at all.)

I’ve compared current SBOM utilization to a string which is lying flat on a table. Many developers, seeing how useful SBOMs have been for them, would like to distribute them to their end users, but the end users aren’t asking for them because they have no idea what they’ll do with SBOMs if they receive them. The developers are pushing on their end of the string, but without the end users pulling on their end, the string is going nowhere.

What will get end users to change their minds about SBOMs, so they start seeing some real value in them? It’s not just a matter of persuading end users that SBOMs will help them secure their networks; enough has been written about that subject – and attention paid to it – that lack of awareness can no longer be the primary obstacle for end users, or even a major one.

One real obstacle for end users is the lack of easy-to-use, low-cost tools and third-party services to “process” SBOMs for vulnerability management purposes. The small number of tools available today address parts of the software component vulnerability management process, but what I consider the most important part – passing vulnerability data to the vulnerability and configuration management tools that end users already deploy – is currently being addressed by nobody.

However, this problem isn’t a fundamental one. The tools will come when there’s clarity about how SBOMs can be successfully utilized by end users. But there are still some big obstacles standing in the way of that clarity. The biggest of these obstacles has to do with the fact that, even though there are lots of tools (including open source tools like Dependency-Track) that will look up vulnerabilities for components listed in an SBOM, around 95% of those component vulnerabilities won’t in fact be exploitable in the product itself.

This means that, if the user takes seriously every component vulnerability it finds and contacts the supplier’s help desk to find out when each will be patched (as well as searching for those vulnerabilities in the products where they’re supposedly found), 95% of their time (and the help desk’s time) in this effort will be wasted. As Tony Turner pointed out recently, given that security staff are already overwhelmed by the real vulnerabilities their scanners find on their networks, the last thing they need is to waste time tracking down false positives.

Of course, this problem is precisely the one that VEX documents were supposed to solve. But VEXes are currently going nowhere. No supplier is regularly distributing them to users, whereas, if they were to work as intended, oceans of VEX documents would need to be coming out daily. There are specific reasons why this isn’t happening (like a lack of guidance on how to put a VEX together in either the CSAF or CycloneDX VEX formats). Those reasons do need to be addressed, since there are still important use cases for VEX documents.

However, I now realize – after having participated in just about every NTIA and CISA VEX workgroup meeting since they began in mid-2020 – that VEX as currently conceived (and even including my real-time VEX suggestion) will never achieve the primary purpose for which it was intended: shrinking the user’s list of component vulnerabilities, so that only the 3-5% that actually are exploitable will remain. Given the huge number of VEX documents that would need to be issued in order to achieve that goal (perhaps hundreds or even thousands, for a single SBOM), and the fact that the user will never be able to be sure there aren’t still some component vulnerabilities that aren’t exploitable - but for which a VEX statement has yet to be issued - this approach is simply not a good one.[i]

But the current VEX idea, which I now think is mistaken, didn’t spring out of nowhere; there are two assumptions underlying it that I now believe to be wrong. Only by removing those assumptions will we be able to identify an effective solution to the component vulnerability exploitability problem.

The more important of the two assumptions is that this is fundamentally a user problem. By that, I mean the assumption that managing vulnerabilities due to components in a software product is the responsibility of the end user who utilizes the product, not of the supplier. In this view, it’s up to the users to look up component vulnerabilities after they receive an SBOM, then use the VEX information to winnow them down to just the exploitable ones.

However, I no longer believe this is fundamentally a user problem. Users need the information found in SBOMs and VEXes, because suppliers haven’t previously been doing a good job of fixing vulnerabilities due to components in their products. A 2017 study by Veracode said “A mere 52 percent of companies reported they provide security fixes to components when new security vulnerabilities are discovered. Despite an average of 71 vulnerabilities per application introduced through the use of third-party components, only 23 percent reported testing for vulnerabilities in components at every release.” One hopes that this situation has improved in five years, but it’s certainly possible that not even a bare majority of suppliers are doing a good job in this regard now.

Who introduced the components into the products? It wasn’t the users. Yes, the users benefit from lower software costs and improved functionality, achieved through use of third-party components. But the suppliers certainly reap the lion’s share of the benefits of using components. Frankly, they should provide the lion’s share of the effort required to identify and fix exploitable vulnerabilities in those components, both before the software is distributed to customers and afterwards.

Moreover, the supplier’s costs for identifying component vulnerabilities should be close to zero, since they’re already identifying them now. And if they’re not, shame on them – they should be. By instead putting the burden of this on the user, the suppliers are essentially saying, “It’s more efficient to have each of our customers, even if there are thousands of them, perform exactly the same analysis that we’re already performing, than it is for us just to distribute the information we already have.”

Think about it: If done right, each user’s effort to identify component vulnerabilities in a product will yield exactly the same result as would the supplier’s effort, which they should already be performing. Why is it that, if there are 10,000 customers for a product, it’s expected that they should each perform a set of steps that are the same as what the supplier is performing now? Moreover, by intention all those users will at best achieve exactly the same results as the supplier has already achieved on their own. What’s wrong with this picture?

But, as I said earlier, around 95% of the component vulnerabilities that a user or supplier identifies (by searching for the component’s name in a vulnerability database like the NVD) will be false positives. VEX documents are supposed to fix that problem, since they’re intended to let the user – or, more specifically, their component vulnerability management tool, no complete example of which exists today – separate the 5% wheat from the 95% chaff.

However, besides the fact that it’s highly unlikely that VEX documents will ever be produced in the volume required to accomplish that purpose, the bigger question is why there needs to be a document at all. After all, it’s the supplier who knows how their software is put together and who, in most cases, needs to judge whether or not a particular vulnerability is exploitable in their product. If the supplier takes responsibility for identifying component vulnerabilities in their products, they can also easily take responsibility for removing the 95% of component vulnerabilities that aren’t exploitable, leaving just the exploitable 5%. Since the exploitability information never has to leave the supplier (except to go to a service provider who may be performing this work on the supplier’s behalf), it can be communicated using any secure internal method.

In other words, it would be far more efficient, and immensely more effective, if the supplier looked up the component vulnerabilities themselves (or outsourced this effort to a third-party service provider) and then removed the 95% that aren’t exploitable. They would then provide this machine-readable list of exploitable vulnerabilities to their customers. The customers could search diligently for every vulnerability on the list, knowing that – to the supplier’s current knowledge – every one of them is a real vulnerability in the product.

Note 12/26: When I put this post up on LinkedIn today, Steve Springett quickly pointed out that the VDR report described in the revised version of NIST 800-161 also lists exploitable vulnerabilities in a product. So why do we need another document that does that? I realized I’d been thinking the user would naturally assume that, when they receive the document I just described, it means that any other vulnerabilities they find for a component of the product are not exploitable.

However, I now realize that the user probably won’t, and moreover shouldn’t, assume that all other component vulnerabilities they find aren’t exploitable; therefore, the document should list both exploitable and non-exploitable component vulnerabilities. For one thing, the document I describe will be produced at a certain day and time. Even if it were true at that moment that all other component vulnerabilities in, for example, the NVD weren’t exploitable, it’s inevitable that more component vulnerabilities will appear later on.

The report will have to be regularly updated, but it’s inevitable that there will be periods of time in which there are more vulnerabilities applicable to components than the ones described. If the non-exploitable vulnerabilities aren’t also listed in the report, the user will be led to believe that any other component vulnerability they discover isn’t exploitable, when in fact that may not be the case.
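A machine-readable report along these lines might look something like the following sketch. The CVEs and components are real pairings, but the product context, the exploitability calls, and the field names are all invented for illustration, not drawn from any existing format:

```python
import json
from datetime import datetime, timezone

# Illustrative only: a supplier-produced report listing every component
# vulnerability found so far, each with the supplier's exploitability call.
report = {
    "product": "ExampleProduct",
    "version": "4.2.0",
    "produced": datetime.now(timezone.utc).isoformat(),
    "component_vulnerabilities": [
        {"cve": "CVE-2021-44228", "component": "log4j-core",
         "status": "exploitable"},
        {"cve": "CVE-2020-36518", "component": "jackson-databind",
         "status": "not_exploitable"},
        {"cve": "CVE-2019-10086", "component": "commons-beanutils",
         "status": "not_exploitable"},
    ],
}

# The customer's tool keeps only the vulnerabilities worth chasing; any CVE
# not listed at all is simply "not yet analyzed", not "not exploitable".
actionable = [v["cve"] for v in report["component_vulnerabilities"]
              if v["status"] == "exploitable"]
print(json.dumps(actionable))   # ["CVE-2021-44228"]
```

Because every analyzed vulnerability appears with an explicit status, a customer who finds a CVE missing from the report knows it hasn’t been analyzed yet, rather than wrongly inferring it is safe.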

Of course, anyone who knows anything about cybersecurity will now scream out, “Are you crazy? You want to have each supplier send out documents that list each exploitable component vulnerability in each of their products to every one of their customers? How many nanoseconds do you think it will take before each of these lists shows up on Vladimir Putin’s desk?”

And here’s where the second incorrect assumption appears: It’s the assumption that the only good way to provide this information to customers is to send out a document, even a machine-readable one. Instead, I think this information needs to be provided in a strongly authenticated, machine-to-machine transaction that is initiated by the customer, not the supplier. Moreover, there needs to be strong encryption of the transaction.

And if that isn’t good enough, there are certainly ways to add security to the transaction. For example, the supplier might send – through a different encrypted, customer-initiated transaction, perhaps to a different device on the customer’s end – a list of the component vulnerabilities that can currently be found in the NVD (or another vulnerability database, like OSS Index) for the version of the product described in a recent SBOM. This list would contain the CVE identifier for each vulnerability, along with a unique short identifier for it, consisting of randomly generated alphanumeric characters.

The list of exploitable vulnerabilities would be sent separately from the above list. It wouldn’t show the actual CVEs, but just the short identifiers for them, along with their status (exploitable or not); of course, this list could still be encrypted. And if that isn’t enough security, I’m sure there are more security layers that could be placed on top of that, when required by the nature or use of the software product.
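The split into two separately delivered lists might be sketched like this. This is a toy illustration of the idea, not a protocol design; the CVE identifiers are real, but the exploitability determinations are invented:

```python
import secrets

# Vulnerabilities currently listed in the NVD for this product version,
# with the supplier's (invented) exploitability determinations.
findings = {
    "CVE-2021-44228": "exploitable",
    "CVE-2020-36518": "not_exploitable",
    "CVE-2019-10086": "not_exploitable",
}

# Channel 1: CVE / random short identifier pairs. On its own, this list
# reveals only which NVD entries mention the product's components.
aliases = {cve: secrets.token_hex(4) for cve in findings}   # e.g. 'a3f9c012'
channel_one = sorted(aliases.items())

# Channel 2 (sent separately, perhaps to a different device): alias and
# status only; no CVE numbers at all.
channel_two = [(aliases[cve], status) for cve, status in findings.items()]

# The customer, holding both channels, rejoins them locally.
alias_to_cve = {alias: cve for cve, alias in channel_one}
exploitable = sorted(alias_to_cve[a] for a, s in channel_two
                     if s == "exploitable")
print(exploitable)   # ['CVE-2021-44228']
```

An eavesdropper who captures only one channel learns either which CVEs were examined or how many are exploitable, but not which specific CVEs are exploitable in the product.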

There’s a lot more that can be said about this proposal, as well as many loose ends that will need to be tied up. I’d appreciate hearing anybody’s comments on this. This idea will be discussed in a more participatory context at some point. In the meantime, I’m not saying we need to drop the idea of VEX documents or of real-time VEX, since there are use cases for both types of notifications.

But I don’t want to play down the magnitude of the problem I’m trying to address. Without at least partial resolution of the vulnerability exploitability problem (whether it’s VEX or something else), SBOMs will never be widely (or probably even narrowly) used.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at

[i] I still believe my real-time VEX idea – which will be submitted as part of a standards application to the IETF next year – is the best way to handle VEX as currently conceived. It will be much easier on both suppliers and the user’s consumption tools if “not affected” status notifications can be provided through an API, rather than by preparing and distributing a machine-readable document.

Friday, December 16, 2022

A new cop on the IoT security beat

In October, I posted about the fact that my client Red Alert Labs had become certified as an assessor by the ioXt Alliance, the global standard for IoT security; they’re only the eighth organization to receive this honor. Now, the Alliance has put out a press release to announce this fact:

NEWPORT BEACH, Calif. — Dec. 15, 2022 — The ioXt Alliance, the global standard for IoT security, today announced the addition of Red Alert Labs, a Europe-based Cybersecurity Lab specialized in IoT, to the ioXt Authorized Labs certification program. Authorized labs are the exclusive test providers for the ioXt Alliance and perform all testing required for devices to be certified by ioXt and to bear the ioXt SmartCert label, which provides security assurance to consumers and enterprises.

Red Alert Labs (RAL) is an IoT security provider helping organizations trust IoT solutions throughout their lifecycle. RAL provides comprehensive IoT security by design, risk management, consulting, audit and certification services supported by automated processes. RAL provides assessments and certifications of connected devices based on multiple standards, including IEC 62443, Common Criteria, ETSI 303 645, and NIST 8425. RAL is also involved with the European Union Agency for Cybersecurity (ENISA) to develop the EUCC scheme for ICT products and EUCS scheme for cloud services in the context of the Cybersecurity Act in Europe.

Ayman Khalil, managing partner and COO of Red Alert Labs, said, “Given our experience performing IoT device evaluations and certifications for various standards like ETSI 303 645, we are quite pleased to be working with ioXt Alliance, both for SmartCert certifications and for the upcoming U.S. IoT device security labeling program. IoXt is working closely with NIST, in accordance with the executive order given by the White House, in supporting the development of that program.”

“Authorized labs are important organizations in the ioXt Alliance as they provide ioXt certification testing to ensure devices are secure for consumers and businesses to use,” said Jan Bondoc, vice president of information technology at the ioXt Alliance. “We’re very pleased to welcome Red Alert Labs as an Authorized Labs partner to work with us to advance security in the IoT industry.”

With profile creation by top-tier companies in technology and device manufacturing, the ioXt Alliance is the only industry-led, global IoT device security and certification program in the world. Devices with the ioXt SmartCert label give consumers and retailers greater confidence in a highly connected world.

ioXt certification covers both the security controls implemented in a connected device and the manufacturer’s security practices. An example of the former is whether security updates are applied automatically when possible. An example of the latter is whether the manufacturer has published a policy for notifying customers when support for their product will end.

Besides assessing and certifying connected devices and their manufacturers, RAL helps end-user organizations assess the cybersecurity risks they face from devices they are considering for procurement. After procurement, RAL helps those organizations assess and mitigate security issues identified in devices they use. For example, RAL will soon provide services based on the NIST.IR 8425 cybersecurity framework for connected devices, developed by the U.S. National Institute of Standards and Technology (NIST).

About the ioXt Alliance

The ioXt Alliance is the Global Standard for IoT Security. Founded by leading technology and product manufacturing firms, ioXt is the only industry-led, global IoT product security and certification program in the world. Products with the ioXt SmartCert give consumers and retailers greater confidence in a highly connected world. Learn more at

About Red Alert Labs

Red Alert Labs is an IoT security provider helping organizations trust IoT solutions. An independent cybersecurity lab with a disruptive business offer to solve the technical and commercial challenges in IoT. Its expertise has been recognized by numerous awards. Red Alert Labs is a valued member of IoXt Alliance, EUROSMART, IoTSF, CCC, ACN, SYSTEMATIC, CEN-CENELEC, and ECSO.

I’ve been working with Red Alert Labs for a year and a half, and I can attest that they’re a high-quality organization. Note that they work with both IoT device manufacturers and end users. In fact, Isaac Dangana of RAL and I wrote an article, published this summer, on why IoT manufacturers need to follow different practices with respect to SBOMs and software component vulnerability management than suppliers of “stand-alone” software. Intelligent devices (especially medical devices) introduce a lot of unique security concerns. Given the rate at which devices are proliferating, there’s lots of work to be done!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at