Thursday, December 30, 2021

Who should be responsible for software component vulnerability management?


I had a Road to Damascus-type incident recently, except that, unlike in the original incident, I wasn’t blinded and I didn’t fall off my horse.

What led to my incident? I’ve become increasingly concerned of late about the prospects for consumption of software bills of materials (affectionately known as SBOMs). I’m not worried about production: software suppliers are already producing lots of SBOMs for their products and reaping a lot of benefits from doing so. But those benefits are strictly internal; few suppliers are distributing their SBOMs to their customers, and close to none are doing it with any regularity (in general, a new SBOM should be released whenever there has been any change at all in a software product).

Given that they’re producing lots of SBOMs, why aren’t suppliers distributing them to their customers? Is it because they’re all meanies and they don’t want to share the benefits of using SBOMs with their customers? No, it’s much simpler than that: They’re not sharing them because the customers aren’t asking for them.

So why aren’t the customers asking for SBOMs? There are two simple reasons:

First, a lot of software-using organizations don’t feel like making the (admittedly substantial) investment in time required to start using SBOMs intelligently, at least currently. The people inside these organizations who would be called on to find some use for the SBOMs (i.e. currently overworked security people, fresh from losing their holiday break to log4j, thank you very much) would have to invest a lot of time learning about SBOMs and trying to think of ways their organizations could use them (since it seems the end users within their organizations aren’t asking for them either).

Currently, anyone who’s starting with no knowledge of SBOMs needs to read about five or six NTIA documents and synthesize the sometimes conflicting statements in them into a single narrative (which even then will include a number of gaps that haven’t been addressed by the group yet). Until either the learning burden diminishes, or someone can do all of this learning for them (and maybe transfer the knowledge to their brains via a head-to-head USB cable – have humans evolved USB ports yet?), these people – and the organizations they work for – aren’t going to be interested in SBOMs.

Second, customers who do see benefit in having SBOMs have done enough reading to know that the two major SBOM formats – SPDX and CycloneDX – are both proudly machine-readable. Yes, you can get them in non-machine-readable formats like XLS, but given that for example the AWS client has 5600 components (as Dick Brooks of Reliable Energy Analytics has pointed out), do you really want to try to deal with all of those in a spreadsheet?

But what happens when this second group of customers looks around for easy-to-use low-cost or open source vulnerability management tools that can ingest SBOMs (and later VEXes, since the two need to go hand in hand)? They don’t find them. I believe the best SBOM consumption tool for vulnerability management purposes is Dependency Track, an open source tool developed under OWASP. It was originally developed in 2012, about five years before the term SBOM started being widely used in the software community.[i]

Dependency Track does all the basics required for software component vulnerability management and is widely used by developers. It just requires that a CycloneDX SBOM be fed into it (or it will create one from the source code). Then it will (among other tasks) identify all vulnerabilities (CVEs) in the NVD that apply to components and update this list as often as required. It does suffer from the limitation of not being able to ingest VEXes – but VEX is so new (and still undergoing modification) that no other product currently supports this format, either.

But since Dependency Track is an open source tool that requires more user involvement and knowledge than just pushing a Download button and then hitting Yes or Next a few times, there will always be a lot of users who won’t want to get involved with it. This is despite the fact that IMO it’s currently the only show in town. That being said, I think a lot more non-developers would start using D-T for component vulnerability management purposes, if they were informed about how easy it is to use the tool once it’s installed. Steve says there are a large number of these non-developer users now; in fact, OWASP may sponsor a webinar soon, focused on exactly this use case. If and when that happens, I’ll be sure to let you know.

To sum up what I’ve said so far, I don’t see demand for SBOMs jumping significantly until two things happen. First, there needs to be a single document that walks a technically-oriented reader, who has no previous knowledge of or experience with software development or SBOMs, through the entire process of using SBOMs for vulnerability management purposes.

Second, there needs to be one of these two items (and hopefully both):

·        An easy-to-install open source (or low cost) tool that at a minimum a) ingests an SBOM for a product and extracts component names - hopefully in CPE format, b) regularly (preferably daily) searches the NVD for vulnerabilities applicable to components identified in the SBOM, and c) removes from that list any vulnerability that has been identified by the supplier of the product as not exploitable in the product itself (the latter information may someday be communicated in a VEX document, but it might be communicated in other ways as well). A rough sketch of what such a tool might look like follows this list.

·        A third-party service that processes SBOMs and VEX information and provides to the customer the same list of exploitable component vulnerabilities provided by the hypothetical tool described above. Of course, since there are other sources of component risk besides just vulnerabilities listed in the NVD (such as the “Nebraska problem” and three others, mentioned in this post), the service will probably address these risks as well. It might also address risks due to vulnerabilities not listed in the NVD, but identified in other databases or other non-database sources.
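
To make the first item above more concrete, here is a minimal sketch of the kind of tool I have in mind. It assumes a CycloneDX-style JSON SBOM and the public NVD REST API; the endpoint, parameters and field names shown are my assumptions based on public documentation, and the "not exploitable" file is a stand-in for whatever channel (VEX or otherwise) the supplier uses. This is an illustration, not a finished tool.

```python
# Minimal sketch of the tool described in the first bullet above.
# Assumptions: a CycloneDX-style JSON SBOM, the public NVD REST API (endpoint and
# field names may differ from what's shown here), and a plain text file standing in
# for whatever channel the supplier uses to say "this CVE isn't exploitable".
import json
import urllib.parse
import urllib.request

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"  # assumed endpoint

def load_components(sbom_path):
    """Yield (name, version, cpe) for each component in a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    for comp in sbom.get("components", []):
        yield comp.get("name"), comp.get("version"), comp.get("cpe")

def cves_for_cpe(cpe):
    """Ask the NVD for CVEs that apply to one CPE name (no paging or API key here)."""
    url = NVD_API + "?" + urllib.parse.urlencode({"cpeName": cpe})
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return {v["cve"]["id"] for v in data.get("vulnerabilities", [])}

def exploitable_component_cves(sbom_path, not_exploitable):
    """Steps a) through c): list per-component CVEs, minus the supplier's exclusions."""
    report = {}
    for name, version, cpe in load_components(sbom_path):
        if not cpe:  # many components won't have a usable CPE - the "naming problem"
            continue
        cves = cves_for_cpe(cpe) - set(not_exploitable)
        if cves:
            report[f"{name} {version}"] = sorted(cves)
    return report

if __name__ == "__main__":
    with open("not_exploitable_per_supplier.txt") as f:  # hypothetical file name
        skip = [line.strip() for line in f if line.strip()]
    print(json.dumps(exploitable_component_cves("product_sbom.json", skip), indent=2))
```

Run something like this daily against the latest SBOM for each product and step b) falls out of the scheduling; the hard parts in real life are naming (CPEs) and getting the "not exploitable" information at all.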

Of course, neither of these items is available today – otherwise I wouldn’t have written this post. I’m sure at least the service will be available by the August deadline for federal agencies to start requesting SBOMs from their software suppliers. I’m not sure about the tool, mainly because VEX isn’t finalized yet, and SBOMs without VEX information just aren’t going to be very useful. A third party service provider would hopefully be able to get VEX-type information directly from the suppliers, whether or not VEX is finalized – since the supplier should be quite interested in getting the word out about unexploitable component vulnerabilities, and the supplier and the third party can easily work out their own format for communicating VEX-type information.

So is what I’ve just described my Road to Damascus moment? No, this is something I’ve come to realize over the last four months. My RtD moment occurred when I asked the question – prodded in part by a suggestion from a friend and client who I won’t name, since I haven’t had a chance to ask his permission for this – why software users should have to bear the responsibility for identifying exploitable software component vulnerabilities. I now think the software suppliers should bear that responsibility. I have three reasons for saying this:

1.      Currently, you usually learn about vulnerabilities in a software product that you operate from the supplier of the product. You receive an emailed notice directly from the supplier, or at least the supplier reports the vulnerability to the NVD, where you discover the vulnerability (hopefully, both will happen). But a lot of exploitable component vulnerabilities aren’t currently reported by suppliers, using either method. What is it that makes component vulnerabilities different from vulnerabilities identified in the supplier’s own code, other than the fact that you won’t normally be able to find out about component vulnerabilities without…envelope, please…an SBOM? Nothing that I know of.

2.      Does it really make sense to say to each customer of a software product, “You’re responsible for finding component vulnerabilities in Product A and maintaining those lists day in and day out”? This even though the supplier is already gathering this information (or at least they should be)? If the suppliers provide this information to their customers, the latter will only need SBOMs and VEXes (and a tool or service to process them) as a way of checking to make sure the supplier hasn’t left any exploitable component vulnerabilities off the list (of course, this assumes that all suppliers immediately accept this responsibility. Nice idea, but ain’t gonna happen). But the users won’t bear the responsibility for learning about the vulnerabilities in the first place.

3.      The third reason makes a lot of sense to me from an economic point of view: The party that introduces a risk for their own benefit should bear the burden of remediating that risk. As everyone knows, developers’ use of third-party components (both open source and proprietary, but mostly the former) has ballooned in recent years, but so has component risk. Log4j, Ripple20, Heartbleed, Apache Struts/Equifax and other disasters have been the result. In fact, having the suppliers be responsible for identifying and tracking component vulnerabilities isn’t a big increase in their current responsibilities, since they should already be doing the hard part now – patching those component vulnerabilities after they identify them and determine they’re exploitable (which, fortunately for the suppliers, is usually not the case).

But it’s not like the users won’t have to do anything at all about component vulnerabilities. They’ll still need to track down and identify – using configuration and vulnerability management tools – the vulnerable software on their network and apply the patches for exploitable component vulnerabilities, which will hopefully be quickly forthcoming from the suppliers. But instead of waiting for the vendors of those tools to develop the capability to ingest SBOMs and VEXes in order to obtain information on component vulnerabilities (something it seems not a single major vendor has done so far), the supplier of the software (or again, a third party they’ve engaged for this work) could provide a feed that follows the vendor’s API – meaning the vendor will have to expend exactly zero effort in order to become “SBOM-ready”. This alone is a huge benefit to users, since waiting for the tool vendors to become SBOM-ready seems like Waiting for Godot: He can’t come today, but for sure he’ll come tomorrow…

And there’s another huge advantage to the idea of making the suppliers responsible for component vulnerability management: the need for VEXes largely goes away. This is for a simple reason: VEX was designed primarily for a supplier to communicate to customers – who have SBOMs and are looking up component vulnerabilities in the NVD – that certain component vulnerabilities aren’t exploitable in the product itself. The user needs VEXes, because if they start looking in the product for every component vulnerability listed in the NVD, some huge percentage of the time they spend doing this (as well as the time they spend calling their supplier about component vulnerabilities they find in the NVD) will be wasted. The customer needs the VEX in order to winnow the list of component vulnerabilities down to only the small percentage that are exploitable.

But if the supplier itself is responsible for producing the final list of exploitable vulnerabilities, the communication that would otherwise require a VEX would be completely internal: Whoever determines whether a vulnerability is exploitable or not would send an email to the person who is responsible for the final list of exploitable vulnerabilities for the product (in fact, they may be the same person). The email would say something like, “Remove CVE-2021-12345 from the list of exploitable vulnerabilities in Product X. Even though the NVD shows that vulnerability is found in Component A, and even though Component A is included in X, CVE-2021-12345 isn’t exploitable in X. This is because A is a library, and we never included the vulnerable module in X in the first place.”[ii]
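
Just to make that internal hand-off concrete, here is a sketch of how such a determination could be recorded and applied programmatically. The record structure below is purely illustrative and loosely inspired by the VEX discussions; the field names are my assumptions, not the (still unfinished) VEX format.

```python
# Illustrative only: a hand-rolled "not exploitable" record and a helper that applies it.
# Field names are assumptions; this is not the VEX format, which isn't finalized.
not_exploitable = [
    {
        "product": "Product X",
        "cve": "CVE-2021-12345",
        "component": "Component A",
        "status": "not_affected",
        "reason": "the vulnerable module of Component A was never included in Product X",
    },
]

def prune(product, component_cves, statements=not_exploitable):
    """Drop CVEs the supplier has determined are not exploitable in this product."""
    skip = {s["cve"] for s in statements
            if s["product"] == product and s["status"] == "not_affected"}
    return {comp: [c for c in cves if c not in skip]
            for comp, cves in component_cves.items()}

# The NVD shows CVE-2021-12345 against Component A, but Product X isn't affected,
# so only the hypothetical CVE-2021-67890 survives the pruning.
print(prune("Product X", {"Component A": ["CVE-2021-12345", "CVE-2021-67890"]}))
```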

There will still be some need for VEXes, including what may become the main need for SBOMs – to make sure the supplier is properly identifying exploitable component vulnerabilities in their products. But VEXes won’t be a gating factor for the use of SBOMs at all, which is what they are today.

My unofficial slogan is “Often wrong, but never in doubt.” I have to admit that it seems too good to be true, that having the supplier take responsibility for component vulnerability management would be a win-win in so many ways. If you think I’m full of ____, please let me know. But I will interpret your silence to indicate total agreement (unless you fell asleep before you read this).

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] The leader of the Dependency Track project is the same person who leads the CycloneDX project, also under OWASP: Steve Springett, an active member of the NTIA-but-now-CISA Software Component Transparency Initiative (and someone who lives about ten miles from me in suburban Chicago. I suggested to Steve that we meet for lunch recently, but he wanted to wait until we can eat outside. I’ve been looking for restaurants in Chicago that have outdoor seating in January, but it’s hard to find any, for some reason). 

[ii] This is the situation described with respect to the Log4j vulnerability in this post.

Sunday, December 26, 2021

Something else that would have helped with log4j

There have been a number of good email discussions on log4j in the mailing lists of the NTIA Software Component Transparency Initiative (these will be replaced by another list under CISA’s auspices soon). Last week, the discussion in one list veered into how the National Vulnerability Database (NVD) doesn’t provide a lot of help on the log4j problem, because the vulnerability was just located in one component module of Log4j, called log4j-core.

There are potentially lots of components in log4j[i] (here is a list of them all), but they’re never all installed at once. What’s installed will vary according to the version of log4j, as well as the needs of the software developer that included log4j in the product they were developing. To quote Steve Springett of OWASP, co-leader of the CycloneDX SBOM format project, and leader of the Dependency-Track project:

Not all modules will be vulnerable. Take a common scenario that occurred over the last week. Many organizations that rely solely on the log4j-api module needlessly upgraded even though they were not affected. Only log4j-core was impacted. Log4j-core has a dependency on log4j-api, but not the other way around. So if you’re only using the API and not the core logging functionality, there was no reason to upgrade.

In other words, if it had been possible to learn from the NVD that only the log4j-core module of log4j was affected, that would have saved a lot of people a lot of time (and heartburn) searching for and patching log4j in products that didn’t include log4j-core at all. How would it have been possible to learn this? To simplify things greatly, if the NVD could ingest SBOMs and associate the components of a software product with the product itself, then vulnerabilities that apply to a component of a product could be seen as applying to the product itself.[ii]
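
For example, if you did have CycloneDX-style JSON SBOMs on hand for your products, a quick check like the sketch below would have told you which of them actually pull in log4j-core rather than just log4j-api. The directory layout and the assumption that components carry a Maven-style purl are mine; real SBOMs vary.

```python
# Sketch: search a directory of CycloneDX JSON SBOMs (one per product, an assumed
# layout) for the log4j-core module, identified by its Maven package URL.
import json
from pathlib import Path

LOG4J_CORE = "pkg:maven/org.apache.logging.log4j/log4j-core"

def products_with_log4j_core(sbom_dir):
    hits = []
    for path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(path.read_text())
        for comp in sbom.get("components", []):
            if comp.get("purl", "").startswith(LOG4J_CORE):
                hits.append((path.name, comp.get("version")))
                break
    return hits

for product, version in products_with_log4j_core("./sboms"):
    print(f"{product}: contains log4j-core {version}")
```

Note that this only looks at the components listed at the top level of each SBOM; since log4j is usually a component of a component, a real check would also have to walk nested components or the dependency graph.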

However, the logistics of implementing this capability would be formidable, in no small part because the number of entries in the NVD would increase at least 20-50-fold, and the possible relationships among entries would increase exponentially. So don’t look for this to happen very soon[iii]. Until it does, you’ll have to rely on the supplier that developed the software product you run, to tell you whether or not that product is affected by a vulnerability, no matter what “tier” of components it’s on.

And if the supplier tells you they just don’t know this and you’ll have to figure it out for yourself, well…you might want to look for a different supplier. This isn’t an acceptable answer (more on this topic coming soon to a blog near you).

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] And remember, log4j is itself a component of other products, so these are components (also called dependencies) of a component, which are sometimes referred to (not completely accurately) as “second-level” components. As a frequent abuser of this term, I can only defend myself by saying it does help you visualize the idea of “nested” components, at some loss of accuracy. However, Steve Springett’s comment provides an example of where the idea that components are all nestled in neat little levels breaks down, since he mentions that log4j-api can sometimes be a component of log4j-core, but not the other way around. This means that log4j-api can appear at different levels in different SBOMs – and this applies to every other component as well. In fact, a single component could appear at different levels in a single product, although in general that would be due to a poor development practice.

[ii] Of course, it would be misleading to associate every component vulnerability with the product that contains the component, since a large percentage – certainly more than 50% of component vulnerabilities - aren’t exploitable in the product itself. It would be up to the supplier of each product to provide notifications to the NVD whenever a component vulnerability isn’t exploitable in the product. The NTIA (now CISA) Initiative is working on a format for these notifications called VEX, but it’s possible there will be other formats as well. And I think that there might someday be another framework for vulnerability notifications, in which non-exploitable component vulnerabilities would never appear in the first place. 

[iii] Steve also pointed out that the PURL specification does permit components of a product to be associated with the product – he pointed to the Sonatype open source software vulnerability database, which is based on PURL (instead of CPE); he also provided the URLs for log4j-api and log4j-core. Note that I don’t know whether the Sonatype database lets a product “inherit” vulnerabilities from its components, meaning that you can see all of the vulnerabilities that apply to any component of a product, when you view the product itself. I doubt it does that.

Thursday, December 23, 2021

Will having SBOMs prolong your life? Absolutely!


There’s been a lot written about the role of SBOMs (software bills of materials) in addressing widespread vulnerabilities like log4j (not that there have been many vulnerabilities as serious as log4j). It runs the gamut from “If you had SBOMs, you’d know in ten seconds whether any of the 10,000 software products running on your network contains log4j, as well as any of their components, components of components, components of components of components…yea verily, unto the 20th degree of component” to “SBOMs will be quite helpful in finding log4j. An SBOM and $4.70 will get you a tall no-foam latte. That will keep you awake as you look for instances of log4j.”

What’s common to just about all of these statements is they’re written in the subjunctive mood: “IF we had SBOMs….” Because we simply don’t have SBOMs now, to any degree that would be very helpful in researching the simplest vulnerability, let alone as widespread a vulnerability as log4j.

I’d like to suggest that it’s not useful to speculate about what SBOMs could or could not do in this or any other case, any more than it’s useful to speculate about whether, if there is a multiverse, there’s a universe in which the Pope is Jewish.[i] What’s much more useful is to ask a question like, “What are the use cases in which SBOMs will help me, and what do I need in order to make those use cases real?”

SBOMs aren’t an all-or-nothing proposition, where you either have SBOMs for all of your software, or you have nothing. It would of course be nice to have a current SBOM for all software that you own (or rent in the cloud), but right now, you’ll be way ahead of the game if you have an up-to-date SBOM for even five software products on your network. After all, that’s probably five more SBOMs than you had last year.

What can you expect to have at the end of 2022? Executive Order 14028’s mandate for federal government agencies to require SBOMs from their “critical software” suppliers kicks in next August. Even if you’re not a government agency, it’s likely your suppliers will be able to provide SBOMs, since they’re already producing them for the agencies. You should at a minimum start asking your suppliers for them. And you should especially ask for a recent SBOM whenever you’re purchasing software.

But the bigger question is what you do with the SBOMs when you get them. While there is a use case for SBOMs for software license management (in fact, that was the first use case), that is mostly applicable to software developers. The SBOM use case for the overwhelming number of users (and the only use case addressed in the EO) is software vulnerability management. I break that down into three types:

The first is responding to emergency vulnerability notices like log4j and Ripple 20, in which the vulnerability (or vulnerabilities) isn’t applicable to particular final products, but to components of final products. While it would be good for you to have an SBOM for each product so that you know which ones contain log4j, what’s even more important is that you lean on your supplier to tell you whether or not log4j is a “first-level” component of their software, and also whether it’s a component of any of their first-level components (i.e. it’s a second-level component). But they shouldn’t just tell you this; they should issue a patch (or patches) to fix the problem.

Of course, if you had an SBOM that included both the first-level components and SBOMs for each of those components, you would then know whether the product is affected by log4j, down to the second level of components. That would be quite helpful, but it’s also unlikely that the supplier will give you an SBOM for each of the first-level components, since they often won’t be available from the second-level suppliers (most of which are probably open source communities). Again, something is better than nothing. Having the first-level SBOM and even a few second-level SBOMs is better than what you have now for almost all of your software: no SBOMs at all.

The second use case is reducing procurement risk – i.e. reducing the risk that you will purchase a product whose supplier doesn’t have a good program for vulnerability management. Notice that I don’t say “a product that is free of vulnerabilities”. Yes, it’s a good idea to ask for an SBOM for a product you’re considering for purchase, then check each of the components in the NVD to see if there are any serious open vulnerabilities applicable to them (you can define “serious” as you like, but a CVSS score of 7.0 or higher counts as serious in my book; lower scores might still be considered serious if accompanied by other risk factors). If you decide to purchase the product, you should require the vendor, in the purchase contract, to patch any serious component vulnerabilities, unless they say that the vulnerability isn’t exploitable.
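
As a rough sketch of that cutoff (assuming you have already pulled the CVSS v3 base scores for each component’s CVEs out of the NVD, however you query it):

```python
# Sketch: flag "serious" open vulnerabilities among a component's CVEs, using the
# CVSS v3 base score. The scores below are examples; CVE-2021-12345 is a placeholder.
SERIOUS = 7.0

def serious_vulns(cve_scores, threshold=SERIOUS):
    """cve_scores: dict mapping CVE id -> CVSS v3 base score."""
    return sorted(cve for cve, score in cve_scores.items() if score >= threshold)

print(serious_vulns({"CVE-2021-44228": 10.0, "CVE-2021-12345": 4.3}))
# -> ['CVE-2021-44228']
```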

However, vulnerabilities appear all the time. Just because you buy a product that’s free of serious vulnerabilities on Monday, this doesn’t mean that on Tuesday, five serious vulnerabilities in components won’t be identified. Often, this is because a researcher just discovered that a certain string of code, previously thought to be perfectly safe, can be exploited in a particular way so that bad things happen – and five of the components of the product happen to have that same code.

What’s most important for procuring secure software is the track record of the supplier. If you can get a current SBOM for the product (and hopefully at least a few previous SBOMs, say at least one for every six months), you can answer questions like:

1.      Does this supplier let components in their product get very long in the tooth before replacing them? If there are a lot of components that are 2-3 years old, or maybe five or six versions behind the current version, this should be cause for concern.

2.      If a serious vulnerability is identified in a component, on average how long does it take the supplier to replace it (this question might be hard to answer, unless you have an SBOM for every 2-3 months in the past. You may not get those, but it certainly wouldn’t hurt to ask for them)?

3.      Are there any open source components that are effectively end of life, since the community that supports the component is no longer providing patches or updates? This is a big problem. Unfortunately, there’s no gun that goes off to let you know that an open source component is EOL. To learn this, somebody (and there are services that do this) needs to watch activity in the project and raise a flag when there’s been no new activity for say six months. Another indicator is that there are serious vulnerabilities that haven’t been patched, months after they were identified.

4.      Does an important open source component suffer from the “Nebraska problem”, described by this famous cartoon? The cartoon illustrates the risk posed by an important open source project that is now dependent on a “single maintainer”. Again, there are services that will check for this, but you can also check for “commits” to the project on GitHub (a crude sketch of such a check follows this list). If only one person has committed for say six months or more, you should at least have a conversation with the supplier about this. You might also put a term in the contract that they will replace this component within say six months.
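
Here is the crude check mentioned in questions 3 and 4. It uses GitHub’s public commits API (so it only helps for projects hosted there), runs unauthenticated and therefore rate-limited, and the six-month window and thresholds are just my assumptions:

```python
# Crude sketch: flag possible end-of-life or "Nebraska problem" risk for an open
# source component hosted on GitHub, based on commit activity over six months.
import json
import urllib.request
from datetime import datetime, timedelta

def recent_commit_activity(owner, repo, months=6):
    since = (datetime.utcnow() - timedelta(days=30 * months)).strftime("%Y-%m-%dT%H:%M:%SZ")
    url = f"https://api.github.com/repos/{owner}/{repo}/commits?since={since}&per_page=100"
    with urllib.request.urlopen(url) as resp:
        commits = json.load(resp)
    authors = {c["commit"]["author"]["name"] for c in commits}
    return len(commits), authors

count, authors = recent_commit_activity("apache", "logging-log4j2")
if count == 0:
    print("No commits in six months: possible end-of-life component")
elif len(authors) == 1:
    print(f"Only one recent committer ({next(iter(authors))}): possible 'Nebraska problem'")
else:
    print(f"{count} recent commits from {len(authors)} different committers")
```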

There are lots of other questions you can ask as well, that having SBOMs will help you answer when doing your due diligence for procurement.

The third use case is probably the most important: ongoing vulnerability management for software operated by your organization. This includes a) getting regular SBOMs from your suppliers (say, at least for every major new version, and hopefully every time there has been any change in the software, including just applying a patch); b) keeping a running list of the components in each product; c) regularly checking in the NVD for vulnerabilities that apply to those components; d) adjusting the list of vulnerabilities for any notifications received from a supplier that a component vulnerability isn’t exploitable in the product itself (i.e. the type of notification that a VEX would provide); e) asking the supplier when they’re going to patch any serious component vulnerabilities that are exploitable in the product; and f) confirming that the supplier keeps any promises they make regarding patches.

This is obviously the most time-consuming use case, since it needs to be conducted day in and day out, for as long as your organization operates software (i.e. up until the day you decide computers aren’t worth it, and just buy a bunch of typewriters and postage stamps. At that point - but only then - you can bid farewell to SBOMs). However, this is also the use case for which you’ll be able to find service vendors that will perform these tasks for you, as well as perhaps other risk management tasks that you hadn’t thought of.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by the CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Of course, the answer to that question is “Definitely!” If there are an infinite number of universes (which is always the assumption with multiverses, since there would be no way to limit the number according to any scientific principle. After all, the laws of physics will be different in each universe in the multiverse, so you could never identify a constraint on their number), then there’s definitely one in which the Pope is Jewish, in which I’m rich, in which Covid-19 never appeared (now, that would be nice!), etc.

Sunday, December 19, 2021

My comments to NIST on their preliminary SBOM guidelines


NIST is required by last May’s Executive Order 14028 to develop guidelines for federal agencies to comply with the software supply chain provisions in Section 4 (e) by February 6, 2022. NIST was also required to draft these guidelines by November. Rather than prepare this draft as a separate document, NIST decided to kill two birds with one stone and include its contents inside the draft r1 of the venerable SP 800-161 publication (this is the first revision since it was developed in 2016).

When I learned that NIST had done this, I immediately downloaded the draft. My only concern at this point was the draft guidelines on software bills of materials (SBOMs), which are found on pages 242-246. I submitted comments to NIST on the SBOM guidelines. I’m reproducing them here, along with some other material that I didn’t submit with the comments. I’ve reproduced short passages from the NIST document in italics, with my comments on each passage immediately below it. Keep in mind that the EO only applies directly to federal agencies, so I’ve confined my comments to concerns that affect them (although in practice, I doubt there’s much if any difference between what federal agencies will have to do regarding SBOMs and what private sector organizations would need to do).

Ensure that comprehensive and current SBOMs are available for all classes of software including purchased software, open source software, and in-house software, by requiring subtier software suppliers to produce, maintain, and provide SBOMs (p. 243)

It is certainly a worthy goal that SBOMs be available for more than purchased software. That is, whether an agency a) purchases software from a commercial vendor, b) downloads an open source product to run on its network, or c) develops software in-house, it should certainly require an SBOM. But the only one of those items that is directly covered by the EO is software from commercial vendors.

Another category that may or may not be covered by the EO is intelligent devices (i.e. IoT and IIoT devices) and embedded systems. These aren’t included in the NIST definition of “critical software” (which was required by the EO), but I think they should be. When you think of infusion pumps in hospitals, electronic relays that operate the power grid, and PLCs that control operations in oil refineries, among lots of other devices, these are simply too important to ignore. In any case, there should be SBOMs for devices purchased by federal agencies, given their importance.

It would also be wonderful to have SBOMs for open source software, but a federal agency has close to zero control over the online communities that develop it. If they feel like developing an SBOM, they’ll do it, and if they don’t, they won’t. Plus there’s little likelihood that an open source community will develop a new SBOM whenever the software changes in any way, as advised by the NTIA documents.

An agency shouldn’t be found non-compliant with the EO because it failed to require an SBOM for an open source product it downloaded for its own use. What would be more constructive would be a program by NIST or another government agency to develop SBOMs for widely used open source software, since currently I know of no entity that does that, other than perhaps as a consulting project.

Also, this sentence seems to be saying that the way to get SBOMs for all classes of software is to require the suppliers to lean on the first-level component suppliers (which I assume is what is meant by “subtier software suppliers”) to provide SBOMs for the components. The two issues have nothing to do with each other, so this sentence doesn’t make sense overall.

However, the question of component tiers comes up regularly. How many component tiers should you require the supplier to provide in their SBOM? In general, my feeling is that, as long as a supplier can at least include most of the “first-tier” components in their SBOM, that in itself is an achievement (and since there can often be thousands of first-tier components, that alone can be a big challenge).

Note from Tom 2/5/22: Since I wrote this post in December, I've come to realize that it doesn't make a lot of sense to talk about "tiers" of components in a software product. This is because Component A could be a dependency of Component B, while a different instance of B could be a dependency of A - i.e. a circular reference. This is why CycloneDX only displays one "tier" of components at a time (i.e. the direct dependencies of the product itself, or of one of the components of the product). 

However, because the components can be "nested" in the SBOM, the entire "dependency graph" (i.e. all tiers) can be built up by including under every component its list of direct dependencies. This is a way of breaking the circular reference problem. In fact, the name CycloneDX is in some way a play on "circular reference", according to Steve Springett, the co-leader of the OWASP group that maintains and enhances the format. I'll also note that one great feature of CycloneDX (or CDX) is that, if you decide to move (or duplicate) a component within the overall dependency graph, all of the dependencies of the component will follow along with it.
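
Here's a stripped-down sketch of that idea: a flat list of per-component dependency entries (the "ref"/"dependsOn" names mirror the CycloneDX JSON approach, but treat the details as illustrative) from which the full graph can be walked, with a guard against circular references rather than any notion of fixed tiers.

```python
# Sketch: rebuild and walk a full dependency graph from flat per-component entries,
# the way CycloneDX-style SBOMs express it, instead of assigning fixed "tiers".
deps = [
    {"ref": "product-x", "dependsOn": ["log4j-core", "some-lib"]},
    {"ref": "log4j-core", "dependsOn": ["log4j-api"]},
    {"ref": "some-lib", "dependsOn": ["log4j-api"]},  # same component, different depth
    {"ref": "log4j-api", "dependsOn": []},
]
graph = {d["ref"]: d["dependsOn"] for d in deps}

def walk(ref, depth=0, seen=()):
    """Print the dependency tree; 'seen' breaks any circular references."""
    print("  " * depth + ref)
    if ref in seen:
        return
    for child in graph.get(ref, []):
        walk(child, depth + 1, seen + (ref,))

walk("product-x")
```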

Since the risk posed by each tier diminishes the farther away it is from the first tier (i.e. a component vulnerability is less likely to be exploitable in the product itself, the more tiers there are between the component and the first tier), it’s certainly no tragedy if suppliers just list the first tier of components. Getting SBOMs for second or greater tier components is notoriously difficult (in part because 90% of components are open source, so there’s no supplier to lean on for an SBOM – but also because the supplier of the primary product isn’t the customer of say a second-tier component supplier. Therefore, the primary supplier has to require the first-tier component supplier to lean on the second-tier suppliers, etc. That becomes very difficult and time-consuming, and won’t work at all in the case of open source components). Requiring suppliers just to obtain even the second tier will place a huge burden on them, yielding minimal security benefit.

Maintain readily accessible SBOM repositories, posting publicly when required (page 244)

I’m pleased to see NIST requiring this. For several reasons, I think posting SBOMs is a lot better way of making them available, than trying to push a copy of every SBOM to every customer. Of course, many or most suppliers may want to post the SBOMs on an authenticated portal, rather than make them truly publicly available; other suppliers may be willing to post them publicly. That decision is up to the supplier to make.

Incorporate artificial intelligence and machine learning (AI/ML) considerations into SBOMs to monitor risks relating to the testing and training of datasets for ML models (page 244)

There are certainly ways in which AI can help in creating SBOMs, especially in addressing the “naming problem”. However, not much work has been done so far on this idea, and none at all by the NTIA Software Component Transparency Initiative. Requiring it for EO compliance is unhelpful and places an unnecessary burden on the suppliers.

Develop risk monitoring and scoring components to dynamically monitor the impact of SBOMs’ vulnerability disclosures to the acquiring organization. Align with asset inventories for further risk exposure and criticality calculations. (page 244)

This is an important requirement. However, this is part of a section that starts with “Departments and agencies, where possible and applicable, should require their suppliers to demonstrate they are implementing, or have implemented, these foundational SBOM components and functionality along with the following capabilities” (the three passages above are also part of that section).

The problem is that this passage isn’t a requirement for suppliers, it’s a requirement for the “departments and agencies” themselves. The supplier can’t help with these steps, and the burden of executing them shouldn’t fall on the suppliers. So this requirement needs to be put in a different section.

That being said, I wish to point out here that this is the only requirement in the entire SBOM section that relates to the agencies using the SBOMs. Every other requirement specifies something that the agency needs to ask their suppliers to do or provide. I hope NIST doesn’t expect the agency to simply drop an SBOM in the bit bucket as soon as they receive it – they need guidelines for how they will use it. 

The above passage is simply a very high-level formulation of a subset (albeit an important one) of the uses to which a federal agency can put an SBOM, to help it manage software supply chain cyber risk. It is as if, asked to write up guidelines for safe driving, I simply wrote "Avoid accidents".

With the EO raising the bar for software verifications techniques and other software supply chain controls, additional scrutiny is being paid upon not just the software the vendors produce, but the business entities within a given software supply chain that may sell, distribute, store, or otherwise have access to the software code themselves. Departments and agencies looking to further enhance assessment of supplier software supply chain controls can perform additional scrutiny on vendor SDLC capabilities, security posture, and risks associated with foreign ownership, control, or influence (FOCI).

Wow! Where to begin on this one?

First, what does it mean to say “business entities within a given software supply chain that may sell, distribute, store, or otherwise have access to the software code themselves”? Once the code is compiled – which is done by the original supplier – no other entity has access to the code, unless they first decompile it (which they're specifically prohibited from doing, as is the customer - and in any case, decompilation is hard to do, and can never be done perfectly). In other words, the first sentence of this section makes literally zero sense, and can be safely ignored. 

The fact is that the supply chain between the supplier and the customer poses about zero cyber risk - in fact, usually the actual bits that constitute the software are transferred directly to the customer when they download it. If there is an intermediary, e.g. a Microsoft dealer, they simply invoice the customer and transfer the license.

This isn’t to say that some organization within the supply chain couldn’t substitute a malicious product for a non-malicious one, or perhaps point the customer to a bogus download site that would provide them a malicious product. But that, like any other software download, should be protected by confirming the digital signature and in some cases a hash value. 

Regarding FOCI (foreign ownership, control, or influence), I understand there are certainly legitimate reasons for raising FOCI concerns, when the issue is hardware components of an intelligent device. But for software components, FOCI has almost no meaning. Consider the following:

1.      As I’ve already said, 90% of software components are open source. Open source communities don’t restrict their contributors to particular countries – in fact, they don’t even have a good way to verify in which country a participant resides. One small proprietary study of open source components in a major software product that I recently saw showed that every one of the communities behind those open source components contained at least one member from Russia, China or Iran. Probably this is very typical for open source components. Should you remove all software from your network that has “tainted” open source components like these?  My guess is you may end up severely limiting your software procurement options if you do.

2.      And even commercial software is developed with teams from all over the world. As Matt Wyckhouse of Finite State pointed out last year, Siemens has 21 R&D hubs in China and employs about 5,000 Chinese staff there. Does this mean you should dump all your Siemens software, as well as the Siemens hardware that runs Siemens software?

3.      And since open source software usually lists its contributors by email address, how do you know that someone from say Cuba isn’t using a Gmail account?

This isn’t to say that, in cases where there might be some reason to be suspicious of the components in a particular product, it isn’t worthwhile for a software customer to conduct a FOCI investigation of software. It also isn’t to say that, if there is a service available that examines FOCI of software components, it wouldn’t be worthwhile for the customer to utilize that service. But it would certainly be a bad idea to make a rigid rule like “We will not purchase any software that contains any open source component that includes a participant with a Chinese email address.”

Include flow-down requirements to sub-tier suppliers in agreements pertaining to the secure development, delivery, operational support, and maintenance of software. (page 245)

In my opinion, this requirement will cost suppliers a lot of money and effort, while producing very little benefit for their customers, the federal agencies. Let’s start with the fact that 90% of components are open source, and the software supplier has literally zero control over the terms under which they acquire open source components.

Regarding the remaining 10% of components, the product supplier does usually have agreements with commercial component suppliers, and they can certainly ask those suppliers to agree to meet certain criteria for development, delivery, etc. But what if a component supplier refuses? Their product is usually low cost and is used in thousands of products. If one of their customers, a final product supplier who is acting at the request of a federal agency, asks them to meet these criteria, they might well decide that the additional paperwork, etc. outweighs whatever they make from that customer.

The final product supplier will then have to tell the federal agency (their customer) that they’re unable to reach an agreement with the one component supplier. What should the agency do at this point? Drop this supplier altogether? Give them an ultimatum to replace the component made by the recalcitrant component supplier with one from a more amenable component supplier, despite the fact that components are seldom direct replacements for one another, and there will often not be any alternative component supplier?

The sentence listed above needs to be qualified with “If possible”, “If applicable”, etc. It shouldn’t be left as an absolute requirement.

Automatically verify hashes/signatures infrastructure for all vendor-supplied software installation and updates (page 245)

There can be many speed bumps in the road to automatic verification of hashes and signatures – in fact, the NTIA Initiative has spent endless hours discussing problems with creating and verifying component hashes, and is still nowhere near reaching a conclusion (there might be a daylong technical meeting next year on this subject, where the participants would be locked in a room – difficult to do for a virtual meeting, I’ll admit – until they come to some sort of agreement). The words “try to”, “where technically feasible”, “where possible”, etc. need to be included in this passage as well.

Ensure suppliers attest to and provide evidence of utilizing automated build deployments, including pre-production testing, automatic rollbacks, and staggered production deployments (page 245)

It is good if a supplier has all of these capabilities in place, but not all of them do. Is this an absolute requirement, meaning that no federal agency should ever procure software from a supplier who doesn’t meet each of these conditions? I for one believe that the incremental security benefit that would be gained from leaving this as an absolute requirement is more than offset by the fact that agencies would have to abandon suppliers they may be quite comfortable with, or whose product is unique in meeting a particular need of theirs. This requirement should be qualified with “Where possible”, as in the previous cases.

Lines 8461 to 8482 (page 246)

As unaccustomed as I am to saying nice things, I want to applaud NIST for these 32 lines about open source software controls. Note that these controls need to be applied by software suppliers, so federal agencies should request that their suppliers implement them, regarding the open source components in their software. Everything NIST recommends in this section is good, but here are my two favorites:

Apply procedural and technical controls to ensure that open source code is acquired via secure channels from well-known and trustworthy repositories

This is a real problem. There have been a number of attacks in which a software supplier has downloaded a component containing a backdoor in place of a legitimate component – sometimes through typosquatting, sometimes through substitution inside the repository, and sometimes through other means – and then included it in their software. The supplier needs to take care when downloading.

Supplement SCA source code-based reviews with binary software composition analyses to identify vulnerable components that could have been introduced during build and run activities

There are a number of problems having to do with software security and SBOMs that don’t have any neat solution; this is one example. This requirement points to the problem caused by the following facts:

1.      There’s general agreement that the best time to create an SBOM for a software product is as part of the final build.

2.      However, when you go to install that product, a number of other items often get downloaded with it: installers, separate libraries that are utilized by the software (so-called “runtime dependencies”), a container, etc. If any of these items contains vulnerabilities, those vulnerabilities can do as much damage as if the product itself contains those same vulnerabilities. The SBOM created during the final build won’t show any of these components (or their vulnerabilities).

3.      A binary software composition analysis tool will identify many of the components in these other items and alert the supplier to vulnerabilities in them. When they learn about these vulnerabilities (hopefully prior to actually providing the full download to customers), the supplier should patch them or take some other measure to remove the risk posed by the vulnerabilities. Otherwise, the supplier’s promise that their software is free of vulnerabilities when downloaded is really made with their fingers crossed, since the full package that the user runs may contain vulnerabilities anyway.

4.      You might well ask, “If the SBOM for the installation files is more complete, why doesn’t the supplier provide that to their customers, instead of the SBOM created from the final build?” Good question. The answer is that the SBOM for the installation files has to be developed through binary analysis, and that’s inherently more prone to errors and omissions than a build that uses source code – so in some cases, providing the full installation SBOM might cause more confusion than it would alleviate. The supplier should use a binary analysis tool to create an SBOM for all of the installation files and use that to identify and remediate vulnerable components in the installation files. However, the decision whether or not to share the latter SBOM with a particular customer should be a joint decision by the supplier and the customer (many customers will be happy to receive just the build time SBOM, since that one is essential).

5.      What should the agency do if they ask the supplier for an “installation SBOM”, and the supplier refuses? If they’re involved in a high-assurance use case, they might want to procure their own binary analysis tool and create the installation SBOM on their own; or they might hire a consultant to do that. In non-high-assurance cases, the agency may decide they will accept the risk that comes with not knowing the components in the installation files, and therefore not being able to learn about their vulnerabilities.

Wednesday, December 15, 2021

Sometimes government regulation isn’t such a bad thing

My previous post pointed out a contradiction between two of the components of the draft IoT device labeling program that NIST is developing, which will be finalized in guidance due on February 6, 2022. Those components were NIST’s desire for a binary label (i.e. a seal of approval), and their desire to have outcomes-based criteria for that label, which IMO preclude a binary yes-or-no decision on which devices should receive the label.

But there’s another contradiction in what NIST is suggesting for the program, which is much more fundamental and has to do with how the program will be run. In the Discussion Draft they put out recently on this subject (and to some degree in the web workshop they had last week), they stated (page 1) their philosophy for the program very elegantly:

NIST will identify key elements of labeling program in terms of minimum requirements and desirable attributes – rather than establishing its own program; it will specify desired outcomes, allowing providers and customers to choose best solutions for their devices and environments. One size will not fit all, and multiple solutions might be offered by label providers.

What’s not to like about this? As a former student of Milton Friedman (a few years ago) at the University of Chicago, I’m all for letting markets determine for themselves how commerce should be conducted. But this assumes that no individual player in the market has the power to impose costs on other players that can’t be fully compensated through the legal system.

Friedman had a great illustration for this principle: If a truck owned by Commonwealth Edison (the electric utility in Chicago, now part of Exelon, Inc.) hits my car, I have the right (or really my insurance company does) to sue them for the full damages; the damages can be clearly identified and quantified. However, if ComEd pollutes the air in Chicago (at the time they operated a couple big coal-fired plants just 4 or 5 miles from the U of C) and my daughter later develops asthma[i], it’s not at all clear that the asthma was ComEd’s fault.

Plus, ComEd can certainly argue that, while their coal plants might have increased my child’s chances of getting asthma, there are many other contributors to air pollution as well. Who could possibly sort all of these contributions out and determine the precise value of each source’s contribution to my daughter’s asthma? In cases like this, markets alone can’t assure fair outcomes for all participants. Government needs to step in.

In Friedman’s example, environmental regulations are required to prevent the pollution that could lead to my daughter getting asthma – and if ComEd doesn’t follow those regulations to a T, this fact allows me to sue ComEd for the cost of her asthma (although such suits are usually big class actions nowadays), without having to prove a specific connection between the pollution and the asthma. That is, regulations set boundaries for actors in free markets. To quote Robert Frost, “Good fences make good neighbors.”

This is all a roundabout way of saying that, while it’s great to set up a regulatory program - as NIST wants to do - in which the rules and their implementation are left up to the participants in the program, there needs to be some government entity that in the end ensures the rules are fair ones and they’re enforced fairly.

What bothered me about NIST’s idea for governance of the IoT device labeling program is that they seem to want to turn the entire program over to one or more “consumer labeling scheme owners”. How they’ll be chosen, how many there will be (one? Twenty?), as well as how the boundaries will be set between them, aren’t specified (at least they aren’t yet).

Even more importantly, the parameters of the program will be up to the scheme owners. To quote NIST (page 2), “The scheme owner would be responsible for tailoring the product criteria, defining conformity assessment requirements, developing the label and associated information, and conducting related consumer outreach and education.” In other words, there aren’t any real limits on the program a scheme owner can put together, other than that a) they have to use the criteria that NIST has identified (in the Discussion Draft), and b) the label needs to be binary. So what would happen if a scheme owner:

1.      Decides to assess devices against the criteria as leniently as possible (see my previous post, where I discussed the impossibility of performing a true binary assessment based on outcomes-based criteria), while at the same time charging high fees for assessments? The message will be clear to IoT device manufacturers: “The price this scheme owner charges me for the assessment is high, but I’m sure they’ll give me a label. It’s much better giving this scheme a shot, rather than going to a different scheme owner, who might really assess the device and not give me the label if they think I’m deficient.”

2.      Decides not to bother with the “consumer outreach and education” part? After all, the entities that are going to pay the fees to the scheme owner are the manufacturers, not the consumers. Given that there will probably be a lot of consumers who initially are willing to fork over an extra $10-$20 for a security camera that carries a cybersecurity label, why not charge a high assessment fee to manufacturers that see the importance of getting a label immediately, so they can sell to these early adopters? Then close up shop and let another scheme owner do the hard work of outreach and education, to grow the market further?

3.      Decides to poach another scheme owner’s market? Let’s imagine that one scheme owner has had success providing labels for baby monitors; now they decide they ought to try their hand at a labeling scheme for smart appliances, despite not knowing anything about home appliances. But there’s already another scheme owner doing well in that market, and they’re not at all pleased at the idea of having an organization with no experience in appliances jump into that market and perhaps hand out meaningless labels, just so they can collect the assessment fees. NIST specifically points out that they don’t want to see consumers getting confused by multiple labels for similar products, but that’s exactly what would be the outcome in this case. Who is going to mediate this dispute?

During the web workshop, I asked in the chat how the scheme owners will make money. That didn’t get answered in the workshop, and it doesn’t seem to me that the NIST staff members who are developing the guidance for the IoT device labeling program have even asked themselves that question. But they really should. If you want to have private industry develop and run a regulatory program, you need to make sure you understand how they will make money doing this; if they can’t make money by what we’d consider “legit” means, they’ll find other means to make it. Those “other means” are likely to involve the scheme owners doing things that are unfair to other scheme owners, to manufacturers, and potentially to consumers as well, as shown in the three hypothetical examples above.

Yet preventing these abuses will require some authority higher than the scheme owners to set and enforce rules. Let’s be clear: at a minimum, some government agency needs to police the device labeling program so that it operates fairly for all, even though the specific details of how the program will work are left up to the scheme owners.

In the web meeting, the presenters made clear – as I knew already – that NIST doesn’t run programs like this and isn’t a regulator. But how about another federal agency? In fact, the Executive Order, in the paragraph that orders the IoT device labeling program (which I quoted near the beginning of my last post), states that the Director of NIST should coordinate “with the Chair of the Federal Trade Commission” in developing the program and the criteria. Yet I haven’t seen or heard one word about the FTC, or any other agency, being involved with this program. Sure, NIST isn’t a regulator, but that doesn’t mean they can’t find a regulator within the government to set and enforce rules for this program.

So how about turning over implementation of the device labeling program to the FTC? They’re great at writing and enforcing rules. This doesn’t mean that NIST’s ideas need to go out the window. In general, I like the idea of having multiple scheme owners providing label schemes for different markets; and I also like the idea that the scheme owners should be free to construct their own programs.

But just like any other economic activity, there need to be government-enforced rules to make sure everyone is treated fairly. Anyone is free to build cars or trucks in whatever way they see fit. However, they all have to follow the safety standards set by the National Highway Traffic Safety Administration (NHTSA). When they advertise their car, they have to make true statements in their ads, or the FTC will come down on them. And when they communicate with shareholders, they need to follow the information disclosure guidelines set by the Securities and Exchange Commission (SEC). All this regulation obviously hasn’t prevented companies like Tesla and Rivian from producing innovative new cars and trucks that consumers and businesses want to buy.

So I like NIST’s ideas for allowing experimentation – and even competition – by the scheme owners. But make no mistake: This is a regulatory program. The EO ordered the government to develop a program to raise the level of cybersecurity in consumer IoT devices by creating a precious commodity – the label – that device manufacturers will desire because they know that a label will increase their sales. And even more importantly, not having a label might cause a manufacturer to go out of business.

NIST, please turn the implementation and operation of the IoT device labeling program over to a regulator to ensure it’s successful. I can guarantee it will fail if you don’t do that. Good fences make good neighbors.

P.S. My client Red Alert Labs will submit both this post and the previous one (slightly edited) to NIST as comments on the Discussion Draft paper.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] To be honest, Friedman didn’t talk about asthma as the consequence of pollution. He talked about someone’s shirt collar getting soiled. Unfortunately, Friedman tended to minimize the damages that could be caused by businesses. This tendency got worse later on, especially in his Newsweek columns, where he started publishing ideas that were based not on economic analysis but on his personal opinions, while leaving the impression that they were based on such analysis (more specifically, the statements rested on assumptions that he didn’t make clear, such as the assumption that pollution, while definitely harmful, doesn’t really threaten health). Today, most of the politicians who quote him have no knowledge at all of what he wrote as an economist, and base their polemics entirely on those columns and a couple of cringeworthy books like Free to Choose.

Monday, December 13, 2021

NIST’s hopefully-not-final ideas for an IoT device “labeling program”


I’ve been working with an interesting client since last spring: Red Alert Labs, a French company that specializes in helping IoT device manufacturers understand and comply with cybersecurity regulations and standards, as well as helping organizations that utilize IoT devices (of course, that’s just about every organization in the developed world) understand how they can do so securely.

In the course of that work, I’ve written several posts about what’s going on with IoT regulation in the US, the most recent of which is this one. In a nutshell, there are two current programs to regulate IoT in the US. One is the IoT Cybersecurity Improvement Act of 2020, while the other is the consumer IoT device labeling program, described in the Executive Order of this past May. Here is how the EO described the labeling program:

Within 270 days of the date of this order, the Secretary of Commerce acting through the Director of NIST, in coordination with the Chair of the Federal Trade Commission (FTC) and representatives of other agencies as the Director of NIST deems appropriate, shall identify IoT cybersecurity criteria for a consumer labeling program, and shall consider whether such a consumer labeling program may be operated in conjunction with or modeled after any similar existing government programs consistent with applicable law. The criteria shall reflect increasingly comprehensive levels of testing and assessment that a product may have undergone and shall use or be compatible with existing labeling schemes that manufacturers use to inform consumers about the security of their products. The Director of NIST shall examine all relevant information, labeling, and incentive programs and employ best practices. This review shall focus on ease of use for consumers and a determination of what measures can be taken to maximize manufacturer participation.

Does this give you a clear idea of what the program should be about? If you say yes, you either didn’t read this carefully or you’re a liar. This paragraph was deliberately made as vague as possible, in order not to constrain NIST in what they develop. That’s all well and good, but February 6 is the 270th day after the EO date, so by then, NIST has to have their final answers on both a) how the program will work, and b) what criteria will be used for the program.

NIST published draft criteria in August, and my guess is there wasn’t a lot of serious criticism of them (there was a comment period, and the comments – including the ones I helped prepare – seemed quite positive). They’re a subset of the criteria in NISTIRs 8259A and 8259B, which I think are excellent starting points for IoT device cybersecurity (NISTIR 8259A is about technical capabilities – or lack thereof – in the devices themselves, while 8259B is about the manufacturer’s practices and programs).

But besides publishing draft criteria, NIST was also required to draft how the consumer IoT device labeling program itself will work. That document appeared in early December, and was then discussed in a four-hour web meeting last Thursday (the recording should be available here by December 23). BTW, the webinar had 700 registrants from all over the world. I’ll bet the final number of viewers will be even higher, once the recording has been posted.

The NIST presenters made a number of interesting individual points during the workshop, and various issues were raised in the chat and as questions. Of course, the whole point of the workshop was to learn what the issues were, and the staff seemed eager to hear about them. If you still want to comment, NIST will accept emailed comments – sent to labeling-eo@nist.gov – up until later this week (I believe the deadline is Thursday the 16th). Because NIST is under a very tight deadline for February 6 (the EO gave NIST a whole slew of deadlines for that date), they didn’t have a formal comment period this time.

While I have some minor issues with what NIST said, I have two major ones. These both have to do with what I see to be fundamental contradictions in what they’re proposing. If not addressed, I believe that either of these might sink the whole IoT device labeling program. I will address the first of those contradictions in this post, and the second in another post in the near future.

The first contradiction has to do with the type of label. This doesn’t mean whether it should be on white or yellow paper: NIST has already decided that there will be a small physical label attached to the IoT device (or perhaps included on a sheet of paper with the device), and that the label will carry a URL or QR code that takes the consumer to a much more detailed discussion on the web.
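Just to make those mechanics concrete, here is a minimal sketch of how a scheme owner or manufacturer might generate the QR code that bridges the physical label and the detailed web page. This is purely my own illustration, not anything NIST has specified; it assumes the open source qrcode Python library, and the URL is hypothetical.

# Minimal sketch: generate a QR code linking a physical label to a details page.
# Assumes the open source "qrcode" library (pip install qrcode[pil]).
# The URL below is hypothetical, for illustration only.
import qrcode

LABEL_DETAILS_URL = "https://example-scheme-owner.org/labels/ACME-CAM-1000"

img = qrcode.make(LABEL_DETAILS_URL)  # returns an image of the QR code
img.save("device_label_qr.png")       # artwork for the printed label or insert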

NIST distinguishes between three different types of labels, in the answer to question 10 on pages 21 and 22 of the Discussion Draft: descriptive, graded, and binary. NIST defines them this way:

1.      “A descriptive (or informational) label provides facts about properties or features of a product without any grading or evaluation.”

2.      “A tiered (or graded) label indicates the degree to which a product has satisfied a specific standard, sometimes based on attaining increasing levels of performance against specified criteria.”

3.      “A binary label (sometimes called a “seal of approval”) is a single label indicating a product has met a baseline standard. Examples include Energy Star [EPA], USDA Organic [USDA], and the government of Finland’s cybersecurity label [FINLAND].”

NIST indicates they have decided that a binary label is best, meaning the product meets the baseline criteria listed in the document (more on those in a moment); they set out their reasoning in the answer to question 11 on pages 22 and 23. While I’m not crazy about binary, up-or-down judgments in cybersecurity matters, there might be some cases where they would work – so for the moment, let’s just say I accept NIST’s decision.

However, this directly conflicts with another decision they have made: that the criteria should be “outcomes-based”, meaning that each criterion should list an outcome to be achieved, rather than a particular means of achieving that outcome. (Those of you who are veterans of the NERC CIP Version 5 Wars of the early part of the last decade – has it really been that long? – will remember lots of discussion of prescriptive vs. objectives-based requirements; of course, objectives-based means just about the same thing as outcomes-based.)

NIST didn’t just express a preference for outcomes-based criteria; they actually rewrote all of the criteria from the August paper as outcomes-based. For example, the August paper proposed this wording for the Logical Access to Interfaces criterion:

1. The ability to logically or physically disable any local and network interfaces that are not necessary for the core functionality of the product component.

2. The ability to logically restrict access to each network interface to only authorized persons or devices.

3. The ability of the product component to validate that the input received through its interfaces matches specified definitions of format and content.

4. The ability to authenticate individuals and other IoT product components using mechanisms appropriate to the technology, risk and use case. Authenticators could be biometrics, passwords, etc.

5. The ability to support secure use of authenticators (e.g., passwords) including:

a. if necessary, ability to locally manage authenticators

b. ability to ensure a strong, non-default authenticator is used (e.g., not delivering the product with any single default password or enforcing a change to a default password before the product component is deployed for use)

Of course, these sub-criteria are all quite measurable, but they don’t allow for a device to utilize other means to achieve the same goal of interface access control. Contrast the above with the wording for the same criterion in the Discussion Draft (although the criterion is now called Interface Access Control):

1. Each IoT product component controls access (to and from) all interfaces (e.g., local interfaces, network interfaces, protocols and services) so as to limit access to only authorized entities. At a minimum, the IoT product and its components shall:

a. Use and have access only to interfaces necessary for the IoT product’s operation. All other channels and access to channels are removed or locked down

b. For all interfaces necessary for the IoT product’s use, access control measures are in place (e.g., unique password-based multifactor authentication)

c. For all interfaces, access and modification privileges are limited for the interfaces and users of the interfaces

2. The IoT product, via some, but not necessarily all components, executes means to protect and maintain interface access control. At a minimum, the IoT product shall:

a. Validate data sent to other product components matches specified definitions of format and content

b. Prevent unauthorized transmissions or access to other product components

c. Maintain appropriate access control during initial connection (i.e., on-boarding) and when reestablishing connectivity after disconnection/outages

NIST has rewritten all of the other criteria from the August paper in the same way. Of course, I don’t object to outcomes-based criteria, and in fact I think that, in domains like cybersecurity and safety, they’re the only good option. However, it’s when you put outcomes-based criteria together with an up-or-down assessment program (in this case, a binary label) that you get a problem: The binary label essentially says, “This product[1] has met all of the baseline criteria established by NIST.” Someone is going to have to decide whether or not this is true for a particular device, and that decision has to be up or down; it can’t be “Well, they did a good job in this area, but not such a great one in this other area.” How will the decision be made?

In the case of prescriptive criteria (like the criteria in the August paper), it should be fairly easy to make such an up-or-down decision. As I look at the five sub-criteria in the first example above (and the two sub-sub-criteria under sub-criterion 5), it seems to me that it shouldn’t be hard to come to an objective decision on whether or not a particular IoT product meets each of them. For example, “The ability to logically restrict access to each network interface to only authorized persons or devices” could be assessed by a) identifying each network interface on the device (this would include both interfaces for network cabling and wireless “interfaces” like a Wi-Fi radio), and b) verifying that the device allows access to each interface to be restricted to only authorized persons or devices.
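To show how mechanical step a) can be when the criterion is prescriptive, here is a minimal sketch of one small piece of such an assessment: enumerating which TCP services a device exposes on a test network, before checking whether access to each can be restricted. This is my own illustration, not part of any NIST document; the device address and port range are hypothetical, and a real assessment would obviously go far beyond this.

# Minimal sketch: enumerate the TCP services a device under test exposes,
# as a first step toward verifying that access to each network interface
# can be restricted to only authorized persons or devices.
# The IP address and port range are hypothetical, for illustration only.
import socket

DEVICE_IP = "192.168.1.50"       # hypothetical address of the device under test
PORTS_TO_CHECK = range(1, 1025)  # well-known ports; a real scan would go further

open_ports = []
for port in PORTS_TO_CHECK:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.2)                         # keep the scan quick
        if s.connect_ex((DEVICE_IP, port)) == 0:  # 0 means the connection succeeded
            open_ports.append(port)

print(f"Exposed TCP services on {DEVICE_IP}: {open_ports or 'none found'}")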

Now let’s look at the first outcomes-based criterion from the Discussion Draft in the example above (the one starting “Each IoT product component…”), and specifically the three sub-criteria under it. Before I can determine whether a device satisfies those three sub-criteria, I need to answer a number of questions, including:

1.      How can any party but the device manufacturer itself determine which interfaces the device “uses and has access to”? That can’t be determined by looking at any physical characteristics of the interfaces. An assessor would need to examine the code that runs on the device. But even being able to look at the code won’t necessarily let an outsider determine whether a particular interface is used or not, unless they have experience writing device software. And let’s face it: examining all of that code would be a very expensive, time-consuming process. Plus this assumes that the developers of the software – usually not the device manufacturer – would be willing to provide the assessor with access to that code, in exactly the version that happens to be on the device. Oh…and don’t forget the developer of the real-time operating system (RTOS) that runs the device, like Microsoft or Wind River; they’ll need to fork over their code as well. Lots of luck getting that to happen.

2.      How can any party but the device manufacturer determine which interfaces are “necessary for the IoT product’s operation”? Even an examination of the code probably can’t answer this question. It really has to be the product’s designer, who presumably is an employee of the manufacturer. The designer might well be willing to answer this question for the assessor, but then the assessment becomes a self-attestation. There’s nothing wrong with self-attestations per se, but when an up-or-down binary label is on the line (and not receiving the label might have a serious negative impact on sales of the device), is it likely that the product’s designer will reveal anything that is going to jeopardize their being awarded the label? In other words, a binary label that depends on the self-attestation of the manufacturer isn’t worth the paper (or electrons) it’s printed on.

3.      In the third sub-criterion, how does an assessor determine whether “access and modification privileges are limited”? Does “limited” mean nobody is allowed to make any change to the device at all? Or should some people be allowed to make some changes, while one person (the super user) can make any change they want? Or maybe anybody can make any change they want, except, say, disabling the device altogether? These – and myriad other scenarios – could all be interpreted as “limited modification privileges”, as the sketch after this list illustrates.
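Here are three hypothetical access-control policies, each of which a manufacturer could plausibly claim satisfies “access and modification privileges are limited”. The roles and settings are invented purely to illustrate how much interpretive room the outcomes-based wording leaves; nothing here comes from the NIST documents.

# Three hypothetical policies, any of which could be argued to "limit"
# access and modification privileges. Invented for illustration only.

policy_read_only = {  # interpretation 1: nobody may change anything
    "viewer": {"read": True, "modify": False, "disable": False},
}

policy_tiered = {     # interpretation 2: some changes for some roles
    "viewer":    {"read": True, "modify": False, "disable": False},
    "operator":  {"read": True, "modify": True,  "disable": False},
    "superuser": {"read": True, "modify": True,  "disable": True},  # super user
}

policy_wide_open = {  # interpretation 3: anyone may change anything...
    "any_user": {"read": True, "modify": True, "disable": False},   # ...except disabling the device
}

An assessor handed any one of these could defend either an up or a down judgment on the criterion, which is exactly the problem with pairing outcomes-based wording with a binary label.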

A binary label is a very powerful thing. Receiving it or not might in some cases mean the difference between commercial success and failure of a product. A manufacturer that doesn’t get the label is going to be quite unhappy and challenge how the assessor arrived at their judgment on each criterion. And that’s what this assessment will really come down to – judgment. The assessor is going to have to answer a huge number of questions like the three above. They won’t have any objective way to answer most of them (except in some cases by just appealing to the manufacturer for an answer, which then creates its own problems). They will have to use their judgment to answer those questions, so they can award (or not) the label.

What’s the bottom line on all this? If you want to assess based on very prescriptive criteria (like those from the August NIST document), then a binary label is achievable. But if you want outcomes-based (or objectives-based, and there are other terms as well) criteria, you are going to open up a can of worms if you want to assess those in a binary fashion. With outcomes-based criteria, I think there is no “objective” way to make an up-or-down decision on whether the device meets a particular criterion.

Frankly, for the device labeling program, I think a descriptive (informational) label is the only way to go. Yes, the majority of consumers will have no idea what all that information means for them. But IMHO, it’s much better to give at least a few consumers good information that will let them make an informed decision than it is to give them a thumbs up that in the end rests on subjective judgments and will be subject to endless challenges, with no real resolution of those challenges.

If we were talking about clear, objective criteria, like whether the device is properly electrically grounded, there’s no question that binary is best; you don’t want to just give someone information and hope they make the right decision, when the device could electrocute them if they make the wrong decision. But cybersecurity ain’t that way, which is why there are very few cyber regulations that even call for up-or-down decisions. NIST would make a big mistake were they to try to implement a binary IoT security label.

P.S. In general, I think the only workable cybersecurity regulations are those that are risk-based – that is, the organization is required to implement a risk management program to cover a certain domain, such as supply chain security (as in NERC CIP-013) or software vulnerability management. The organization is judged on how well they’ve developed and implemented their program.

However, this was clearly not what the EO had in mind when it ordered labeling programs for consumer IoT devices and consumer software, and I agree it’s hard to see Joe and Mary Smith conducting a full cyber risk assessment before buying a baby monitor – although that would definitely be in order before, say, a military purchase of an IoT device.

But given that the EO did order a consumer IoT labeling program, I don’t see any good alternative to a descriptive label. Let’s save binary and graded labels for the commercial, industrial and government use cases, where they make some sense.

P.P.S. I have another important disagreement with NIST on the IoT device labeling program, which I will hopefully write about in the very near future.


Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[1] In NIST’s parlance, there’s a difference between an IoT device and an IoT product. The device is one component of the product; another component is the software, usually running somewhere in the cloud, that interacts with the device. A third component might be a hub through which the device connects to a network. The product consists of all of these components together.