NIST is required by last May’s Executive Order 14028 to develop
guidelines for federal agencies to comply with the software supply chain
provisions in Section 4(e) by February 6, 2022. NIST was also required to
draft these guidelines by November. Rather than prepare this draft as a
separate document, NIST decided to kill two birds with one stone and include
that content in the draft of revision 1 (r1) of the venerable SP 800-161 publication (this is the first revision
since it was developed in 2016).
When I learned that NIST had done this, I immediately downloaded the draft. The only part I was concerned with at that point was the draft
guidelines on software bills of materials (SBOMs), which are found on pages 242-246.
I submitted comments to NIST on the SBOM guidelines. I’m reproducing them here,
along with some other material that I didn’t submit with the comments. I’ve reproduced
short passages from the NIST document in italics, with my comments on each
passage immediately below it. Keep in mind that the EO only applies directly to
federal agencies, so I’ve confined my comments to concerns that affect them
(although in practice, I doubt there’s much if any difference between what federal
agencies will have to do regarding SBOMs and what private sector organizations would
need to do).
Ensure that comprehensive and current SBOMs are available
for all classes of software including purchased software, open source software,
and in-house software, by requiring subtier software suppliers to produce,
maintain, and provide SBOMs (p. 243)
It is certainly a worthy goal that SBOMs be available for
more than purchased software. That is, whether an agency a) purchases software
from a commercial vendor, b) downloads an open source product to run on its
network, or c) develops software in-house, it should have an SBOM in each case.
But the only one of those items that is directly covered by the EO is software
from commercial vendors.
Another category that may or may not be covered by the EO is
intelligent devices (i.e. IoT and IIoT devices) and embedded systems. These
aren’t included in the NIST
definition of “critical software” (which was required by the EO), but I
think they should be. When you think of infusion pumps in hospitals, electronic
relays that operate the power grid, and PLCs that control operations in oil
refineries, among lots of other devices, these are simply too important to
ignore. In any case, there should be SBOMs for devices purchased by federal
agencies, given their importance.
It would also be wonderful to have SBOMs for open source
software, but a federal agency has close to zero control over the online
communities that develop it. If they feel like developing an SBOM, they’ll do
it, and if they don’t, they won’t. Plus there’s little likelihood that an open
source community will develop a new SBOM whenever the software changes in any
way, as advised by the NTIA documents.
An agency shouldn’t be found non-compliant with the EO
because it failed to require an SBOM for an open source product it downloaded
for its own use. What would be more constructive would be a program by NIST or
another government agency to develop SBOMs for widely used open source
software, since currently I know of no entity that does that, other than
perhaps as a consulting project.
Also, this sentence seems to be saying that the way to get
SBOMs for all classes of software is to require the suppliers to lean on the first-level
component suppliers (which I assume is what is meant by “subtier software
suppliers”) to provide SBOMs for the components. But covering all classes of
software and obtaining component SBOMs from subtier suppliers are two separate
issues, so this sentence doesn’t make sense overall.
However, the question of component tiers comes up regularly.
How many component tiers should you require the supplier to provide in their
SBOM? In general, my feeling is that, as long as a supplier can at least
include most of the “first-tier” components in their SBOM, that is an
achievement in itself (and since there can often be thousands of first-tier
components, it can also be a big challenge).
Note from Tom 2/5/22: Since I wrote this post in December, I've come to realize that it doesn't make a lot of sense to talk about "tiers" of components in a software product. This is because Component A could be a dependency of Component B, while a different instance of B could be a dependency of A - i.e. a circular reference. This is why CycloneDX only displays one "tier" of components at a time (i.e. the direct dependencies of the product itself, or of one of the components of the product).
However, because the components can be "nested" in the SBOM, the entire "dependency graph" (i.e. all tiers) can be built up by including under every component its list of direct dependencies. This is a way of breaking the circular reference problem. In fact, the name CycloneDX is somehow a play on words on "circular reference", according to Steve Springett, the co-leader of the OWASP group that maintains and enhances the format. I'll also note that one great feature of CycloneDX (or CDX) is that, if you decide to move (or duplicate) a component within the overall dependency graph, all of the dependencies of the component will follow along with it.
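To make the nesting and tier ideas concrete, here is a minimal sketch in Python of how the full dependency graph in a CycloneDX JSON SBOM might be walked to assign each component a "tier" (its minimum depth below the product itself). The file name and root reference are hypothetical, and the field names reflect my understanding of the CycloneDX JSON format, so treat this as illustrative rather than definitive.

# Sketch: walk the "dependencies" section of a CycloneDX JSON SBOM and
# compute the minimum "tier" (depth) of each component relative to the
# product itself. A visited check keeps circular references from looping.
# The file name and the root bom-ref are hypothetical.
import json
from collections import deque

with open("product.cdx.json") as f:          # hypothetical SBOM file
    bom = json.load(f)

# CycloneDX lists direct dependencies per component as flat entries:
# {"ref": "<bom-ref>", "dependsOn": ["<bom-ref>", ...]}
graph = {d["ref"]: d.get("dependsOn", []) for d in bom.get("dependencies", [])}

root = bom["metadata"]["component"]["bom-ref"]   # the product itself

tier = {root: 0}
queue = deque([root])
while queue:
    ref = queue.popleft()
    for dep in graph.get(ref, []):
        if dep not in tier:                  # already-seen refs (including cycles) are skipped
            tier[dep] = tier[ref] + 1
            queue.append(dep)

for ref, depth in sorted(tier.items(), key=lambda kv: kv[1]):
    if depth > 0:
        print(f"tier {depth}: {ref}")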
Since the risk posed by each tier diminishes the farther
away it is from the first tier (i.e. a component vulnerability is less likely
to be exploitable in the product itself, the more tiers there are between the
component and the first tier), it’s certainly no tragedy if suppliers just list
the first tier of components. Getting SBOMs for second- or greater-tier
components is notoriously difficult, in part because 90% of components are open
source, so there’s no supplier to lean on for an SBOM, but also because the
supplier of the primary product isn’t the customer of, say, a second-tier
component supplier. The primary supplier therefore has to require the
first-tier component supplier to lean on the second-tier suppliers, and so on.
That becomes very difficult and time-consuming, and won’t work at all in the case of
open source components. Requiring suppliers to obtain even just the second
tier will place a huge burden on them, while yielding minimal security benefit.
Maintain readily accessible SBOM repositories, posting
publicly when required (page 244)
I’m pleased to see NIST requiring this. For several reasons,
I think posting SBOMs is a much better way of making them available than trying
to push a copy of every SBOM to every customer. Of course, many or most
suppliers may want to post the SBOMs on an authenticated portal, rather than
make them truly publicly available; other suppliers may be willing to post them
publicly. That decision is up to the supplier to make.
Incorporate artificial intelligence and machine learning
(AI/ML) considerations into SBOMs to monitor risks relating to the testing and
training of datasets for ML models (page 244)
There are certainly ways in which AI can help in creating
SBOMs, especially in addressing the “naming problem”. However, not much work has
been done so far on this idea, and none at all by the NTIA Software Component Transparency
Initiative. Requiring it for EO compliance is unhelpful and places an
unnecessary burden on the suppliers.
Develop risk monitoring and scoring components to
dynamically monitor the impact of SBOMs’ vulnerability disclosures to the
acquiring organization. Align with asset inventories for further risk exposure
and criticality calculations. (page 244)
This is an important requirement. However, this is part of a
section that starts with “Departments and agencies, where possible and
applicable, should require their suppliers to demonstrate they are
implementing, or have implemented, these foundational SBOM components and
functionality along with the following capabilities” (the three passages above
are also part of that section).
The problem is that this passage isn’t a requirement for
suppliers; it’s a requirement for the “departments and agencies” themselves.
The supplier can’t help with these steps, and the burden of executing them
shouldn’t fall on the suppliers. So this requirement needs to be put in a different
section.
That being said, I wish to point out here that this is the only requirement in the entire SBOM section that relates to the agencies using the SBOMs. Every other requirement specifies something that the agency needs to ask their suppliers to do or provide. I hope NIST doesn’t expect the agency to simply drop an SBOM in the bit bucket as soon as they receive it – they need guidelines for how they will use it.
The above passage is simply a very high-level formulation of a subset (albeit an important one) of the uses to which a federal agency can put an SBOM, to help it manage software supply chain cyber risk. It is as if, when asked to write up guidelines for safe driving, I simply wrote "Avoid accidents".
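To make this concrete, here is a minimal sketch of one such use: checking each component in an SBOM against a public vulnerability source. It queries the free OSV database (osv.dev); the endpoint and response fields reflect my understanding of the OSV v1 API and should be treated as illustrative, and the component list is made up.

# Sketch: check SBOM components against the public OSV vulnerability
# database (https://osv.dev). The endpoint and response fields reflect my
# understanding of the OSV v1 query API; the component list is made up.
import json
import urllib.request

components = [
    {"name": "jinja2", "version": "2.4.1", "ecosystem": "PyPI"},   # hypothetical entries
    {"name": "lodash", "version": "4.17.15", "ecosystem": "npm"},
]

for c in components:
    query = {"version": c["version"],
             "package": {"name": c["name"], "ecosystem": c["ecosystem"]}}
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get("vulns", [])
    for v in vulns:
        print(f'{c["name"]} {c["version"]}: {v["id"]}')

An agency tool along these lines, run against its asset inventory whenever new vulnerabilities are disclosed, is roughly what "dynamically monitor the impact of SBOMs' vulnerability disclosures" would mean in practice.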
With the EO raising the bar for software verifications
techniques and other software supply chain controls, additional scrutiny is
being paid upon not just the software the vendors produce, but the business entities within a given software supply chain that
may sell, distribute, store, or otherwise have access to the software code
themselves. Departments and agencies looking to further enhance assessment
of supplier software supply chain controls can perform additional scrutiny on vendor
SDLC capabilities, security posture, and risks associated with foreign
ownership, control, or influence (FOCI).
Wow! Where to begin on this one?
First, what does it mean to say “business entities within a given software supply chain that may sell, distribute, store, or otherwise have access to the software code themselves”? Once the code is compiled – which is done by the original supplier – no other entity has access to the code, unless they first decompile it (which they're specifically prohibited from doing, as is the customer - and in any case, decompilation is hard to do, and can never be done perfectly). In other words, the first sentence of this section makes literally zero sense, and can be safely ignored.
The fact is that the supply chain between the supplier and the customer poses about zero cyber risk - in fact, usually the actual bits that constitute the software are transferred directly to the customer when they download it. If there is an intermediary, e.g. a Microsoft dealer, they simply invoice the customer and transfer the license.
This isn’t to say that some organization within the supply
chain couldn’t substitute a malicious product for a non-malicious one, or perhaps point the customer to a bogus download site that would provide them a malicious product. But that risk, like the risk with any other software download, can be addressed by confirming the digital signature and, in some cases, a hash value.
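For illustration, here is a minimal sketch of the kind of check I mean: comparing the SHA-256 digest of a downloaded installer against the value the supplier publishes. The file name and digest are placeholders.

# Sketch: verify a downloaded installer against a supplier-published
# SHA-256 digest before installing it. File name and digest are placeholders.
import hashlib

INSTALLER = "product-setup.exe"          # placeholder path
PUBLISHED_SHA256 = "0123abcd..."         # placeholder digest from the vendor's site

h = hashlib.sha256()
with open(INSTALLER, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):   # read in 1 MB chunks
        h.update(chunk)

if h.hexdigest() == PUBLISHED_SHA256:
    print("Digest matches the published value.")
else:
    print("WARNING: digest mismatch - do not install.")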
Regarding FOCI (foreign ownership, control, or influence), I
understand there are certainly legitimate reasons for raising FOCI concerns
when the issue is hardware components of an intelligent device. But for
software components, FOCI has almost no meaning. Consider the following:
1.
As I’ve already said, 90% of software components
are open source. Open source communities don’t restrict their contributors to
particular countries – in fact, they don’t even have a good way to verify in
which country a participant resides. One small proprietary study of open source
components in a major software product, which I saw recently, showed that every
one of the communities behind those components included at least one member from Russia, China or Iran. This is probably
typical of open source components in general. Should you remove all software from your
network that has “tainted” open source components like these? My guess is that you may end up severely limiting your
software procurement options if you do.
2.
And even commercial software is developed with
teams from all over the world. As Matt Wyckhouse of Finite State pointed
out last year, Siemens has 21 R&D hubs in China and employs about 5,000
Chinese staff there. Does this mean you should dump all your Siemens
software, as well as the Siemens hardware that runs Siemens software?
3.
And since open source software usually lists its
contributors by email address, how do you know that someone from say Cuba isn’t
using a Gmail account?
This isn’t to say that, in cases where there might be some
reason to be suspicious of the components in a particular product, it isn’t
worthwhile for a software customer to conduct a FOCI investigation. It
also isn’t to say that, if there is a service available that examines FOCI of
software components, it wouldn’t be worthwhile for the customer to utilize that
service. But it would certainly be a bad idea to make a rigid rule like “We
will not purchase any software that contains any open source component that
includes a participant with a Chinese email address.”
Include flow-down requirements to sub-tier suppliers in
agreements pertaining to the secure development, delivery, operational support,
and maintenance of software. (page 245)
In my opinion, this requirement will cost suppliers a lot of
money and effort, while producing very little benefit for their customers, the
federal agencies. Let’s start with the fact that 90% of components are open
source, and the software supplier has literally zero control over the terms
under which they acquire open source components.
Regarding the remaining 10% of components, the product supplier does usually have agreements
with commercial component suppliers, and they can certainly ask those suppliers
to agree to meet certain criteria for development, delivery, etc. But what if a
component supplier refuses? Their product is usually low cost and is used in
thousands of products. If one of their customers, a final product supplier who
is acting at the request of a federal agency, asks them to meet these criteria,
they might well decide that the additional paperwork, etc. outweighs whatever
they make from that customer.
The final product supplier will then have to tell the federal
agency (their customer) that they’re unable to reach an agreement with the one
component supplier. What should the agency do at this point? Drop this supplier
altogether? Give them an ultimatum to replace the component made by the
recalcitrant component supplier with one from a more amenable component supplier,
despite the fact that components are seldom direct replacements for one another,
and there will often not be any alternative component supplier?
The sentence quoted above needs to be qualified with “If
possible”, “If applicable”, etc. It shouldn’t be left as an absolute
requirement.
Automatically verify hashes/signatures infrastructure for
all vendor-supplied software installation and updates (page 245)
There can be many speed bumps in the road to automatic
verification of hashes and signatures – in fact, the NTIA Initiative has spent
endless hours discussing problems with creating and verifying component hashes,
and is still nowhere near reaching a conclusion (there might be a daylong technical
meeting next year on this subject, where the participants would be locked in a
room – difficult to do for a virtual meeting, I’ll admit – until they come to
some sort of agreement). The words “try to”, “where technically
feasible”, “where possible”, etc. need to be included in this passage as well.
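To show what "where technically feasible" might look like in practice, here is a minimal sketch of automated verification for the case where a vendor ships detached GPG signatures alongside its installers and updates. The directory layout and ".sig" naming convention are my assumptions, and the vendor's public key is assumed to already be in the local keyring.

# Sketch: automatically verify detached GPG signatures for every
# vendor-supplied package in a download directory. Assumes each package
# ships with a ".sig" file and that the vendor's signing key has already
# been imported into the local keyring. Paths and naming are assumptions.
import pathlib
import subprocess

DOWNLOAD_DIR = pathlib.Path("/var/vendor-downloads")   # assumed location

for sig in DOWNLOAD_DIR.glob("*.sig"):
    package = sig.with_suffix("")                      # e.g. foo.tar.gz.sig -> foo.tar.gz
    result = subprocess.run(
        ["gpg", "--verify", str(sig), str(package)],
        capture_output=True,
    )
    status = "OK" if result.returncode == 0 else "FAILED"
    print(f"{package.name}: signature {status}")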
Ensure suppliers attest to and
provide evidence of utilizing automated build deployments, including
pre-production testing, automatic rollbacks, and staggered production
deployments (page 245)
It is good if a supplier has all of these capabilities in
place, but not all of them do. Is this an absolute requirement, meaning that no
federal agency should ever procure software from a supplier who doesn’t meet
each of these conditions? I for one believe that the incremental security
benefit that would be gained from leaving this as an absolute requirement is
more than offset by the fact that agencies would have to abandon suppliers they
may be quite comfortable with, or whose product is unique in meeting a
particular need of theirs. This requirement should be qualified with “Where
possible”, as in the previous cases.
Lines 8461 to 8482 (page 246)
As unaccustomed as I am to saying nice things, I want to
applaud NIST for these lines about open source software controls. Note that
these controls need to be applied by software suppliers, so federal agencies
should request that their suppliers implement them for the open source
components in their software. Everything NIST recommends in this section is
good, but here are my two favorites:
Apply procedural and technical controls to ensure that
open source code is acquired via secure channels from well-known and
trustworthy repositories
This is a real problem. There have been a number of attacks
in which a software supplier has downloaded a component containing a backdoor
in place of a legitimate component – sometimes through typosquatting, sometimes
through substitution inside the repository, and sometimes through other means –
and then included it in their software. The supplier needs to take care when
downloading.
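One simple procedural control a supplier could add to its build pipeline is checking requested component names against an internal allowlist and flagging near misses that might be typosquats. Here is a minimal sketch; the allowlist and requested names are made up.

# Sketch: flag requested open source packages that aren't on an internal
# allowlist but closely resemble an approved name (a possible typosquat).
# The allowlist and requested names here are made up.
import difflib

APPROVED = {"requests", "urllib3", "cryptography", "numpy"}

def check(name: str) -> str:
    if name in APPROVED:
        return f"{name}: approved"
    near = difflib.get_close_matches(name, APPROVED, n=1, cutoff=0.8)
    if near:
        return f"{name}: NOT approved - suspiciously close to '{near[0]}' (possible typosquat)"
    return f"{name}: not on the allowlist - needs review"

for requested in ["requests", "reqeusts", "leftpad"]:
    print(check(requested))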
Supplement SCA source code-based reviews with binary
software composition analyses to identify vulnerable components that could have
been introduced during build and run activities
There are a number of problems having to do with software
security and SBOMs that don’t have any neat solution; this is one example. This
requirement points to the problem caused by the following facts:
1.
There’s general agreement that the best time to
create an SBOM for a software product is as part of the final build.
2.
However, when you go to install that product, a
number of other items often get downloaded with it: installers, separate
libraries that are utilized by the software (so-called “runtime dependencies”),
a container, etc. If any of these items contains vulnerabilities, those
vulnerabilities can do as much damage as if the product itself contained them.
The SBOM created during the final build won’t show any of
these components (or their vulnerabilities).
3.
A binary software composition analysis tool will
identify many of the components in these other items and alert the supplier to
vulnerabilities in them. When they learn about these vulnerabilities (hopefully
prior to actually providing the full download to customers), the supplier
should patch them or take some other measure to remove the risk posed by the
vulnerabilities. Otherwise, the supplier’s promise that
their software is free of vulnerabilities when downloaded is really made with
their fingers crossed, since the full package that the user runs may contain vulnerabilities
anyway.
4.
You might well ask, “If the SBOM for the
installation files is more complete, why doesn’t the supplier provide that to
their customers, instead of the SBOM created from the final build?” Good
question. The answer is that the SBOM for the installation files has to be
developed through binary analysis, and that’s inherently more prone to errors
and omissions than an SBOM generated from source code during the build – so in some cases, providing the
full installation SBOM might cause more confusion than it would alleviate. The
supplier should use a binary analysis tool to create an SBOM for all of the
installation files and use that to identify and remediate vulnerable components
in those files. However, the decision whether or not to share the
latter SBOM with a particular customer should be a joint decision by the supplier
and the customer (many customers will be happy to receive just the build-time
SBOM, since that one is essential). A sketch of how the two SBOMs might be
compared appears after this list.
5.
What should the agency do if they ask the
supplier for an “installation SBOM”, and the supplier refuses? If they’re
involved in a high-assurance use case, they might want to procure their own binary
analysis tool and create the installation SBOM on their own; or they might hire
a consultant to do that. In non-high-assurance cases, the agency may decide they will accept the risk that comes with not knowing the components in the installation
files, and therefore not being able to learn about their vulnerabilities.
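To make the comparison mentioned in item 4 concrete, here is a minimal sketch that loads a build-time SBOM and an installation SBOM (both in CycloneDX JSON) and lists the components that appear only in the latter. The file names are hypothetical, and the "components" field reflects my understanding of the CycloneDX format.

# Sketch: list components that appear in an installation SBOM but not in
# the build-time SBOM, i.e. installers, runtime dependencies, container
# contents, etc. File names are hypothetical; the "components" field
# reflects my understanding of CycloneDX JSON.
import json

def component_set(path):
    with open(path) as f:
        bom = json.load(f)
    return {(c.get("name"), c.get("version")) for c in bom.get("components", [])}

build = component_set("product-build.cdx.json")       # SBOM from the final build
install = component_set("product-install.cdx.json")   # SBOM from binary analysis of the installation files

for name, version in sorted(install - build):
    print(f"Only in installation SBOM: {name} {version}")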