Tuesday, May 30, 2023

How will the NVD survive? Will it survive?

 

The discussions of weighty fiscal matters in DC over the weekend coincided with my working on specifying what exactly we (the SBOM Forum) would like to see happen to the National Vulnerability Database (NVD).

Of course, we all wish it well, but I have to admit that, given the spending compromise that emerged over the weekend and the fact that most of the burdens of that compromise will fall on the non-defense, non-entitlement parts of the budget, the outlook for the NVD isn’t brilliant (to paraphrase “Casey at the Bat”).

At best, the NIST NVD team will limp along for the next two years in their current uncertain state. As I mentioned in a previous post, for the second year in a row, the NVD team at NIST hasn’t yet – as of May – received the full amount they were supposed to get starting in January of this year. Last year, they didn’t receive it until July. Will they get their full funding at all this year, given that the whole funding picture for NIST has probably changed for the worse because of the upcoming freeze? Because of the delay in receiving their funds, six of the 25 NIST employees allocated to the NVD already have to work “temporarily” on other projects that do have full funding.

Fortunately, I don’t think the NVD will go away. But it’s also certain that its travails aren’t going to end anytime soon, meaning any significant improvements in the NVD may be pushed off the table for now.

I believe that the CPE problems described in Section A of this document can be addressed regardless of funding levels for the NVD – but that’s only because I believe CVE.org may be able to step in and do things that the NVD itself can’t do. (CVE.org runs the CNA program and takes in all of the CVE reports, which then “flow down” to the NVD. CVE.org is a mixed government/private group, funded by CISA/DHS, that oversees the work of the contractors from MITRE who operate the CVE program. See the diagram of how the NVD works at the bottom of this post.) However, the other problems described in that document probably can’t be addressed without additional funding from some source.

And that’s where the global database idea comes in. This weekend’s agreement means that the NVD’s funding is likely to be precarious at best for many years. But, even if the NVD didn’t have serious problems now, we would be remiss if we weren’t at least starting to lay the foundation of a global database. There are three big drivers for this:

1.      The EU’s proposed Cyber Resilience Act, which will almost certainly be approved by the European Parliament this year or next and will start to come into effect almost immediately upon approval. The CRA will require that all software and hardware products with “digital elements” sold in the EU (no matter who sells them or where they’re made) follow a strict set of cybersecurity standards. The standards will require protecting against software vulnerabilities, not just at the time of sale but thereafter, as long as the product is supported. The potential penalties are huge.

2.      The CRA is likely to provide a big boost to SBOM production by suppliers worldwide. Even though it doesn’t directly require that SBOMs be distributed to customers, it makes clear that SBOMs are a good security practice. For example, if a product is responsible for a breach and the supplier can’t produce an SBOM that describes the product at the time of the breach, it probably won’t go well for them.

3.      Even without the CRA, suppliers are making heavy use of SBOMs now (although they’re hardly distributing them to their customers at all – mostly because the customers aren’t asking for them). Managing component vulnerabilities identified through SBOMs requires a huge amount of vulnerability database usage. As I’ve mentioned often, just the single open source software composition analysis tool Dependency-Track is now used over ten million times every day to look up vulnerabilities for components identified in SBOMs. And this is almost entirely due to use by developers; once software customers start receiving and using SBOMs for component vulnerability management purposes, their usage of vulnerability databases will ultimately dwarf usage by developers.
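To make concrete what one of those lookups involves, here is a minimal sketch of a single component query against the NVD’s public REST API (the CPE name shown is just an illustrative example; real tools like Dependency-Track batch and cache queries like this at enormous scale):

```python
# A minimal sketch of one component-vulnerability lookup against the NVD's
# public REST API (v2.0). The CPE name below is only an illustrative example.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cves_for_cpe(cpe_name: str) -> list[str]:
    """Return the CVE IDs the NVD associates with a single CPE name."""
    resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

print(cves_for_cpe("cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"))
```

Multiply that one call by every component of every SBOM of every product an organization uses, refreshed daily, and the scale of the coming demand becomes obvious.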

In other words, a tsunami of vulnerability database usage is on the way and it’s going to be worldwide, not just in the US or even North America. There will need to be a truly global vulnerability database (in fact, that’s the name we’re now working with – GVD) that is supported worldwide and run by a truly international organization. It will certainly draw on what’s best about the NVD (including the 200,000+ CVE records on which the NVD is based), but it will need to be built on a new foundation.

Sounds far-fetched? Have you ever heard of or used DNS? More to the point, do you think there are many days when you don’t – usually without thinking about it – use DNS hundreds or thousands of times? If you think there are, you’re probably wrong.

DNS was dreamed up by Paul Mockapetris in 1983, but do you know who the first domain registrar was? The NTIA – yes, the same group (part of the Dept. of Commerce) that played a big role in developing the SBOM concept. They ultimately turned the registrar job over to the Internet Assigned Numbers Authority (IANA), which continues to manage DNS today – with a $100 million annual budget. The IANA (part of ICANN) is a truly international organization, funded by governments and private organizations worldwide. The GVD will probably need to be housed by a similar organization, either an existing or a new one.

At the moment, the need is to talk with organizations and governments worldwide to learn what they would like to see in the GVD, and what they might contribute to fund it. Armed with that knowledge, we can then develop a realistic design, both for the near and longer terms. The SBOM Forum is talking with a security-focused international nonprofit now about funding this Discovery and Design effort.

How the NVD works


                   By Rube Goldberg - Originally published in Collier’s Magazine, September 26, 1931

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


Tuesday, May 23, 2023

Is it time to “declare victory and leave” SBOMs?


The “fall of Saigon”, April 29, 1975

In 1966, as the US rapidly increased its involvement in the Vietnam War (or the “American War”, as it’s known in Viet Nam), Senator George Aiken made a statement about what the US could do to extricate itself from the conflict - without being seen to abandon our allies, the South Vietnamese government. While the statement was long and nuanced, it got shortened in the popular imagination to something like “declare victory and leave”. And in 1975, we did exactly that – minus the “declare victory” part, of course.

I was reminded of that statement while participating in editing one of the documents being produced by the CISA SBOM working groups (in this case, the document was about VEX). I pointed out to one of the leaders of the VEX group in a comment that there are currently no consumer tools that can produce what I consider to be the Holy Grail of the SBOM vulnerability management use case: a continually updated list of exploitable component vulnerabilities in a particular version of a software product, based on analysis of SBOM and VEX documents. That means that the main tool available for using SBOM and VEX documents currently is a JSON reader, which will simply display the content of the document in English.[i]
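To be concrete about what that Holy Grail tool would do, here is a minimal sketch, assuming a CycloneDX-style JSON SBOM and a simplified VEX structure (the VEX field names and the lookup function are my illustrative assumptions, not a definitive schema):

```python
# A sketch of the "Holy Grail" consumption step: join an SBOM's component
# list with VEX statements, keeping only the exploitable vulnerabilities.
# The VEX layout and lookup() are simplified assumptions for illustration.
import json

def exploitable_vulns(sbom_path, vex_path, lookup):
    with open(sbom_path) as f:
        sbom = json.load(f)
    with open(vex_path) as f:
        vex = json.load(f)

    # VEX statements asserting a vulnerability is NOT exploitable in this product
    not_affected = {
        (s["vuln_id"], s["purl"])
        for s in vex["statements"]
        if s["status"] == "not_affected"
    }

    results = {}
    for comp in sbom.get("components", []):
        purl = comp.get("purl")
        if not purl:
            continue  # the naming problem in action: nothing to look up
        vulns = [v for v in lookup(purl) if (v, purl) not in not_affected]
        if vulns:
            results[purl] = vulns  # what's left is the exploitable list
    return results
```

Run continuously as new SBOMs, VEXes and CVEs appear, this is the continually updated exploitability list; today, no consumer tool does even this much.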

What bothered me was that this person – a leader of more than one of the CISA working groups – thought it was OK that most software users aren’t going to be able to do anything more with SBOMs and VEXes than read them in English.

Of course, it’s nice to be able to read machine-readable documents in English – don’t get me wrong. However, I consider that an admission of defeat. If the only thing the supplier is going to achieve by producing a machine-readable SBOM or VEX document (whether in JSON or XML) is to let their customers read it in English (or whatever language they know how to read), why are they bothering to put out the document in the first place? Instead, the supplier could implement the following high-tech solution for getting SBOM/VEX information out:

1.      Put the SBOM or VEX information (regarding a particular product/version) in a single English Word™ document;

2.      PDF the document;

3.      Create an email addressed to all customers of the product; and

4.      Hit Send (this is the part I sometimes forget).

Can you remember that? I can assure you it’s a lot easier than what you have to do to create a machine-readable SBOM or VEX, only to send it out to your customers and have them read the document in a JSON reader as if it were a…well, a PDF file.

Why is it that we don’t have consumer tools that ingest SBOMs and VEX information and output lists of exploitable component vulnerabilities in a product/version? Is it really that hard to do this? The answer is no, it isn’t that hard to do it in principle. However, in practice there are two serious problems that are gumming up the works for anyone who’s even thinking of creating a consumer tool today. These are the naming problem and confusion over VEX. While neither of these problems can be “solved” in the near term, they can both be improved upon to the point where I think real SBOM/VEX consumption tools will be possible by the end of this year.

However, note I said “tools” will be possible, not “consumer tools”. The latter have to be brought to the level at which they can be used by people (such as me) who don’t particularly enjoy having to figure out tough technical problems on a daily basis – as would currently be the case with an SBOM/VEX consumer tool, due to both the naming problem and VEX confusion.

Fortunately, I believe that software users who are concerned about cyber threats lurking in components of the software products (and intelligent devices) they depend on won’t be faced with the stark choice between not knowing anything about those threats and having to devote significant amounts of time every day to troubleshooting various open source tools they’ve assembled, while trying all along to get the tools to play nicely with each other.

Instead, I think that third party service providers will appear (perhaps as transformations of existing vendors) that will a) have the expertise and time required to assemble, operate and support the open source toolchains needed to achieve the “holy grail” service I described above, and b) be able to develop a large customer base, so that the time and money required for this service can be easily amortized over many users.

I think that at least one or two of these service providers will be serving customers by the end of the year. Moreover, I believe it is unlikely that any complete consumer SBOM/VEX consumption tools will be available for the next 2-3 years. Instead, the service providers will be the only way that software users can track component vulnerabilities in the products they utilize.

Let’s have no more talk of JSON readers. They smell of defeat.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] There are multiple tools, including Dependency-Track, that can ingest an SBOM and look up vulnerabilities for components in a vulnerability database like the NVD or OSS Index. However, no tool takes the next step and utilizes VEX information (contained either in one of the two current VEX document formats, or in the VEX API that I’m pushing and hope will be available by the end of this year) to distinguish the approximately 5% of component vulnerabilities that are exploitable from the 95% that are not.

Thursday, May 18, 2023

A big IoT cybersecurity announcement is coming next month


Executive Order 14028[i], issued in May 2021, required NIST to develop a “cybersecurity labeling program” for IoT devices. The program was never intended to consist of regulations, but rather of a certification issued by one or more private sector organizations, sponsored and encouraged by the federal government. While there has been a lot of uncertainty about when the program will be announced and what it will include, it seems that uncertainty will soon go away.

In June, there will be a White House announcement about the program. Here is some information I currently have on it:

1.      The basis for all certifications will be NIST.IR 8425, which is discussed in detail below.

2.      It appears that the program will be run by the Consumer Technology Association (which runs the huge Consumer Electronics Show every year).

3.      Some organizations that currently offer cybersecurity certifications for IoT devices will be empowered to issue their own labels, but they will have to at least address the guidelines in NIST.IR 8425. One of those organizations is the ioXt Alliance. One of my clients, Red Alert Labs in France, is one of the small number of organizations that have been certified as assessors for ioXt.

Of course, I expect the announcement in June will provide a lot more information on the program. In the meantime, both IoT device manufacturers and IoT device users (which includes almost every business, and a good percentage of the people, on the planet) should take a look at NIST.IR 8425, which I think is a very good document. I wrote a post on it after it came out last fall, but below is a larger discussion of what’s in the document.

NIST.IR 8425 was released in September 2022. It appears to be the most recent NIST document regarding IoT device cybersecurity, at least for the near term. The NIST.IR 8259 series (NIST.IR 8259 as well as 8259A through 8259D, all developed in 2020) provides a larger set of IoT guidelines than what is included in this document. NIST.IR 8425 contains a subset of what’s in the 8259 series, selected with a lot of input from the software community in 2021.

NIST.IR 8425 is intended to be the set of guidelines for the device labeling program, but this is clearly meant to be a baseline for all IoT. It consists of selections from NIST.IR 8259A (“IoT Product Capabilities”, or technical requirements) and NIST.IR 8259B (“Product Developer Activities”, or manufacturer policies and processes). The specific selections from these two documents are listed on page 11 of NIST.IR 8425.

There are many things to like about NIST.IR 8425. One is that, unlike the earlier NIST documents on the “consumer” IoT device labeling program, this document is clearly aimed at IoT products for businesses as well as households. This is emphasized in the Abstract on page i. The first sentence states that the publication “…identifies cybersecurity capabilities commonly needed for the consumer IoT sector (i.e., IoT products for home or personal use)”. However, the second sentence reads, “It can also be a starting point for businesses to consider in the purchase of IoT products.”

NIST has recently stated that there will later be a set of IoT guidelines aimed at “enterprise” devices – presumably devices intended for large businesses and government agencies. If so, those requirements will likely also be selected from NIST.IRs 8259A and 8259B. NIST.IR 8425 should be seen as the initial step toward compliance with the “enterprise” guidelines, when and if they appear. In the meantime, 8425 provides a good set of baseline requirements.

The remainder of this post provides comments on many of the requirements and subrequirements in NIST.IR 8425. However, these are only a subset of the total requirements; the reader is encouraged to read the whole document.

IoT Product Capabilities

Asset Identification (page 5)

“The IoT product is uniquely identifiable and inventories all of the IoT product’s components.

1. The IoT product can be uniquely identified by the customer and other authorized entities (e.g., the IoT product developer).

2. The IoT product uniquely identifies each IoT product component and maintains an up-to-date inventory of connected product components.”

By “components”, NIST is referring to the four things that are part of what they call an IoT “product”. They include:

1.      The device itself

2.      Any hardware that’s required to be used with the device, like an intelligent hub

3.      Any software interface to the device, like on a smartphone

4.      The cloud backend

This requirement states that a) the “product” needs to be uniquely identifiable by users, although users are almost always just going to point to the device itself, not the cloud backend or the smartphone app (even though NIST considers these to be just as much part of the IoT product as the device itself); and b) the “product” always maintains an inventory of its components, although a footnote points out that the inventory can be installed on any of the components – such as the cloud backend.
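As an illustration of what requirement 2 implies in practice, here is a hedged sketch of an inventory that one component (say, the cloud backend) might maintain for the whole product; the field names are my assumptions, not NIST’s:

```python
# Illustrative sketch of the component inventory in Asset Identification
# item 2, kept on one component (e.g., the cloud backend). Field names are
# assumptions for illustration, not part of NIST.IR 8425.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ComponentRecord:
    component_id: str       # unique ID, e.g. device serial or app instance ID
    kind: str               # "device", "hub", "mobile_app" or "cloud_backend"
    software_version: str
    last_seen: datetime

@dataclass
class ProductInventory:
    product_id: str         # the uniquely identifiable IoT *product*
    components: dict = field(default_factory=dict)

    def check_in(self, rec: ComponentRecord) -> None:
        """Keep the inventory up to date as components connect."""
        rec.last_seen = datetime.now(timezone.utc)
        self.components[rec.component_id] = rec
```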

Product Configuration (page 6)

“The configuration of the IoT product is changeable, there is the ability to restore a secure default setting, and any and all changes can only be performed by authorized individuals, services, and other IoT product components.

1. Authorized individuals (i.e., customer), services, and other IoT product components can change the configuration settings of the IoT product via one or more IoT product components.

2. Authorized individuals (i.e., customer), services, and other IoT product components have the ability to restore the IoT product to a secure default (i.e., uninitialized) configuration.

3. The IoT product applies configuration settings to applicable IoT components.”

Perhaps the most important of these three items is number 2. The product needs to be capable of being restored to its secure default configuration, and both services and other components of the product need to be able to do this. While this is essential for real “consumer” devices, business users may not want this capability, since a rogue service might reset the device when it’s performing an important function. Business users will probably want to be able to turn this capability off, at least some of the time.

Data Protection (page 7)

“The IoT product protects data stored across all IoT product components and transmitted both between IoT product components and outside the IoT product from unauthorized access, disclosure, and modification.

1. Each IoT product component protects data it stores via secure means.

2. The IoT product has the ability to delete or render inaccessible stored data that are either collected from or about the customer, home, family, etc.

3. When data are sent between IoT product components or outside the product, protections are used for the data transmission.”

Many people, when they think of protecting data stored in an IoT product, just think of protecting data stored in the device itself. However, it’s equally important to protect data stored in the cloud backend, the smartphone app, and any other components. It’s also important to protect data as it’s transmitted between the components.

Interface Access Control (page 8)

“The IoT product restricts logical access to local and network interfaces – and to protocols and services used by those interfaces – to only authorized individuals, services, and IoT product components.

1. Each IoT product component controls access to and from all interfaces (e.g., local interfaces, whether externally accessible or not, network interfaces, protocols, and services) in order to limit access to only authorized entities. At a minimum, the IoT product component shall:

a. Use and have access only to interfaces necessary for the IoT product’s operation. All other channels and access to channels are removed or secured.

b. For all interfaces necessary for the IoT product’s use, access control measures are in place (e.g., unique password-based multifactor authentication, physical interface ports inaccessible from the outside of a component).

c. For all interfaces, access and modification privileges are limited.

2. Some, but not necessarily all, IoT product components have the means to protect and maintain interface access control. At a minimum, the IoT product shall:

a. Validate that data shared among IoT product components match specified definitions of format and content.

b. Prevent unauthorized transmissions or access to other product components.

c. Maintain appropriate access control during initial connection (i.e., onboarding) and when reestablishing connectivity after disconnection or outage.”

Often, a device manufacturer will consider it protection enough to simply instruct users to disable any physical or logical interfaces they don’t need. But that doesn’t allow for the fact that users of some devices may need to utilize certain interfaces at some times but not at others. Supporting that requires being able to control access interface by interface.

The fact that this requirement is here shows that this document isn’t just for pure “consumer” products. The owner of a baby monitor shouldn’t need to be able to control access at the interface level. But a business might very well need to do that.
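To make “control access by interface” concrete, here is a hedged sketch (the names and structure are my own illustration) of an interface that can be enabled only when needed and only for authorized entities:

```python
# Illustrative sketch of per-interface access control: each interface can be
# enabled or disabled at runtime and restricts access to authorized entities.
# The names and structure are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Interface:
    name: str                          # e.g. "ssh", "local_serial", "mqtt"
    enabled: bool = False
    authorized: set = field(default_factory=set)

    def allow(self, entity: str) -> bool:
        return self.enabled and entity in self.authorized

ssh = Interface("ssh", authorized={"maintenance-service"})
ssh.enabled = True                     # opened only for a maintenance window
assert ssh.allow("maintenance-service")
assert not ssh.allow("unknown-party")
ssh.enabled = False                    # and closed again afterward
```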

Software Update (page 9)

“The software of all IoT product components can be updated by authorized individuals, services, and other IoT product components only by using a secure and configurable mechanism, as appropriate for each IoT product component.

1. Each IoT product component can receive, verify, and apply verified software updates.

2. The IoT product implements measures to keep software on IoT product components up to date (i.e., automatic application of updates or consistent customer notification of available updates via the IoT product).”

Item 1 is important. It says that each component – the device itself, the cloud backend, and the smartphone app – must be able to verify and apply its own updates; no component should have to depend on another component to update itself, since communication among the components can never be taken for granted.
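As a rough illustration of the “verify” step, here is a minimal sketch using an Ed25519 signature check (via the Python cryptography library); key distribution, rollback protection and the apply step are omitted, and a real updater needs all of them:

```python
# Minimal sketch of the "verify" step in a component's own update path.
# Key distribution, rollback protection and the apply step are omitted;
# the point is that each component can verify an update on its own.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def update_is_authentic(vendor_key_bytes: bytes, update_image: bytes,
                        signature: bytes) -> bool:
    """Return True only if the update image was signed by the vendor key."""
    key = Ed25519PublicKey.from_public_bytes(vendor_key_bytes)
    try:
        key.verify(signature, update_image)  # raises if the signature is bad
        return True
    except InvalidSignature:
        return False
```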

Cybersecurity State Awareness (page 10)

“The IoT product supports detection of cybersecurity incidents affecting or affected by IoT product components and the data they store and transmit.

1. The IoT product securely captures and records information about the state of IoT components that can be used to detect cybersecurity incidents affecting or affected by IoT product components and the data they store and transmit.”

This might seem like an odd requirement, since it says the product needs to capture information about the state of IoT components but doesn’t even attempt to say what that information is. A footnote at least gives some examples:

Information about the state of IoT components that would be useful to detecting cybersecurity incidents is highly contextual to the IoT product, its components, and its operation. In most cases, temporal information such as time-stamp or location data (digital or physical) should be captured. Software and hardware version and operational state (e.g., known fault or exception thrown) may help detect cybersecurity vulnerabilities (e.g., specific software or hardware may have known vulnerabilities). Cybersecurity state information may also contain records of commands and actions received by and executed by the IoT product or other data that is meaningful to the IoT product and how it works, and are therefore useful to detecting incidents.

Clearly, this is something that needs to be configurable by the integrator of the device, although probably not by the end user. 
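Drawing on the footnote’s examples, here is a hedged sketch of what one captured state record might look like; the schema is my assumption, not part of NIST.IR 8425:

```python
# Illustrative sketch of a captured "cybersecurity state" record, using the
# kinds of fields the footnote suggests (timestamp, versions, operational
# state, commands received). The schema is an assumption, not NIST's.
import json
from datetime import datetime, timezone

def state_event(component_id, software_version, op_state, last_command=None):
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "component": component_id,
        "software_version": software_version,
        "state": op_state,             # e.g. "ok", "fault", "exception"
        "last_command": last_command,  # helps reconstruct an incident later
    })

# e.g., append state_event("sensor-01", "2.4.1", "fault") to a secure log
```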

IoT Product Non-Technical Supporting Capabilities

Until this point, all of the items have been “IoT Product Capabilities”. The remaining items are all classified as “IoT Product Non-Technical Supporting Capabilities”. That is, these relate to the capabilities of the manufacturer or integrator of the product, not to the capabilities of the product itself.

Documentation (page 11)

This is a long section and lists over 30 individual topics that need to be addressed in documentation by the manufacturer. Of course, if any of these topics is not relevant to the type of device in question, the manufacturer should indicate that the topic is not applicable. Below are topics I consider particularly relevant:

Item 1.a.vi. (page 11): “(The documentation must address) The IoT product developer’s assumed cybersecurity requirements for the IoT product.”

The developer’s “assumed requirements” should include everything in this document, but they could always include more. For example, NIST.IR 8425 includes a subset of the requirements in two larger documents: NIST.IR.8259A and NIST.IR.8259B. A manufacturer that wants to go beyond what NIST.IR 8425 requires might decide to follow most or all of the 8259A and B requirements.

Item 1.d.iv. (page 12): “(The documentation must address) Consideration for the known risks related to the IoT product and known potential misuses.”

This is very important. It implicitly acknowledges that, with IoT products being put to such diverse uses, there’s no way that a single set of requirements can address all known risks to the product. On the other hand, NIST wants manufacturers to acknowledge any product-specific risks they know about, even if they’re not fully mitigated. By including this requirement, NIST is requiring the manufacturer (or integrator) to point out product-specific risks to users. (My opinion is that, if the manufacturer has already mitigated a risk, they shouldn’t feel obligated to point it out, unless there remains some residual risk that needs to be mitigated. And any serious risk shouldn’t be pointed out in documentation, but perhaps in private verbal communication with users. Of course, if there really is a serious risk in the product, for which there is currently no mitigation available, why is the manufacturer putting the product on the market at all, except because…no, that can’t be…they prioritize making money over the security of their customers! What a ridiculous idea…)

Item 1.d.v. (page 12): “(The documentation must address) Secure software development…practices used.”

It would be wise for developers to say they’ve followed NIST’s new Secure Software Development Framework (SSDF), although they will need to be prepared to show how they’ve followed it. As with any NIST framework, the manufacturer needs to perform a risk assessment, which determines which of the items in the framework they will address and which they will not address. However, the developer needs to be able to state that they have at least considered every item in the framework, and provide a reason why they have not addressed any particular item.

Item 1.d.v. (page 12): “(The documentation must address) …supply chain practices used.”

This phrase refers to the practices the manufacturer follows with respect to their own suppliers – i.e., the developers of the hardware and software included in the device. It would be best if the manufacturer could indicate they follow a particular framework of supply chain practices. NIST’s primary supply chain security framework is NIST SP 800-161, which is far too detailed for this application. However, the NIST Cybersecurity Framework (see below) describes a good set of supply chain security practices, which might be appropriate in this context.

Item 1.d.vi. (page 12): “(The documentation must address) Accreditation, certification, and/or evaluation results for cybersecurity-related practices.”

This item refers to general cybersecurity-related practices, not specifically supply chain practices. The most widely used cybersecurity framework in the US today is the NIST Cybersecurity Framework (CSF). The CSF is something like a security maturity model: it delineates four tiers of cybersecurity practice. The organization initially needs to determine which tier it falls into, and then develop a plan to achieve the next tier. To establish conformance with this requirement, the manufacturer could provide a copy of the initial assessment and the tier to which they were assigned, as well as their documented plans to move to the next tier.

Item 1.f.i. (page 12): “(The documentation must address) Steps taken during development to ensure the IoT product and its product components are free of any known, exploitable[ii] vulnerabilities.”

This requirement seems to indicate that an IoT product (or any standalone software product) can never be secure unless all “known, exploitable” vulnerabilities have been patched. In fact, there is general agreement in the software community that having some number of unpatched low-severity vulnerabilities does not in itself pose much, if any, risk to software products or intelligent devices. Therefore, requiring suppliers to patch every vulnerability of any severity level is likely to lead to a misallocation of resources, which will ultimately result in higher prices for consumers.

A better way to word this requirement would have been, “Steps taken during development to ensure the IoT product and its product components are free of any serious, exploitable vulnerabilities.”

Item 1.f.ii (page 12): “(The documentation must address) The process of working with component suppliers and third-party vendors to ensure the security of the IoT product and its product components is maintained for the duration of its supported lifecycle.”

This requirement is much more understandable when it is restated using the active, not passive, voice: “The device manufacturer needs to implement and maintain a program to work with component suppliers and third-party vendors to ensure the security of the IoT product and its product components.”

New vulnerabilities are identified all the time by security researchers and sometimes by hackers. Thus, no matter how rigorously the supplier followed secure software development practices, new vulnerabilities can be identified in any product at any time. The manufacturer needs to acknowledge this fact by implementing and maintaining a vulnerability management program, which includes identification of new vulnerabilities in software or firmware installed in their product, and providing patches for any of those vulnerabilities deemed to be serious.[iii]

Item 1.f.iii (page 12): “(The documentation must address) Any post end-of-support considerations, such as the discovery of a vulnerability that would significantly impact the security, privacy, or safety of customers who continue to use the IoT product and its product components.”

The previous requirement addresses the issue of vulnerabilities that occur during the period that the IoT product is under support by the manufacturer, meaning the manufacturer takes responsibility for patching significant vulnerabilities. However, that responsibility ends when the product is no longer under support. This is a normal practice in the software and intelligent device worlds: The manufacturer makes it clear that, as of a certain date, they will no longer provide updates or patches for a particular version of a software product or device. After that date, customers who do not upgrade to a supported version (or move to a competitor’s product), yet continue to use the product, will run the risk that a serious new vulnerability will be discovered in the product, for which the manufacturer will not provide a patch. As long as the manufacturer provides sufficient advance notification[iv] of the end of support, their customers cannot claim they are being treated unfairly.

This requirement takes into consideration the fact that, no matter how much advance notice a manufacturer provides, there will always be some customers who will not upgrade. What will happen if a serious vulnerability is discovered after the end of support? Often, even if one of the “holdout” customers now wants to upgrade, they will run the risk of suffering a serious breach before they can complete the upgrade. While the manufacturer is free to adopt any policy they wish in this situation, they might decide to develop a patch for the vulnerability and offer it to the holdout customers for a fee. They also may offer a post-end-of-life support contract, which usually carries a hefty fee.[v]

Item 1.g.iv. (page 13): “(The documentation must address) Policy for disclosing reported vulnerabilities.”

Disclosing reported vulnerabilities is always a difficult question to address in a policy. Of course, if the manufacturer has developed a patch for the vulnerability, there is no question they should disclose the vulnerability’s presence in their product, although this should be done at the same time they announce the availability of the patch. By the same token, there is not much question that the manufacturer should hold off on disclosure if they expect to have the patch available in a short amount of time, perhaps a day.

However, the discussion becomes more difficult if the manufacturer has not yet developed a patch for a serious vulnerability that they know is present in their product. The policy will depend on the answers to several questions:

1.      Has the vulnerability been publicly disclosed and is there general awareness of it – at least in the hacker community?

2.      Has the vulnerability begun to be actively exploited in the general community?

3.      Has it been revealed that the vulnerability is present in the product in question, or have active exploits of the vulnerability in that product already begun?

If the answer to question 1 is No, it would not be a good idea for the manufacturer to reveal that their product is vulnerable. If the answer to question 1 is Yes but the answer to question 2 is No, it would also probably not be a good idea for the manufacturer to reveal that their product is vulnerable (of course, in both of these cases, the manufacturer should expedite development of the patch as quickly as possible).

If the answers to questions 1 and 2 are Yes but the answer to 3 is No, the question of whether the manufacturer should announce the vulnerability in their product is not clear-cut. The manufacturer needs to determine a) how long it will take them to develop the patch, and b) how quickly exploits of the vulnerability are spreading. The more imminent the availability of the patch, the more the manufacturer should lean toward not revealing the vulnerability.

However, if exploits are spreading very rapidly and it is possible that one or more of the devices will be attacked before the patch is available, the manufacturer should probably announce that the device is vulnerable and recommend a mitigation, including perhaps removing the device from customer networks pending the patch being available.

If the answers to all three questions are Yes, there is not much question that the manufacturer should announce that their device is vulnerable, and – unless there is an effective alternative mitigation already available – suggest that customers remove the device from their networks, pending availability of the patch.
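The decision logic above can be summarized in a short sketch; the inputs and the “imminent patch” threshold are my own judgment calls, not rules from NIST.IR 8425 or anywhere else:

```python
# A sketch of the disclosure decision logic described above. The parameters
# and the one-day threshold are illustrative judgment calls, not rules.
def should_disclose(publicly_known: bool, actively_exploited: bool,
                    product_known_affected: bool, days_until_patch: int,
                    exploits_spreading_fast: bool) -> bool:
    if not publicly_known:
        return False           # question 1 is No: don't reveal; expedite patch
    if not actively_exploited:
        return False           # question 2 is No: same answer, same urgency
    if product_known_affected:
        return True            # all three Yes: announce and suggest mitigation
    # Questions 1 and 2 Yes, question 3 No: weigh patch timing against spread
    return exploits_spreading_fast and days_until_patch > 1
```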

Product education and awareness (page 16)

This section addresses a deficiency in documentation for many software products and IoT devices: The documentation describes the cybersecurity capabilities of the device but says nothing about why those capabilities should be used and how the customer might decide the proper policies and settings.

The section makes clear that the IoT device manufacturer should do more than just recite technical details about their product in their documentation, especially when it comes to security capabilities. They need to discuss the following:

1.      Why the customer might want to change the default configuration settings.[vi]

2.      The different options for access control (including password settings), and considerations for deciding appropriate policies.

3.      Securely managing data on the device and its components (especially the cloud component).

4.      Maintaining security of the device and its components, including after support from the manufacturer has ended.

5.      Managing vulnerabilities in the device and its components.



[i] Section 4, subsections (s) and (t)

[ii] The word “exploitable” is important. As IoT device manufacturers start to issue software bills of materials (SBOMs) to their customers regularly, the customers are likely to be alarmed to discover many vulnerabilities, when they look up the components (dependencies) listed in an SBOM in a vulnerability database. However, probably over 95% of vulnerabilities in software components do not pose a risk to the user of the device, because an attacker would never be able to “exploit” them to penetrate the device; this is often because of how the component was incorporated into the product by the developer. In the near future, it is hoped that a “VEX” (Vulnerability Exploitability Exchange) API will be available to inform users that particular vulnerabilities found in software components are not exploitable in the product itself.

[iii] The definition of “serious” is up to the individual supplier and should be based on their customers’ requirements. For example, if an IoT device is mainly used in high-security military applications, almost any exploitable vulnerability may be considered serious enough to warrant patching. However, as already mentioned, in most IoT use cases, it is not necessary to patch every vulnerability – just those that have a high likelihood of occurring, those that would have a serious impact if they did occur, or both.

[iv] What constitutes “sufficient notification” will vary by the type of product and other circumstances. Generally, notification should be provided at least a year in advance, and sometimes longer.

[v] Sometimes, a manufacturer will consider a new vulnerability to be so serious that they will make an exception and “backport” a patch to the out-of-support versions of their product. Of course, a customer should never count on this happening.

[vi] Of course, the manufacturer should never reveal the default settings themselves.

Saturday, May 13, 2023

From the NVD to the IVD


In Dale Peterson’s weekly newsletter yesterday (which you should subscribe to, if you don’t already), he linked to my most recent post on the National Vulnerability Database (NVD) – although I think he may have wanted to link to the previous NVD post, which discussed his concern. He said he was “…disappointed when Tom Alrich wrote NIST gave lack of funding as a reason why they can’t improve the NVD”; he further wrote that the NVD is “…a key part of extracting value from what would be a big investment in collecting and managing SBOMs.”

To address the second part of Dale’s sentence first, I completely agree with his implication: until issues with the NVD are addressed, SBOMs will never become widely used by organizations whose primary business isn’t developing software (although even narrow use would be an improvement over where SBOMs stand now among non-developers). Software developers are already producing and using SBOMs very heavily, but almost entirely for their own product risk management purposes. SBOMs are hardly being distributed to non-developer organizations at all, and they’re being actively used by even fewer such organizations. Thus, fixing problems with the NVD, especially regarding CPE names, is without doubt one of the two “showstopper” problems blocking SBOM distribution and use (the other is debilitating confusion over VEX).

However, regarding the first part of Dale’s sentence, I don’t agree that the leaders of the NIST team that runs the NVD (which the SBOM Forum met with two weeks ago) said they “can’t improve” the NVD due to “lack of funding”. They’re always trying to improve the NVD, although one can argue that some of their current efforts could achieve better results with more forethought, and especially with more dialogue with NVD users. Some of the most important changes we’re proposing for the NVD – those having to do with CPE names and the need to supplement them with purl identifiers – can be achieved in principle with very little expenditure of money, or even time. See this document for a description of our proposal regarding software naming in the NVD.
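For readers who haven’t seen one, here is what a purl looks like next to a CPE name (a short sketch using the packageurl-python library; the package shown is an arbitrary example):

```python
# A quick illustration of purl (Package URL) identifiers, which our proposal
# would use to supplement CPE names in the NVD. Requires packageurl-python;
# the package shown is an arbitrary example.
from packageurl import PackageURL

# A purl is derived deterministically from where the package actually lives,
# so anyone can construct it without waiting for a name to be assigned:
purl = PackageURL(type="npm", name="lodash", version="4.17.21")
print(purl.to_string())   # pkg:npm/lodash@4.17.21

# Contrast with a CPE name, which must be created and matched by hand:
cpe = "cpe:2.3:a:lodash:lodash:4.17.21:*:*:*:*:*:*:*"
```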

It shouldn’t surprise anyone that all organizations are constrained by the funds available to them; I doubt any organization has ever accomplished everything it could have accomplished, had more funds been available. NIST is no exception to that rule, although they’re especially constrained now, since they haven’t even received all the money that was allocated to them in the omnibus spending act that Congress passed at the end of last year. The same thing happened in 2022, when the NVD wasn’t fully funded until July. Welcome to Washington, DC.

I – and probably everybody else from the SBOM Forum who was in the Zoom meeting with NIST – was fully expecting them to point out that they were constrained by available funds. That was just a polite way of saying, “If you want to propose some grand projects for us that are going to take more funds than what we’re supposed to have now, don’t even think about it, unless you have a good idea where the money will come from. If we just got each year’s funds at the beginning of the year rather than in the middle, that alone would be a cause for great rejoicing.”

When the SBOM Forum started discussing the NVD a year ago, we just focused on problems, and what specific steps need to be taken to fix those problems. However, in the last few weeks we’ve moved on to think about these facts:

1.      The NVD is already by far the most heavily used software vulnerability database in the world;

2.      The needs the NVD addresses are rapidly expanding for many reasons, not the least of which is the great expansion of SBOM use by software developers and the even greater expansion that will likely occur once the NVD’s problems regarding CPE names can be addressed; and

3.      Even though some countries are considering building their own vulnerability databases, at least partially modeled on the NVD, it makes literally zero sense for them to do this. Vulnerabilities are universal; they don’t care about country borders. It’s much better to have one excellent international vulnerability database (IVD – remember, you saw that acronym here first) than ten mediocre country-specific databases (i.e., mini-NVDs).

On the other hand, it makes all the sense in the world that different countries would join together to build a common vulnerability database that transcends what any one country could develop on its own. That’s what we need to focus on.

The problems can all be solved fairly easily. As Dave Wheeler of the Linux Foundation points out repeatedly in our meetings, the NVD doesn’t have any problems that haven’t already been solved many times in other contexts. The real question is, “What could the NVD be if it were a truly international database, focused on serving the needs of the software development and security communities worldwide?” If we answer that question, funding won’t be an issue. Governments and private organizations worldwide will stand in line to help fund a really world-class solution.

As the old Civil Rights movement song says, “Keep your eyes on the prize. Hold on.”

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Thursday, May 11, 2023

The Procurement use case for SBOMs

There’s lots of discussion of use cases for SBOMs, and one that always comes up is Procurement. At first it seems obvious that, when your organization is evaluating software or intelligent device suppliers for a procurement (whether you’re comparing multiple suppliers or simply deciding whether the supplier you’re already using merits retention), it would be great to get an SBOM (or even better a set of SBOMs from the past year or so) from each supplier. That way, you can find out what risks apply to the product, especially open vulnerabilities. This will certainly help you decide which supplier you prefer, or whether your current supplier should be retained.

Unfortunately, as happens often when you look at how to use SBOMs, the best-laid plans quickly run aground on data problems. One example is the naming problem, which in my opinion is one of the two “showstopper” problems preventing SBOMs from being widely (or even narrowly, to be honest) distributed and used by organizations whose primary business isn’t developing software.

I’ve asked several major developers what percentage of component names in an SBOM – that was generated by an automated process without manual enhancement – can be found in the NVD; of course, if you can’t learn about vulnerabilities that apply to the components in an SBOM because you can’t find them in a vulnerability database, then the SBOM doesn’t do you a lot of good. One of those developers initially said the number was 20%, but when another expressed skepticism about that number, they agreed it’s actually…under 5%.

In other words, if an automatically-generated (usually as part of the software build process) SBOM lists 150 components (about the average number, according to Sonatype), you will be able to find just eight or fewer of those in the NVD, on average, without taking some extraordinary (and decidedly “manual”) steps to increase the percentage.

Of course, suppliers, consulting companies, etc. have all adopted various measures – comparing with other databases, doing manual searches, looking through GitHub commits, fervent prayer, etc. – to increase the percentage, so that the SBOMs will be usable for their work. But will the supplier that you’re evaluating take all those steps before they provide you the SBOM (assuming they even know what steps to take)?

And suppose the supplier has something to hide…Wouldn’t it be easy to leave the useless names in the SBOM they provide for your evaluation even if they have better ones, so that you won’t be able to learn about the many vulnerabilities that apply to the components in their product?

Another problem: almost all vulnerabilities listed for a product in the NVD are reported by the supplier itself. What if the supplier you’re evaluating happens to be one that has never reported a single vulnerability? Tom Pace of NetRise pointed out to the SBOM Forum about a year ago that he had found one supplier of network devices used in critical infrastructure environments, and even by the US military, that had never once reported a vulnerability to the NVD for any of the roughly 50 devices it makes.

Tom examined the firmware in those devices. By identifying components in the firmware and knowing what vulnerabilities were present in those components, he determined that, in just one of those devices, the number of open vulnerabilities was a little more than the zero that had been reported. In fact, the number was 40,000, which I believe is 40,000 more than zero (if my math serves me correctly).

The worst part of the story, though, is that if a user that was evaluating the supplier looked up that device in the NVD, they would get a message that said, “There are 0 matching records.” This would also be the message the user would receive if the supplier diligently reported vulnerabilities for their products, but this product just didn’t have any. In other words, the message that indicates they haven’t reported any of the 40,000 vulnerabilities they have is the same message you would see if the product didn’t have any vulnerabilities at all.

Suppose you were someone who didn’t know how the NVD works (or doesn’t work, in this case), and you had been told to find the product (among the set of products being evaluated) with the smallest number of vulnerabilities. Suppose that the other suppliers had one or two vulnerabilities, but this one didn’t seem to have any at all. Would you be suspicious and investigate all 49 of that supplier’s other products, to find out if they had any vulnerabilities reported for them? And if you didn’t find any, would you call up the supplier to find out why they weren’t reporting vulnerabilities at all?

Of course not. You wouldn’t even think of doing that. Instead, you’re much more likely to circle the supplier with (supposedly) zero vulnerabilities on the list you hand back to your boss, and say, “Here’s the company you should buy from.” And then you’ll go home, so you’re not late for your daughter’s soccer game.

In the post I linked above, I said:

Steve Springett, who I’ve written about a number of times and who is tasked with helping 2,000 coders produce secure software in his day job, said last week that his company deliberately favors products for which there are a lot of reported vulnerabilities. They consider this a sign that the supplier is diligently seeking out vulnerabilities, not waiting for their product to be hacked.

Thus, in the hypothetical case I just described of the person who was anxious to select a supplier so they wouldn’t be late for a soccer game, that organization would literally have been better off if they’d told that person to identify the supplier with the most reported vulnerabilities in the NVD, not the least.

In the above case, the hypothetical evaluator selected a product that appeared to be vulnerability-free, but which was in fact anything but that. However, the opposite case also poses danger. Suppose you are assigned to evaluate suppliers of a big, complicated system like an electronic health record system for a hospital or an energy management system for an electric utility. Systems like this were typically first developed decades ago, and they have been added to and improved over the years.

But, as I described in this recent post, an SBOM will usually make systems like this appear to be riddled with vulnerable components, when in fact they are very well maintained and likely to be just as secure as other products that don’t have all of these “vulnerable” components. The problem is that these systems contain ancient components that can’t be removed without bringing the whole product crashing down, even though the components are well protected by the practice of “backporting” patches. Yet a standard SBOM analysis will seem to indicate that those ancient components, which are usually unsupported by their suppliers and therefore appear to have lots of vulnerabilities, are clear evidence that the supplier of the product doesn’t care one bit about security.

Cases like this show that it’s always a big mistake to evaluate a supplier’s security based solely on technical details that could be misleading, including details regarding SBOMs. In fact, you should always require that a supplier you’re evaluating fill out a questionnaire that will elicit information about their actual practices regarding software security (and get some references on whether the supplier actually follows those practices). Then, if a technical analysis seems to indicate a big problem (like the one just described), but the questionnaire answers indicate that the supplier has a good vulnerability management program, you might decide not to dismiss them out of hand because of the technical issue.

Someday, a lot of the problems that currently prevent SBOMs from providing a complete picture of a supplier’s security practices will be solved. You will still need to rely on questionnaires, but you’ll also be able to trust the information an SBOM gives you to provide a good practical evaluation of those practices. However, that isn’t the case today.

In fact, I can think of just one item, related to an SBOM, that will provide unequivocal evidence of the supplier’s security understanding and practices: You can ask them for an SBOM. If they ask you what that is, you should dig deeper to find out how much they really know about software security. However, that’s about all you can do at the moment.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Saturday, May 6, 2023

The NVD train is moving. Time to get on board!

 

Last Friday, I described in a post the meeting that the SBOM Forum had that day with the team at NIST that runs the National Vulnerability Database (NVD). The most interesting development in that meeting, from my perspective, was that not only were the NIST people interested in making improvements to the NVD like those we’re requesting, but they suggested to us that we investigate forming a public-private partnership of some sort to help implement these improvements – since they admitted they can’t consider any suggestions now that would require additional funding on their part.

We honestly hadn’t been thinking in those terms, but the idea of a partnership makes a lot of sense. The fact is that the NVD’s shortcomings are costing the worldwide software industry – on both the software developer side and the software user side – many millions of dollars every year, if not every day. Moreover, they’re seriously inhibiting the distribution of SBOMs to, and their use by, organizations whose primary business isn’t developing software (which I call “end user organizations”). Once we have determined what we want to accomplish, as well as what is feasible, we shouldn’t have a lot of trouble marshalling whatever resources are required.

One area we’re starting to discuss is especially interesting. A UK developer named Anthony Harrison, who is an active member of the SBOM Forum, recently pointed out some important facts:

1.      The NVD is by far the most widely used vulnerability database worldwide.

2.      Currently, even though there is heavy use of the NVD in Europe and Japan (and growing use in other parts of the world), every bit exchanged between a user in, say, Germany and the servers that house the NVD (based in the DC area) must travel across the Atlantic Ocean. Performance and reliability in Europe and Asia could be greatly improved by implementing some local presence such as a content delivery network (although other technologies could achieve the same purpose – this problem has been solved many times before, for much larger databases).

3.      Because their citizens are increasingly using the NVD and noticing the performance problems, governments are feeling pressured to implement their own vulnerability databases. The governments of the UK and Japan, as well as others, are already preparing tenders (American translation: RFPs) to create their own national databases.

4.      As SBOMs start to be widely used by end user organizations, the performance problems will only increase. Currently, SBOMs are being heavily used by software developers to identify and manage vulnerabilities in products they’re developing. In fact, just one open source tool is being used over 300 million times a month (or, if you will, 10 million times a day) to search for vulnerabilities present in the components in an SBOM – although that use is almost entirely by developers. When end user organizations start using SBOMs en masse, these numbers will seem laughably small.

It would be a literal tragedy if several major governments felt they had to create their own national vulnerability databases, simply because their citizens were telling them that was the only way they’ll be able to get reasonable performance for their vulnerability searches. If anything is universal, it’s software vulnerabilities. The vulnerabilities faced by a software company in Japan are almost the same as those faced by an end user organization in France. There should be no need for multiple countries to have their own national vulnerability databases.

Meanwhile, what would happen if those countries didn’t implement their own databases and instead invested just a fraction of what they would have spent on them in improving the NVD? Of course, that would require NIST to take responsibility for improving the NVD’s performance and reliability worldwide, as well as for fixing the many problems that led our group to approach NIST in the first place. My guess – optimistic fellow that I am – is that it would be eminently possible to structure a deal where all the countries concerned would individually invest far less money, yet end up with a single great database that is easily accessible worldwide, versus a number of barely adequate national databases that are constantly falling behind for lack of funding.

What would it take to make the NVD a great database? I’m glad you asked. In last week’s post, I provided the list of NVD improvements we were looking for when we met with NIST. We have now expanded that list and made it available in a Google Doc. We invite anyone to comment and enter changes (changes will initially appear as “suggestions” pending approval; if we decide not to include yours, we’ll let you know why). We look forward to seeing your suggestions!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Monday, May 1, 2023

Is this my last post on the supplier liability question? I certainly hope so…


This is my fifth post dealing with a proposal that I find appalling: the proposal by Kemba Walden, Acting National Cyber Director, that suppliers be presumed liable for software breaches. The two most notable of my four previous posts are here and here. In those posts, I provided multiple reasons why this is a terrible idea, but here’s an analogy:

1.      Suppose someone were to propose that, because the driver on the right has the right of way at a four-way intersection without a traffic light, liability for an accident at such an intersection will normally rest with the driver on the left. Sounds sensible, right?

2.      However, some malcontent might ask, “What if the driver on the right ignores the stop sign and plows into the other driver, who obeyed their stop sign? Will the latter driver still be deemed liable?”

3.      “Of course not,” the proposer might say. “Let’s amend the proposal to say that the driver on the left is liable only in cases where the driver on the right obeys their stop sign.”

4.      The malcontent agrees that’s a good idea, then asks, “What about if the driver on the right stops for their sign, but their judgment is impaired by drugs or alcohol, and they don’t even know there’s another car in the intersection?”

5.      The proposal will then be further amended to say, “The driver on the left is liable in cases where the driver on the right obeys their stop sign and is not driving impaired.”

6.      But what if the driver on the right is sober and obeys the stop sign, yet has been carried away by an emotional text exchange with his soon-to-be-ex-girlfriend and is absorbed in completing his triumphant final text – so he doesn’t even see the driver on the left as he pulls into the intersection after stopping for his sign?

7.      Of course, that will require a further change to our rule. Moreover, I’m sure you can think of at least five or ten more changes that would be needed, without breaking a sweat. I certainly could.

Of course, this will quickly become a very complicated rule. And even if the driver on the left in an accident is dead sure that the driver on the right failed to meet at least one of the ten (or whatever the number is) conditions, it’s very likely that the driver on the right isn’t going to simply agree. Will that driver be bound and gagged if they try to assert that there’s yet another condition that makes them not liable – say, that the driver on the left was clearly inebriated and didn’t even try to avoid the accident (remember, in most states failing to avoid an avoidable accident is a violation, not just causing one)?

Actually, in the United States (as well as in most other civilized countries, I would hope) the driver on the right doesn’t have to simply cave on this issue; they can contest the assertion that they’re liable. This is thanks to a recent innovation in societal governance called a trial before a judge or jury. Under this innovation (where by “recent” I mean something that came into being within the last thousand years, since its invention in medieval England), neither side in a dispute is presumed liable until a judge and/or jury has heard what both have to say (including any evidence they want to present) and made a decision.

I suggest that, even though this is obviously a very old idea and most likely wasn’t originally intended to apply to software breaches, it’s probably worth retaining – rather than asserting that liability for software breaches always rests with the software supplier, except in exceptional cases to be determined by someone who works in the White House.

Fortunately, it seems some rationality has crept stealthily into the national conversation. See the opening paragraph of this article in Nextgov (which BTW I think is a very good newsletter to subscribe to, both for insights on the federal government and on cybersecurity in general):

Biden administration officials are pushing to make technology manufacturers liable for the security of their products, but the currently divided Congress may stretch out the timeline for instituting non-voluntary solutions. In the interim, some lawmakers, experts and industry leaders have proposed the issuance of cybersecurity investment tax credits to help firms adopt enhanced cyber standards on their own. 

My reactions to this are:

1. It’s stupid to blame a “currently divided” Congress for the fact that fundamental changes to the US legal system aren’t going to go anywhere. I would hope that even a non-divided Congress would realize how damaging a liability rule like this would be. There’s almost no aspect of human behavior that would be untouched if it took hold. For example: “Ma’am, it doesn’t matter that you assaulted this man because he had threatened your child and was walking toward her. You were the one who initiated the assault, which means you’re liable.” Or: “Sir, I sympathize with the fact that you entrusted your life savings to someone who gambled them away in one night at the casino. But you signed a document saying you understood that all investing carries risk, and that means the man who took your savings is not liable for your problem.”

2. The second sentence of the article lays out a two-part solution to the issue of securing software (and that is the issue here, right? It’s not that we’re out to punish software suppliers with every penalty up to imprisonment or death, just because we feel like punishment is good for the soul? Frankly, it seems that punishment is really the end goal of this proposal, with secure software just a nice-to-have side effect).

Note that the first part of the solution is “enhanced cyber standards”. I’m fine with that, although I’ll point out that no agency of the federal government – other than the military, the intelligence agencies, the Nuclear Regulatory Commission and the FDA (and the FDA only for medical devices) – currently has any power to regulate software or intelligent device suppliers, except on safety grounds. Maybe there will be such an agency in five years, but I’m sure it won’t happen any earlier than that (and anyone who thinks the agency is needed should start advocating for it now, since I have never even heard it proposed).

The second part of the solution is “cybersecurity investment tax credits”. I’m all for these, although they should apply to both software suppliers and users. In other words, I think suppliers and users alike are currently underinvesting in software cybersecurity protections, since the risks have escalated rapidly in recent years (for example, think of how two developments – the SolarWinds attacks and the widespread losses due to ransomware – have significantly increased cyber risk for all organizations, large and small, but disproportionately for the small ones).

Rather than trying to “solve” this problem by bankrupting whichever organization we have decided is liable for a breach, how about providing organizations with a positive incentive to follow guidelines like NIST’s SSDF or the NIST Cybersecurity Framework? Sure, we’ll have to forgo some tax revenue, and that will cost the Treasury some money. But we all bear the costs of cybersecurity breaches, and the full costs of any breach (to all parties affected) are almost never recovered in a court of law or from an insurance policy.

And there’s another reason why targeted tax credits will be much more effective than trying to change the US legal system in order to punish software suppliers for…you know…having the temerity to develop software: tax credits can actually be enacted within the next few years, rather than being a feel-good proposal with literally zero chance of enactment. And that should count for something.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.