Thursday, June 29, 2023

CycloneDX 1.5 arrives!

June 26 was a watershed day for the “SBOM industry” (if I may be so bold as to declare this an industry). On that day, two important things happened, both of which I discuss in this new post on FOSSA’s blog:

1.      CycloneDX 1.5 was released.

2.      The CycloneDX Authoritative Guide to SBOM was published.

Which of these do I think is more significant? While CDX 1.5 represents a solid advance for what was already an excellent SBOM standard, the Guide (which I hadn’t even heard was in development, since – sniff! – Steve Springett never mentioned it to me) is simply the single best document on SBOMs that I’ve ever read.

I’ll point out that this isn’t a technical guide to CDX 1.5 (here’s the technical guide) and doesn’t even mention anything about v1.5, v1.4, etc. Instead, it introduces SBOMs, their use cases, and their important features. Of course, all the examples in the Guide are from CycloneDX, and the topics were clearly developed with CDX in mind; this just shows that the people who wrote the Guide, the CDX development team, ain’t no fools. But SPDX users will find the Guide very useful as well, so I recommend it to everyone.

What I especially liked in the Guide were the three items I discussed at the end of the post, under the heading “Three Important SBOM Problems and Their Solutions”. These are three of the hardest questions regarding SBOMs (the SBOM Forum just spent two meetings discussing only the first of these, and we never reached a conclusion). I’ve been wondering about all of them since the NTIA days, and none of the NTIA or CISA workgroups has ever seriously discussed them, let alone looked for a solution.

The Guide shows how each of these problems can be solved – yes, I’m saying “solved”, not “mitigated” – in CycloneDX (I assume they can be solved in SPDX as well, since the problems are certainly not specific to CDX). I had planned to start discussing all three of them in blog posts, although I wasn’t sure I’d find an answer. Now the Guide has published the answers. It seems they’re trying to put me out of business…

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Monday, June 26, 2023

Dale Peterson made me miss dinner again


Just a few hours ago, I got an email letting me know that Dale Peterson had mentioned me in a comment on LinkedIn. Since I always find Dale’s comments on my posts – both positive and negative – to be a great learning experience, I went to see what he had said. And when I saw he was commenting on a topic that I was planning to write about today anyway (but was thinking of putting off ‘til tomorrow), that was it – I had to write it now. Once I have this post up, I’ll find something to eat for dinner.

Dale had posted a video of a presentation by Lindsey Cerkovnik of CISA that mentioned the naming problem – something I’ve written about a few times and will write about much more in the near future. In his post, Dale said that one of the biggest challenges to SBOMs is “naming (well, actual identity) of software and software components. Then Lindsey talks about three possible solutions, CPE, SWID Tags, and Purls, and the challenges using each of these. Where is the global identifier solution?” I couldn’t agree more with Dale’s statement about naming being one of the biggest challenges to SBOMs. In fact, I’d rank it the second most serious of the three showstopper problems (of course, there are other problems that aren’t showstoppers, but are still serious).

As always happens with Dale’s LinkedIn posts, there were a number of comments. He added one himself, which read:

I believe Tom Alrich wrote recently that NIST claimed to not have the funding or project for this mission….I keep hearing that identity is the big technical issue that isn't solved. Was hoping to get a solution in Lindsey's session. Maybe I will get some great ideas in and on stage at S4x24.

I’ll say it right off: the global identifier solution is purl. This is evidenced by the fact that, as of today, it’s hard to find a single vulnerability database (or other type of software “database”, like Google GUAC) that is not based on purl. The big exception? The NVD, which is based on CPE, an identifier developed by NIST sometime in the 2000s (long before purl appeared, of course). The big difference between the two is that CPE requires a centralized database, with names assigned by trained staff.

Purl, on the other hand, requires neither a centralized database nor trained staff. In fact, there aren’t any purl “staff” that I know of. Purl is an OWASP open source project, like CycloneDX. In fact, Steve Springett, the leader of the CDX project, is a maintainer of purl. It’s funny how Steve seems to have a hand in many of the most important developments in software supply chain security – for example, he developed Dependency-Track, which reads software BOMs and looks up vulnerabilities for their components, a little more than ten years ago. That was at least 3-4 years before the term “SBOM” even came into use.
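To make the CPE/purl contrast concrete, here’s a minimal sketch in Python. The identifier strings are illustrations I constructed in the two standard formats, not entries pulled from any database, and the parsing step assumes the packageurl-python library is installed:

```python
# CPE 2.3: a centrally assigned name with fixed colon-separated fields
# (part:vendor:product:version:update:edition:language:sw_edition:
#  target_sw:target_hw:other). Someone has to create and curate this entry.
cpe = "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"

# purl: built directly from where the package lives (here, Maven Central),
# so anyone who knows the repository coordinates can construct it.
purl = "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"

# Parsing a purl with the packageurl-python library
# (pip install packageurl-python):
from packageurl import PackageURL

parsed = PackageURL.from_string(purl)
print(parsed.type, parsed.namespace, parsed.name, parsed.version)
# -> maven org.apache.logging.log4j log4j-core 2.14.1
```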

Of course, CDX can be used to produce many types of BOMs, the latest being MLBOM (machine learning BOM), which debuted as part of CycloneDX 1.5 today (literally!). And Dependency-Track is going stronger than ever, being used over ten million times a day to look up a component from an SBOM in a vulnerability database.

I don’t doubt that in 2011, CPE was the state of the art for software naming. And the main challenge to CPE – open source repositories that would proliferate like rabbits over the following decade or two – was still barely on the horizon. However, CPE hasn’t worn well in recent years, given that challenge. So, when Philippe Ombredanne conceived of purl, its advantages quickly became obvious. Rather than repeat them here, I’ll point you to the post on purl I referenced just above.

You can also read the paper that the SBOM Forum wrote on the naming problem in the NVD last September. The NVD is just one source of the naming problem, but the fact that it’s now the most widely used vulnerability database in the world, and that it uses a very problematic identifier – see pages 4-6 of our paper – makes it by far the biggest locus of the problem.

And that paper leads me to the real topic of this post (I usually get around to the topic at some point in the post): the fact that the SBOM Forum held our second meeting with the NIST NVD team last Friday. In our first meeting, the idea of a public-private partnership to enhance and update the NVD came up. This new meeting was to discuss what the NVD team had learned within NIST about PPPs, and their ideas about how such a partnership might be structured.

The NIST team is going to release a video about this within a couple of weeks (Note from Tom 7/23: It seems they're late releasing the video, and the SBOM Forum is now guessing the video won't be out until early August. In other words, the whole NVD Consortium effort – which is the proposed name of the public-private partnership that NIST has in mind for the NVD – has already been set back by a month. I'll let you know when there are more developments in this story), so I won’t try to steal their thunder. But I will say they’re very much interested in working with private industry partners from all over to fix problems in the NVD and make it a much more robust database. The video will outline their general ideas for the program and ask for any comments or questions (purely informal). They will then mull over the comments they get and post concrete plans in the Federal Register at the end of the summer. And knowing that things in the federal government take time (I was astounded to hear this, of course. I’ve always thought of the federal government as a nimble, agile dancer, able to pivot on a dime and go in a completely different direction 😊), they say they’ll be happy if the program is running by the end of the year.

So, look for the video. When it comes out, I’ll of course post a link and write about what they say in this blog. I will point out, though, that they have been reading our paper, which I referenced above. They have decided they will implement purl in the NVD, although there are several other important moving parts that have to be put in place as well. Fortunately, the biggest of those – the CVE 5.1 JSON spec – is underway now, I believe (CVE is operated by CVE.org, an independent board funded by CISA, which is of course part of DHS. The NVD is part of NIST, which is in the Dept. of Commerce. All of the NVD’s current funding comes from DoC, through NIST. Like most people, I always thought CVE was part of NVD, but that’s far from being the case).

So if you’re in private industry (say, a software developer or tool vendor) and you’ve been complaining about the NVD for a long time, you’ll now have a chance to contribute to the solution – or at least the mitigation – of the NVD’s problems. But keep in mind that there’s a much bigger goal behind all of this, even though it’s not attainable in the near future.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Friday, June 23, 2023

What are the new due dates for software supplier attestations?


There has been confusion regarding the postponement of the date when the software security attestations by suppliers are due under Executive Order 14028, as interpreted by OMB. The attestations were originally due this month, under last fall's OMB memo M-22-18. Most software developers that sell to the federal government will need to fill out these attestations. The recent OMB memo M-23-16 provides a new timeline for when the attestations are due. Unfortunately, the new memo isn’t exactly a model of clarity, to say the least. The relevant paragraph is:

This memorandum modifies the deadlines by which agencies must collect attestation letters. Agencies must collect attestations for critical software subject to the requirements of M-22-18 and this memorandum no later than three months after the M-22-18 attestation common form released by the Cybersecurity and Infrastructure Security Agency (CISA) (hereinafter “common form”) is approved by OMB under the Paperwork Reduction Act (PRA). Six months after the common form’s PRA approval by OMB, agencies must collect attestations for all software subject to the requirements delineated in M-22-18, as amended by this memorandum. 

From this, I’ve derived the following rough timeline: 

  1. CISA releases their approved version of the attestation form. The comment period for the original form won’t close for a week or two. Then it will probably take at least 3 months before the CISA technical staff approves a revised version. Given that this form is likely to be very controversial, with a lot of pressure put on CISA from software suppliers and device manufacturers, this might well be an underestimate.
  2. CISA lawyers approve the form. I strongly doubt CISA will be able to release the form without the lawyers’ approval (heck, I wouldn’t be surprised if the CISA lawyers have to approve every change to the lunchroom menu). From what I’ve seen so far about getting those lawyers’ approval for SBOM and VEX documents, it will take them about 2 months to approve the form. Thus, I'm guessing CISA will take 5 months at a minimum to develop and approve the new attestation form.
  3. OMB reviews the form and approves it under the Paperwork Reduction Act. I initially listed this as a minimum of 2 months, but someone in the government with experience in this area said the OMB PRA reviews usually take at least 4 months. So we’re now at 9 months minimum.
  4. Agencies collect attestations from suppliers of “critical software”. This occurs 3 months[i] after OMB approval, so now we're at 12 months minimum.
  5. Agencies collect attestations for all software which is “subject to the requirements delineated in M-22-18…”  This is 6 months after OMB’s PRA approval date (and 3 months after the deadline for critical software), meaning that the deadline for attestations for other software will be 15 months from now, at a minimum.
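Spelled out as arithmetic, the timeline looks like this (a sketch only; every duration below is my estimate from the steps above, except the two three-month deadlines, which come from M-23-16 itself):

```python
# Cumulative timeline sketch. All durations are rough estimates in months;
# only the two "+3 month" deadlines come from M-23-16 itself.
steps = [
    ("CISA technical staff approve a revised attestation form", 3),
    ("CISA lawyers approve the form", 2),
    ("OMB reviews and approves the form under the PRA", 4),
    ("Attestations due for critical software (M-23-16: +3 months)", 3),
    ("Attestations due for all covered software (M-23-16: +3 more)", 3),
]

elapsed = 0
for step, months in steps:
    elapsed += months
    print(f"month {elapsed:>2}: {step}")
# month 12: critical software deadline; month 15: all covered software
```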

This is quite a long time. What could be done to shorten it? Of the numbers above, the only one I can really comment on is CISA's development of the new attestation form. Unfortunately for CISA, being told to develop an attestation form for the NIST Secure Software Development Framework (SSDF) is something like being told to develop a perpetual motion machine, and a procedure for squaring the circle for good measure. The SSDF – like everything else that NIST puts out – was developed as a risk management framework and certainly not a compliance framework.

Thus, the SSDF includes no information on what would constitute "compliance" with any of its provisions, or what criteria might determine whether a provision applies to a supplier at all (since, when a risk management framework like SSDF is developed, the developers of the framework assume the organizations that follow it won’t be compelled to address provisions they believe don’t apply to them. That’s how risk management works). This and other questions will all need to be answered by CISA before the form is ready.

I'm sure CISA will be pressured to put in measurable compliance parameters, since only with those will federal agencies be able to determine whether or not a supplier has provided a valid attestation. However, each of these parameters is certain to be quite controversial. For example, consider the provision “Separating and protecting each environment involved in developing and building software” in the current form. If that were to survive in the new form, CISA would need to:

1.      Define “environment”. Presumably, it doesn’t refer to whether the devices that build the software are located in the desert vs. a big city (although that could certainly be part of the calculation). More importantly, what constitutes the border of this environment? Unless that’s clearly defined, the software developer might have to provide these protections to every device that’s on any of its networks, even if they’re properly segmented from each other.

2.      Define what measures constitute “Separating…each environment”. Does that mean just separating it from other networks? Does it mean the development network needs to be air-gapped from the rest of the world? And if that’s too drastic, what is at least an “adequate” level of separation?

3.      Define what measures constitute “protecting each environment”. Obviously, a single network firewall provides a good deal of protection, which is why just about every network on the planet has one today. Is that adequate protection? And if it is adequate, how does it have to be configured? If every rule reads “any/any”, is that enough? Certainly not! And if the single firewall isn’t enough protection, what is?

Those are just the first three questions that come to mind. I’m sure that, as they get answered, they will raise other questions, e.g. whether the internet routers are protected against DNS attacks. Some of these might be irrelevant, but the CISA staff won’t want to take any chances with this form, since if they leave out an important risk, they’ll be criticized for not being "tough enough" on suppliers (of course, some suppliers will always say CISA is being too tough, no matter what it proposes. There simply isn't any happy medium).

Of course, I’m not recommending that any supplier wait a year before they create their attestation(s). They should all start on it as soon as the form is released by CISA (i.e. in five months, according to my estimate above). OMB isn’t going to make any substantive changes to the form, even if they have objections – they’ll send it back to CISA for remediation. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Of course, the text says “no later than” three months, but I highly doubt the agencies are all going to start demanding attestations from their suppliers before that deadline, since the suppliers are likely to just laugh and point out that they have three months, and say they need every minute of it – which will probably be true.

Wednesday, June 21, 2023

Why don’t we just have fun for a year or so?

The CISA SBOM-a-rama last week was quite good; I attended it virtually. The meeting started with presentations by three people leading SBOM programs in their industries: Jon Meadows of Citi (financial), Jennings Aske of NY Presbyterian Hospital (healthcare), and Charlie Hart of Hitachi Automotive (autos). They all discussed their industry’s experience with SBOMs and expressed a lot of confidence that SBOMs will help make their software more secure, given some time (speaking of time, there was general agreement that all three of them should have been given more of it for their presentations. I’m sure they – and maybe others – will have more time in the next SBOM-a-rama, which might be this fall).

However, there was something else that all three of them had in common: none of them pointed to any actual use of SBOMs in their industry. And even though all three of them are in leadership positions for proofs of concept in their industry, none of the proofs of concept is currently actively exchanging SBOMs (the first healthcare PoC exchanged SBOMs starting in 2018; a second healthcare PoC ran in 2020 and 2021; the third PoC started this year and is on a temporary hiatus, but promises to return).

So, what was different about the three presenters? Both Jon and Jennings had a positive attitude, yet were clearly disappointed that they were encountering serious problems in their PoCs. In Jon’s case, the problems have to do with the “quality” of the SBOMs they’ve received, as well as the naming problem, which is serious but may be on the road to at least a partial solution. In Jennings’ case, the problems are uncertainty about how to approach VEX, and issues regarding how to integrate SBOM and VEX data with existing asset and vulnerability management systems.

But Charlie was very different. He was quite upbeat, and he was – dare I say it? – clearly having fun. The autos PoC, which he runs, isn’t exchanging SBOMs between suppliers and customers any more than the other two are. However, the industry as a whole (or at least a significant percentage of the companies) is conducting tabletop exercises, where representatives from automotive suppliers and OEMs get together in one place, throw a bunch of SBOMs on the table (which don’t relate directly to products made by any of the suppliers, thus avoiding any legal issues) and discuss them – why they’re good, why they’re bad, etc. If a supplier doesn’t get something right – and it’s almost guaranteed that something won’t be right – everyone can learn from their mistake, and nobody has to feel bad about it.

What allows this to happen? I haven’t talked with Charlie about this specifically, but my guess is the Autos PoC had both suppliers and end users sign an NDA that says something like (in legal language, of course), “We’re entering into this exercise with the clear understanding that we’re all learning about something new. There are still many unknowns about SBOMs and VEX information, as well as the policies and procedures that need to surround them. If we pretend any of this is well-understood practice, we’re simply fooling ourselves. More importantly, we’ll spend all our time trying not to make a mistake and we won’t learn anything. Therefore, we’re going to jointly examine publicly available SBOMs for a variety of products and see what we can learn about creating and utilizing them”.

In listening to Charlie, it occurred to me (although not for the first time) that we ought to apply the approach of the Autos PoC[i] to the whole software community (at least in the US. I’m not competent to decide what other countries should do). We ought to agree (in NDAs between suppliers and their customers. This might be a standard form or a customized one) that we’ll treat SBOMs as a big tabletop exercise, where suppliers promise customers they’ll do their best to produce usable SBOMs (perhaps for legacy products, in order not to reveal information about current products that they may not want revealed, for competitive or security reasons) and the customers promise not only not to sue the suppliers for mistakes, but to provide feedback to them that will help them improve. In a year, suppliers and their customers will decide together how this is going, and whether they want to continue the “exercise” or start moving toward an approach based more on legal guarantees.

What’s the alternative to taking this course? It’s the state we’re in now, where I don’t know of a single software or intelligent device supplier in the US (other than in the military realm, into which I have no visibility) that is providing SBOMs on a regular basis (which I define as at least with every major new version) to customers who aren’t themselves developers. At the moment, we’re learning almost nothing about SBOMs, because only experience can settle most of the big questions. Could learning at least something, and having fun while doing so, be all that bad? It would certainly be a change of pace…

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I’m sure both the Financial and Healthcare PoCs have similar NDAs in place. Given the huge uncertainty regarding the details of SBOMs and VEX now, any supplier that didn’t require end users to sign an agreement like what I just described would be asking for trouble.

Thursday, June 15, 2023

A VEX server

 

In my last post, I pointed out that VEX documents can achieve their purpose – preventing users from wasting massive amounts of their own and their suppliers’ help desks’ time – only at a huge cost: many suppliers will have to produce and distribute VEX documents (to individual IP addresses) in quantities of tens or hundreds of thousands, and in some cases millions, every day. Clearly, VEX documents are not a sustainable solution to the problem VEX was designed to address, namely identification of the small number of component vulnerabilities that are exploitable in the product itself.

My solution to this problem is for the supplier to maintain a “VEX server” containing the following information. Ultimately, the information needs to be maintained for each version of each of the supplier’s products that is in active use by customers:

1.      Vulnerabilities identified, in a major vulnerability database like the NVD, in components included in the product/version.

2.      The vulnerability database in which the vulnerabilities were identified. Normally, the supplier would only need to search one database, unless the customers require another.

3.      The exploitability status of each vulnerability: “affected”, “not affected”, “under investigation” or “fixed”. These status designations have the same meaning as in the two VEX formats.

4.      For every vulnerability with a status of “affected”, there needs to be a text field describing a mitigation for the vulnerability. Normally, the mitigation will be “apply patch XYZ found at URL.com” or “upgrade to the current version”.

5.      For every vulnerability with a status of "not affected”, there should be a machine-readable “status justification” field. If a supplier determines that the available justifications do not apply in the case of a particular vulnerability in this product/version, they may leave the field blank and instead fill in a text field describing why the vulnerability in question is not exploitable in the product. There are nine “justifications” in CycloneDX VEX, and five “status justifications” in the CSAF VEX profile. The latter are a subset of the former, although they have somewhat different descriptions. For a further discussion of status justifications, see this post.

This information needs to be updated in as close to real time as possible. The server needs to be available for 24/7 remote access by the supplier’s customers (the supplier will be able to control the product(s) for which a customer is provided access). The customers will utilize an API – embedded in SBOM “consumption” tools – to access the server.
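To make this concrete, here’s a hypothetical sketch (in Python) of what one record on such a server might hold. The field names are my own invention, not any standard schema; only the status values and the justification label come from the actual VEX formats:

```python
# Hypothetical VEX server record for one product/version. All field names
# are invented for illustration; only the status values and the CSAF VEX
# justification label come from the actual VEX formats.
vex_record = {
    "product": "ExampleProduct",
    "version": "2.5",
    "vulnerability_database": "NVD",
    "vulnerabilities": [
        {
            "id": "CVE-2023-12345",
            "status": "affected",
            # an "affected" status requires a mitigation, usually a patch
            # or an upgrade
            "mitigation": "Upgrade to version 2.6 or apply the vendor patch.",
        },
        {
            "id": "CVE-2023-23456",
            "status": "not_affected",
            # machine-readable justification (a CSAF VEX label); a free-text
            # explanation would be used when none of the labels applies
            "status_justification": "vulnerable_code_not_in_execute_path",
        },
        {"id": "CVE-2023-34567", "status": "under_investigation"},
    ],
}
```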

In other words, instead of having to produce and distribute huge quantities of VEX documents daily, the supplier will simply need to update the server whenever they know of a change in the status of a vulnerability in a particular product/version - for example, when the exploitability status of a vulnerability in the product/version changes from “under investigation” to “not affected”, or when a new vulnerability appears in a vulnerability database for a component included in the product/version.

The VEX server will not necessarily be operated by the supplier. In fact, I believe that, because of the economies of scale to be achieved by centralizing VEX server operations, third parties will arise that will operate cloud-based “VEX server farms” as a service to suppliers. The supplier will be responsible for updating VEX information for each version of each of their products that is being used by customers. However, the remaining maintenance, including maintaining access to the server by software customers of multiple suppliers as well as by the suppliers themselves, will be the responsibility of the operator of the VEX server farm.

There is another reason why I believe it is likely that third parties will operate VEX server farms for suppliers: End user organizations are likely to require access to VEX information for many products. If an organization utilizes 500 different software products or intelligent devices from different suppliers, they will much prefer to access a small number of server farms operated by a small number of service providers, rather than 500 separate servers operated by 500 different suppliers.

API access will be authenticated. While access policy is at the supplier’s discretion, normally only a current user of a particular version of a product will be able to access the API information for that product and version. An API session will be initiated by a tool that is operated either by the customer or a third-party service provider acting on their behalf. I believe the API will be developed this year by a major SBOM-related open source project team.

The tool will “follow” versions of software products and intelligent devices that are operated by the user organization, and regularly (most likely daily) use the API to download the current complete set of VEX information for each of those products. At any time, the user will be able to retrieve from the tool (or the third party service provider using a similar tool) an up-to-date set of exploitable component vulnerabilities, perhaps formatted for ingestion by a vulnerability or configuration management tool utilized by the user organization. The user will also be able to download lists of non-exploitable (“not affected") vulnerabilities, as well as those whose status is “fixed” or “under investigation”.
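Since the API hasn’t been developed yet, here is a purely hypothetical sketch of what a consumption tool’s daily retrieval step might look like. The endpoint path, token scheme, and response shape are all my inventions:

```python
# Purely hypothetical client sketch; no such API exists yet. The endpoint,
# authentication scheme and response shape are inventions for illustration.
import requests

def fetch_status(server: str, token: str, product: str, version: str) -> dict:
    """Pull the current VEX information for one product/version."""
    resp = requests.get(
        f"{server}/vex/{product}/{version}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # shaped like the vex_record sketch above

def exploitable(record: dict) -> list[str]:
    """The list the user actually cares about: 'affected' vulnerabilities."""
    return [v["id"] for v in record["vulnerabilities"]
            if v["status"] == "affected"]

# A tool would run this daily for every product/version it follows, then
# hand the "affected" list to the organization's vulnerability management
# system.
```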

In my opinion, the list of exploitable component vulnerabilities is the Holy Grail of SBOM-based vulnerability management (which I call “component-derived vulnerability management”), since this is the set of vulnerabilities that a user should actually be concerned about. This goal will never be achieved if we have to rely solely on VEX documents being distributed.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, June 11, 2023

The problem with VEX documents

 

From the beginning of the NTIA Software Component Transparency (SBOM) Initiative (“the Initiative”) in 2018, some of the private and public sector organizations represented in the workgroup meetings realized there was a serious problem that would need to be addressed before SBOMs could be distributed and used in volume. This problem was due to the fact that the majority of vulnerabilities that have been identified (in the National Vulnerability Database or another vulnerability database) in a component of a product are not exploitable in the product itself. That is, even though an attack based on such a vulnerability would probably succeed against the component on its own, it would not succeed if it were instead targeted at the product itself. For more information on this idea, see my post titled “What is the purpose of VEX?”.

To address that problem, the Initiative described (but never specified) a new document format called VEX, which stands for Vulnerability Exploitability eXchange. Why did the effort initially focus on a document format? It was most likely because VEX was meant to be a “corrective” for SBOMs, and therefore needed to “follow” the distribution of SBOM documents.

The idea behind VEX was (and continues to be) to let the user know the exploitability status of each of the component vulnerabilities they have discovered in the National Vulnerability Database (NVD) or a similar database – that is, whether the component vulnerability is or isn’t exploitable in the product itself. In a VEX document, one or more vulnerabilities are listed, as well as the status of each vulnerability in one or more versions of a product.
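As a concrete illustration, here is a minimal sketch of a single statement using field names from the CycloneDX VEX format (rendered as a Python dict; the real document is JSON). The product reference and detail text are invented for the example; the state and justification values are real ones from that format:

```python
# Minimal CycloneDX-style VEX statement (sketch). CycloneDX's analysis
# states map onto the VEX statuses discussed here: exploitable = "affected",
# not_affected, in_triage = "under investigation", resolved = "fixed".
vex_document = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "vulnerabilities": [{
        "id": "CVE-2021-44228",          # log4shell
        "source": {"name": "NVD"},
        "analysis": {
            "state": "not_affected",
            "justification": "code_not_reachable",
            # the detail text below is an invented example
            "detail": "log4j-core is bundled, but the JNDI lookup code "
                      "is removed in our build.",
        },
        # the product/version(s) the statement applies to (placeholder ref)
        "affects": [{"ref": "urn:cdx:<bom-serial-number>/1#product-a@2.5"}],
    }],
}
```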

When a component vulnerability isn’t exploitable in a particular product version, this may be due to various reasons. In some cases, it may be because the vulnerable component isn’t even included in that version (for example, the supplier has issued a VEX document saying that none of the versions of one of their products includes the log4j library, and therefore it isn’t vulnerable to the log4shell vulnerability). In other cases, it may be for another reason, like the supplier has already patched their product to remove that vulnerability (yet the vulnerable component remains in the product). Still another reason might be that the way the supplier implemented the vulnerable component in the product removed the vulnerability, either intentionally or inadvertently.

In these and similar cases, the supplier should notify their users that the vulnerability in question is not exploitable in their product, even though an SBOM for the product indicates the vulnerable component is included in the product/version. If the supplier doesn’t do this, their users may waste a lot of their own and the help desk’s time trying to track down this vulnerability, and demanding to know when the supplier will issue a patch for it. The more component vulnerabilities turn out not to be exploitable in the product itself, the more VEX documents will need to be issued, and with greater frequency.

What has been the experience with VEX so far? Given the above, many people might expect that suppliers would be issuing VEX documents to their end users (i.e., customers) in great quantities, since just about all such suppliers would be likely to feel the need to issue VEX notifications as often as possible.

However, suppliers today are not issuing VEX documents in anywhere near the volume required for VEX to be considered successful. In fact, it is likely that only a handful of suppliers are issuing VEXes to their customers at all. The fact that VEX documents aren’t being distributed in volume leads directly to the fact that SBOMs themselves are not being issued or consumed in anywhere near the volume required for them to be successful (except by developers, for their internal product vulnerability management purposes). Along with the “naming problem”, I consider this to be one of the two “showstopper” problems preventing widespread distribution and use of SBOMs.

Why aren’t suppliers providing VEX documents to their customers? When asked this question, suppliers usually state that their customers aren’t asking for VEXes. Why aren’t the customers asking for them? The customers often point to the lack of easy-to-use, commercially supported tools as the reason for this. Specifically, there are no “complete” tools that a) ingest an SBOM for a particular software product/version, b) look up vulnerabilities for the components and track them over time, including checking daily for new vulnerabilities, and c) as VEX documents become available for that product/version, utilize them to determine which of the component vulnerabilities are exploitable in the product itself and which are not. While SBOMs can in some cases be used without a complete tool, there are few if any ways that a VEX document can be used without such a tool.

However, the lack of an SBOM/VEX consumption tool does not in itself explain why VEX documents are not being distributed to end users; the lack of a tool is a symptom of the problem, not its cause. I have identified two primary reasons why VEX documents aren’t being distributed.

First, the goal of VEX seems to have been misidentified. A software user’s primary concern, with respect to software component vulnerability management, should be identifying exploitable component vulnerabilities; after all, these are the vulnerabilities the user needs to worry about. However, the three VEX documents produced by CISA so far (all of which are available at https://www.cisa.gov/sbom), as well as an unpublished NTIA document (pages 8 and 9) from 2021, all describe VEX as primarily intended to identify component vulnerabilities that are not exploitable in a software product.

The reason for this assertion is probably that it would be risky to provide a document (machine readable or otherwise) to software customers that includes a list of exploitable (i.e., unpatched) vulnerabilities. No matter how many NDAs a customer may have signed, there is a non-zero chance that any document provided to any organization will fall into the wrong hands at some point. If the document lists exploitable vulnerabilities, the consequences of this happening might be very serious.

The second reason why VEX documents are not being distributed to end users is that, in order for the documents to fulfill their intended function (i.e., identifying non-exploitable component vulnerabilities in a software or intelligent device product/version), they will most likely need to be distributed in huge quantities. In fact, a single large supplier might need to issue thousands or even tens of thousands of VEX documents every day. Moreover, large software end users might have to receive and process thousands of VEX documents every day.

The following example illustrates why the above estimates are not at all unreasonable. The example assumes the following:

1.      A version of a software product (“product/version”) has 150 components (an industry average estimate) that are listed in a current SBOM.

2.      The National Vulnerability Database (or another major vulnerability database) lists one vulnerability for half of those components, for a total of 75 vulnerabilities.

3.      On any given day, the exploitability status of at least one of those vulnerabilities changes, for example from “under investigation” to “affected” or “not affected” (there can be changes in the list of vulnerabilities as well, including a new vulnerability for one of the components being identified in a vulnerability database). Since it is important that customers of a software product learn as soon as possible when any vulnerability that affects a component changes its exploitability status, the supplier should send out a new VEX document containing at least that status change as soon as they learn of it. Thus, the supplier is likely to send out at least one VEX document for that product/version every day.

4.      A supplier needs to provide VEX information for every version of their product that is still being actively used. Even though the supplier might have previously issued hundreds of versions of a product (including patches and new builds, each of which should count as a new version), we will assume here that only ten of those versions are being actively used at any point in time. Since the supplier will issue one VEX document each day for each of those ten versions of the product, they will issue ten different VEX documents every day, for one product.

5.      Furthermore, suppose the supplier has 200 separate products (i.e., the products are separate enough to require their own SBOM. This includes different options within a single “product family”, e.g. multiuser vs. single user or English vs. Spanish). Since each of those products will require ten VEX documents a day, this means the supplier will need to issue 2,000 different VEX documents every day for those 200 products.

6.      Of course, having 200 separate products would probably put the supplier squarely in the smaller half of producers of software products and/or intelligent devices. If a supplier is five times as large and therefore sells 1,000 products, yet the other assumptions remain unchanged, this means the supplier will have to produce around 10,000 different VEX documents every day. Moreover, the very large software and device suppliers like Microsoft™ and Cisco™ will almost certainly need to produce some large multiple of that number.

 

Another consideration is that VEX documents will need to be “pushed” out to end users; it will never be enough simply to make them available on a portal. Moreover, to protect the information as much as possible, the documents should never be sent out via email; instead, they will need to be pushed directly to each customer’s SBOM/VEX consumption tool. Of course, doing this, even for 100 different VEX documents per day, will be a huge logistical challenge. But, if each of those documents needs to be sent to 1,000 customers, that means the supplier needs to send out 100,000 individual VEX documents – all to different IP addresses – every day. And if the supplier has to send out 10,000 individual documents per day (as described above) to 1,000 addresses each, that amounts to ten million individual VEX documents sent every day, mostly to different addresses. Clearly, this is not a sustainable situation.
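The arithmetic behind these estimates, spelled out (the inputs are simply the assumptions from the example above):

```python
# Back-of-the-envelope check of the volume estimates in the example.
versions_in_use = 10                 # actively used versions per product
products = 200                       # products needing their own SBOM
docs_per_day = versions_in_use * products
print(docs_per_day)                  # 2,000 VEX documents/day

large_supplier_products = 1_000      # a supplier five times as large
large_docs_per_day = versions_in_use * large_supplier_products
print(large_docs_per_day)            # 10,000 VEX documents/day

customers_per_document = 1_000       # each document pushed to 1,000 addresses
pushes_per_day = large_docs_per_day * customers_per_document
print(pushes_per_day)                # 10,000,000 individual pushes/day
```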

Given these problems, I have come to believe it was a mistake for VEX to be originally formulated as a document format. Instead, it should have been described as an internet-based service, accessible via an API utilized by an SBOM “consumption” tool (i.e., a tool utilized by software consumers for software component vulnerability management, not software suppliers – suppliers have various tools available for that now).

Under this assumption, it will be reasonably safe for a supplier to pass information on exploitable, as well as non-exploitable, component vulnerabilities to their customers’ tools (which may be operated by a third party service provider).

Most importantly, this service would save software suppliers the huge amounts of time and effort required to send out perhaps tens or hundreds of thousands of individual VEX documents every day; in fact, given those huge amounts, it is hard to see that anything other than a service like this one would even be considered.

This solution would simply require a supplier to maintain and update a server, which the supplier’s customers could access at any time via an authenticated API. The server would store the VEX information to be provided through the API, including, for every product and version being actively used, each component vulnerability being tracked and its exploitability status (“affected”, “not affected”, “fixed” or “under investigation”), as well as status justifications and mitigations. The supplier would need to update this information only when it changes.

Of course, building and maintaining this server will require some effort from the supplier (although several orders of magnitude less effort than sending out huge numbers of VEX documents every day would require!). However, it is very likely that third-party services will appear that will construct cloud-based “VEX servers”. These services will take the development and maintenance work out of the supplier’s hands; the third party will be able to utilize a single infrastructure to service hundreds or thousands of other suppliers as well. Of course, the supplier will still need to update their own data, but that is all they will have to do.

I believe that the “VEX Server” described above will enable software suppliers to share VEX information with their customers as frequently as needed (since the customer will decide how often they want to retrieve the information via the API), without imposing a huge logistics burden on the supplier. Of course, this will increase the availability and frequency of user access to VEX information, but even more importantly, it will increase distribution and use of SBOM information. This is because one of the two “showstopper” problems inhibiting widespread distribution and use of SBOMs by end users (along with the naming problem) is the fact that the document based VEX “solutions” currently being discussed are simply not realistic. I believe the solution described above is a realistic one.

Note there is a secondary VEX use case for which a document-based solution should be more than adequate. This is partly because this secondary case doesn’t carry the same urgency (and therefore the need for such frequently updated documents) as the primary use case described above. I’ll describe that case in a later post.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Friday, June 9, 2023

VEX Purpose and Use Cases

I’m pleased to report that I’ll be working with FOSSA, Inc. to develop blog posts and white papers on particularly important topics having to do with SBOM and VEX. If you don’t know FOSSA, they’re the only developer-native open source management platform. They have the broadest license inventory and vulnerabilities database available. Most interesting to me, the platform can be used to create, import, export and manage SBOMs.

I’m even more pleased to report that my first post for FOSSA, “VEX Purpose and Use Cases”, went live on their blog yesterday. It’s intended to be an introduction to the VEX concept for individuals who have some knowledge of SBOMs and vulnerability management. It includes discussions of the primary VEX use case, the fields in a VEX document, and what I see to be the future of VEX.

I recommend that you take a look at it! 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Sunday, June 4, 2023

How do you prioritize vulnerability patching when VEX is a consideration?

 

It’s safe to say that most organizations today that are concerned about patching vulnerable software on their networks have many more patches to apply than they have time and personnel to apply them. Of course, this isn’t a new problem, but it’s gotten worse. In fact, Chris Hughes published an (as usual) excellent blog post recently that mentioned “…a report from Rezilion and the Ponemon Institute showing that…more than half of organizations have more than 100,000 open vulnerabilities in their backlog…”

However, what is a new problem for these organizations is that, within the next 2-3 years, they will be receiving SBOMs regularly from at least some of their software suppliers. And they know those SBOMs are only going to make this problem worse, not better.

Why is that? To simplify the analysis, let’s say that each software product or software component will on average be found to have one vulnerability in an NVD search. If we start with the situation that almost 100% of organizations are in today – that they are not regularly receiving SBOMs for the software products used on their network – this means that, if the organization uses 1,000 products, they will usually have 1,000 vulnerabilities on their “to be patched” list. Of course, that doesn’t mean 1,000 devices to be patched. If each of those vulnerabilities is found on ten devices, there are actually 10,000 devices to be patched.

Since we’ve already stipulated that the organization is overwhelmed with their current vulnerability load, this means that 1,000 vulnerabilities is too many for them to handle at one time. Therefore, they’re probably doing what most organizations would do; performing triage on the vulnerabilities they have to patch, perhaps by breaking them up into three categories – high, medium and low priority. They will move heaven and earth to patch vulnerabilities in the high category. They’ll do their best to patch vulnerabilities in the medium category. And they’ll get around to the low priority vulnerabilities when and if they get the chance to do so, with no guarantee that they will be able to patch the lows at any time in the future.

Now, let’s assume the organization suddenly starts receiving regular SBOMs from their suppliers for one quarter of the software products installed on their network – that is, for 250 products. And let’s say the average SBOM identifies 50 components in a product, which is much less than the average of around 150. Since we’re assuming that components have on average one vulnerability, as do the products themselves (i.e., the “first party” code in the product), this means the organization now knows of 250 X 50 = 12,500 new vulnerabilities, plus the 1,000 vulnerabilities they already knew about. And, if our previous assumption that every vulnerability is found on ten devices still holds, this means there are now (12,500 + 1,000) * 10 = 135,000 devices to be patched (one hopes the organization has an automated solution for this, but, as any experienced patch management person can tell you, it’s impossible to automate 100% of patch applications).
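Here’s that arithmetic in one place (all the inputs are the assumptions I just stated):

```python
# The backlog arithmetic, using the assumptions stated above.
existing_vulns = 1_000               # one per product, 1,000 products
sbom_products = 250                  # a quarter of products now have SBOMs
components_per_sbom = 50             # deliberately below the ~150 average
component_vulns = sbom_products * components_per_sbom   # 12,500

total_vulns = existing_vulns + component_vulns           # 13,500
devices_per_vuln = 10
devices_to_patch = total_vulns * devices_per_vuln        # 135,000
print(total_vulns, devices_to_patch)
```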

Remember, we’ve already stipulated that the organization considers their current load of 1,000 unpatched vulnerabilities to be overwhelming, meaning there is no good way the security team will ever be able to patch all of them (this is because more vulnerabilities are being identified every day. By the time they patch the high priority vulnerabilities and maybe some of the medium ones, they’ll have a new batch of highs, which demands their attention before anything else). What will their security team say (that’s printable in a family blog post) when their boss lets them know that, because of this new thing called SBOMs, their number of unpatched vulnerabilities has grown from 1,000 to 13,500? And keep in mind that the above analysis makes assumptions that could turn out to be too low. The actual number could be much higher.

This shows that, once SBOMs start being distributed to end user organizations (by which I mean all organizations except those whose primary business is developing software or manufacturing intelligent devices) at the needed frequency (i.e., with every new major or minor version), the only thing that will save vulnerability management teams in end user organizations will be a strict methodology for triaging the vulnerabilities they face. While there are many such methodologies, one that a lot of organizations are using now is to triage vulnerabilities according to their EPSS scores or whether they’re in CISA’s KEV catalog. Both of these measures are based at least in part on whether actual exploits based on the vulnerability in question are taking place.

So now, the security team that thought 1,000 unpatched vulnerabilities was the workload from hell is faced with 13,500 unpatched vulnerabilities. What will they do? They’ll do the only two things they can do: a) ask for more head count, and b) re-prioritize. They might do the latter by first prioritizing the 13,500 vulnerabilities into high, medium and low; then they’ll take just the highs and re-prioritize them into high/high, medium/high and low/high. Finally, they’ll concentrate on the high/highs and perhaps a few of the medium/highs. Those are probably all they’ll ever be able to patch.[i]

What happens when we introduce VEX into this picture? You might think that, since VEX has to do with exploitability, it won’t have much to add to the EPSS score or the KEV catalog. However, if you think that, you’re wrong. This is because there are two types of exploitability. One is the VEX sense, which applies just to a single product and is binary; it answers the question: is CVE-2023-12345 exploitable in product A version 2.5? The answer is yes or no. Note that it doesn’t apply to more than one version of a product, unless the statement applies to a version range (which is currently only an option in the CycloneDX VEX format).

The second type is the exploitability discussed above, measured using (among other things) the EPSS score and KEV catalog. This is a percentage (0 to 100%) and applies to a vulnerability, not a product; indeed, this type of exploitability applies across all products.[ii] It answers the question, given the most up-to-date information on exploitation and availability of exploit code, how likely is it that CVE-2023-12345 can be used to compromise any software product containing that vulnerability?

Let’s get back to the original question: If a security team needs to prioritize the vulnerabilities it faces and just patch the most exploitable ones (in the second sense of the word), how can VEX help them do that, since it deals with a different type of exploitability? More succinctly, can VEX provide much help at all?

My answer to this question might surprise you: VEX can only play a secondary role in addressing the problem faced by the intrepid security team we’ve described above. The team needs to use EPSS, KEV, etc. to prioritize the vulnerabilities it’s addressing; VEX won’t help with that job.

However, VEX can play a secondary role that could in theory (but not in practice today, unfortunately) make the team’s problem more manageable than it would be otherwise. Remember that at least 90% (and maybe 95-97%) of vulnerabilities identified in components of a product that are listed in an SBOM will not be found to be exploitable in the product itself. If our team could learn, for every version of every software product their organization uses, which component vulnerabilities (i.e., vulnerabilities in components listed in an SBOM, that can be found in the NVD or another vulnerability database) are not exploitable in the product itself, they could then remove those product/versions from the list of items to be patched for the non-exploitable vulnerabilities.

Let’s illustrate this with an example:

1.      Suppose the security team has identified and prioritized 1,000 vulnerabilities for devices on their network, as well as 12,500 vulnerabilities for components listed in SBOMs for the software product/versions installed on those devices (as in the case we described above). They have identified 250 “high/high” vulnerabilities, based on EPSS scores and the KEV catalog. Using our estimate of ten vulnerable devices for each vulnerability, this means the team needs to patch 2,500 devices, in order to patch every instance of each high/high vulnerability.

2.      Their initial patching push for the month will focus just on the high/highs. They will start by identifying every device on their network(s) on which a software product subject to at least one of the high/high vulnerabilities is installed. They will plan to patch each of these until they have patched every one of the high/high vulnerabilities, on every device on which that vulnerability is found. In other words, when they have finished the initial phase of the effort, they will have patched every instance of every high/high vulnerability.

3.      However, what if the organization were receiving, from the supplier of each of the 1,000 software products installed on their network, a revised SBOM corresponding to at least every major new version of the product? And on top of that, what if the supplier were sending a new VEX document whenever they determined that a particular component vulnerability is exploitable (or not) in the product in which the component is included (hey, a guy can fantasize, can’t he?)?

4.      Moreover, suppose each of the organization’s software suppliers is really thorough and, within say one month of the release of a new version of a product, they have determined the exploitability status (in the VEX sense, of course) of each component vulnerability in that version.

5.      Finally, after aggregating all the VEX information it has received about each product/version installed on its network, the security team realizes that 95% of all their component vulnerabilities are not exploitable in the full product; of course, this is probably close to the average number.

6.      When the security team realizes this, there will be great rejoicing. After all, they had at first thought they’d have to patch 2,500 machines just in order to patch all the high/high vulnerabilities. However, it turns out they only need to patch 5% of those machines, or 125. A nice improvement, n’est-ce pas?

7.      Furthermore, instead of having to patch 135,000 devices in order to eliminate every vulnerability in their backlog from the network, the team “only” has to patch 135,000 * 5% = 6,750 machines. That’s still a lot, of course, but what’s most striking is that the security team is now at the point where they can seriously discuss completely eliminating their vulnerability backlog, vs. at best patching all of the high/high and medium/high vulnerabilities.
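Here’s a toy sketch of that two-stage triage: prioritize by EPSS score and KEV membership, then use VEX status to drop the non-exploitable items. The CVEs, scores and the EPSS threshold are all invented for illustration:

```python
# Toy two-stage triage: EPSS/KEV prioritization, then VEX filtering.
# All CVEs, scores and the 0.5 threshold are invented for illustration.
backlog = [
    # (cve, epss_score, on_kev_list, vex_status)
    ("CVE-2023-11111", 0.92, True,  "affected"),
    ("CVE-2023-22222", 0.88, False, "not_affected"),
    ("CVE-2023-33333", 0.02, False, "under_investigation"),
]

def high_high(epss: float, on_kev: bool) -> bool:
    return on_kev or epss >= 0.5     # invented prioritization rule

to_patch = [
    cve for cve, epss, on_kev, vex in backlog
    if high_high(epss, on_kev) and vex != "not_affected"
]
print(to_patch)   # ['CVE-2023-11111'] – the only one surviving both filters
```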

Of course, I’ve already admitted that this vision is a fantasy. Neither SBOMs nor VEX documents are being provided to end users whenever needed (an SBOM is needed at least whenever a new version of the software is released, while a VEX document is needed whenever the exploitability status of a component vulnerability has changed).

Moreover, I don’t see one changing without the other.[iii] VEX without SBOM is irrelevant, while SBOMs without VEX will never be accepted, because of the huge false positive vulnerability problem. But this isn’t a chicken-or-egg problem: after the VEX problem is solved and the naming problem is substantially ameliorated (through changes to the NVD, primarily), SBOMs will start to be distributed at the required frequency and in the required quantity. Fortunately, progress is being made on both fronts.



[i] There’s another way to triage their work, and that’s according to the importance of the device on which the software is running. For example, say that CVE-2023-12345 is at the top of the prioritization list, but it’s found on three types of devices. One is the computers used by the team that plans next month’s lunchroom meals (are there still lunchrooms?). The second is computers used in the finance department, although not for trading. The third type is devices that control the manufacturing process which brings in literally every dollar the company makes. 

It might seem like a no-brainer that even though the team would start their day by patching CVE-2023-12345, they would first patch the manufacturing machines, then the machines used by finance and lastly the menu people. However, even that isn’t a sure thing. There are all sorts of reasons why individual machines will be patched in a different order than others, which have nothing to do with prioritization by CVE or type of machine. 

[ii] Of course, in both types of exploitability, the words “other things being equal” apply. For example, if a device is isolated from the internet and located in a room that requires a retina scan and a smart key to enter, it’s likely it will be close to unexploitable in either sense. This means that in both types of exploitability, protections that need to be separately applied and will not always be in place in all environments should be ignored. 

[iii] I have given up on the idea that VEX documents will ever be released in the quantities that will ultimately be needed, which might be tens or even hundreds of VEX documents for a single product every day. This is why I’m focusing on the idea of an API by which users can query VEX information from a server maintained by a supplier, freeing the supplier from having to push out any VEX documents at all.

Saturday, June 3, 2023

To my new friends in Singapore


This blog has always had a good international readership, although most readers (i.e., those that access the posts directly or through LinkedIn, vs. email subscribers who read it in the email feed) have almost always been American. There have been a number of “surges” where one country’s readers suddenly start reading the blog; these never last more than a few days and usually don’t exceed American readership.

However, for the last six or seven days, Singapore has been the number one origin nation (by IP address, anyway) of page views on my blog. In fact, there have been at least a few days when the number of Singapore readers has been about double the maximum number of US viewers that I’ve ever received in one day.

Can someone tell me what’s going on? Obviously, this can’t be just random. But it also isn’t just that I wrote a post that hit a nerve with Singapore residents, since no post (including recent ones) has received an unusual number of pageviews.

Please drop me an email if you can explain this. However, I certainly welcome all readers!

Tom

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.