Sunday, August 28, 2022

You MUST comply with this voluntary cyber regulation! (plus, Et tu, NIST?)


You’re in for a big treat today. This post actually contains two posts, but it’s available at the usual low, low price of Tom Alrich’s blog!

Bad things happen when government agencies try to take the easy route, rather than the right one, to achieve their goals. And if the agency is trying to get private organizations to do something that they just know is the right thing for them to do, they’re even more tempted than usual to take the easy route. After all, their goals are righteous! How can anybody complain if they’re just doing everything they can to achieve those goals?

Thus, it was with a shudder of recognition that I read in the Washington Post last Friday a story about a “background[i] press briefing” conducted Tuesday evening by the White House. The briefing was titled “Improving Cybersecurity of U.S. Critical Infrastructure”. What more worthy goal could there be than that? The article included this quotation from the briefing:

These may be voluntary, but we hope and expect that all responsible critical infrastructure owners and operators will apply them.  We can’t stress it enough that they owe that to the Americans that they serve for these critical services to have more resilience.

The briefing also included this sentence:

The President is essentially saying, “We expect responsible owners and operators to meet these performance goals.  We will look to you to implement this.”

The message is quite clear: “We’re talking about voluntary regulations for critical infrastructure, but if you know what’s good for you, Mr./Ms. Critical Infrastructure Operator, you’ll comply with these regulations.”

George Orwell couldn’t have said it any better:

“War is peace.”

“Freedom is slavery.”

“Ignorance is strength.”

“Voluntary is compulsory.”

 

Et tu, NIST?

So, what are the mandatory “voluntary performance goals” that the “senior administration official”, who was probably Jen Easterly, outlined in the “background briefing”, a verbatim transcript of which was published the next day?[ii] On hearing that these goals were a NIST product, I would normally have thought they’d be what NIST always publishes: compliance frameworks (like SP 800-53 or the CSF) that don’t prescribe any particular actions, but which require federal agencies to consider the risks they face in particular areas and identify controls they can implement to mitigate those risks. At the same time, the agency is free to allocate resources among the risks in a way that will result in the maximum possible risk reduction, given the available resources.

I like that approach, since IMO it’s axiomatic that the best way to regulate cybersecurity is to require the organization to take into account all the important cyber risks they face, then decide for themselves how to allocate their limited human and financial resources so that the greatest possible amount of risk is mitigated. (I have yet to learn of any organization, outside possibly the 3-letter agencies, that has unlimited funds available for cybersecurity, or anything else, for that matter.)

However, while the “performance goals” that are listed are all individually good, they’re not presented as part of a framework, like what I just described. Rather, they’re a set of prescriptive requirements. While the organization does have some discretion in how they implement each requirement, they don’t have any discretion in whether or not they implement it – even if it makes no sense in their environment.

A good example of this is item 1.1 on page 3:

System-enforced policy that prevents future logins (for some minimum time, or until re-enabled by a privileged user) after 5 or fewer failed login attempts. This configuration should be enabled when available on an asset.

In IT environments, this might seem like a perfectly reasonable requirement. If someone can’t remember their password and fails to log in five times, they should be locked out. It may be inconvenient for them, but hopefully they’ll learn their lesson from the experience.
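Purely for illustration, here is a minimal sketch in Python of the kind of system-enforced lockout logic that item 1.1 describes; the threshold, lockout window, and function names are invented, and a real implementation would live in the operating system or identity system rather than in application code like this.

    import time

    LOCKOUT_THRESHOLD = 5        # failed attempts allowed before lockout (per item 1.1)
    LOCKOUT_SECONDS = 15 * 60    # minimum lockout time before automatic re-enable

    failed_attempts = {}         # account name -> consecutive failed login attempts
    locked_until = {}            # account name -> time when the lockout expires

    def record_failed_login(account: str) -> None:
        failed_attempts[account] = failed_attempts.get(account, 0) + 1
        if failed_attempts[account] >= LOCKOUT_THRESHOLD:
            locked_until[account] = time.time() + LOCKOUT_SECONDS

    def record_successful_login(account: str) -> None:
        failed_attempts.pop(account, None)

    def login_allowed(account: str) -> bool:
        return time.time() >= locked_until.get(account, 0.0)

    def unlock(account: str) -> None:
        # a privileged user can re-enable the account before the window expires
        failed_attempts.pop(account, None)
        locked_until.pop(account, None)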

However, OT networks for critical infrastructure (which these goals are aimed at, of course) can’t afford to teach a lesson to somebody who is suffering temporary memory loss due to a stressful situation (like alarms going off left and right in a control center); they need to maintain operations. Often, the most critical systems in a control center don’t even have passwords (or they use a shared password that all of the operators know).

Another good example is item 3.2 on page 8:

All data in transit and at rest are encrypted by an appropriately strong algorithm

- No critical data should be stored in plain text.

- Utilization of transport layer security (TLS) to protect data in transit when technically feasible.

- Prioritize for upgrade or replacement of assets that do not support modern symmetric encryption (AES)

This would be a great requirement for an IT network at a bank. But OT networks aren’t used to store data, other than the data required to maintain the critical process that the organization is responsible for (the uninterrupted flow of natural gas in a pipeline, the continued operation of an automobile manufacturing line, etc.). If those data are encrypted and the control room operators need to first remember complex passwords in order to decrypt the data needed to control operations, it’s very possible that an emergency will turn into a disaster.
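To be clear, the mechanism itself is not the hard part. Here is a minimal sketch, in Python, of the kind of “modern symmetric encryption (AES)” for data at rest that item 3.2 calls for, using the widely available cryptography package; the file name is hypothetical.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Encrypt a hypothetical data file "at rest" with AES-256-GCM (authenticated encryption).
    key = AESGCM.generate_key(bit_length=256)   # in practice the key would live in a key manager
    aesgcm = AESGCM(key)

    with open("historian_export.csv", "rb") as f:
        plaintext = f.read()

    nonce = os.urandom(12)                      # a GCM nonce must never be reused with the same key
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)

    with open("historian_export.csv.enc", "wb") as f:
        f.write(nonce + ciphertext)

    # Reading the data back requires the key and an explicit decryption step.
    with open("historian_export.csv.enc", "rb") as f:
        blob = f.read()
    assert aesgcm.decrypt(blob[:12], blob[12:], None) == plaintext

The OT objection isn’t that code like this is hard to write; it’s that putting key management and a decryption step between an operator and real-time data can be exactly the wrong trade-off in an emergency.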

From my experience with the power industry, I know that, even in some of the most secure control centers, data on systems are usually not encrypted unless those systems aren’t needed for real-time operations. While there is a new NERC CIP standard that requires encryption of data transmitted between control centers, there is no requirement for encrypting data at rest in control centers (although the Federal Energy Regulatory Commission did try to require exactly that in Order 822, which led to NERC CIP version 6; they were later talked out of that idea).

It's unfortunate that NIST – which I’ve always thought to be the most important advocate of the idea of taking a controls framework approach to cybersecurity – now seems to be trying out the cookie-cutter one-size-fits-all approach to cybersecurity regulation. My, how the great have fallen.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] The word “background” alone should have clued me in to the fact that something unusual was afoot in this briefing. The Director of CISA is announcing a major new program, yet she has to disguise herself as if she were leaking some deep, dark secrets. I wonder if she wore a false mustache?

[ii] You may note that all of the items in quotes in this sentence are lies.

Tuesday, August 23, 2022

“Free from all vulnerabilities”, and other fables my mother told me

 

Walter Haydock continues to write great articles about vulnerability management. His latest is a case in point. The post is ostensibly about the National Defense Authorization Act (NDAA) for 2023 (which passed the House a month or two ago), but he provides good advice for any organization that’s concerned about software vulnerabilities, which should be just about every organization on the planet.

The provision of the Act that he focuses on specifies contract language for DHS, including…get ready for it… that the vendor must provide “a planned bill of materials when submitting a bid proposal” and “A certification that each item listed on the submitted bill of materials is free from all known vulnerabilities or defects affecting the security of the end product or service…” 

Of course! After all, what could be simpler? All the supplier has to do is certify that not a single third-party component in their product – and none of the “first party” code written by the supplier itself – contains any vulnerability at all (and if the product includes thousands of components – still no problem!). Moreover, this applies to all vulnerabilities, regardless of whether the CVSS score is 0 or 10 or the EPSS (exploitability) score is 0 or 1. It will be easy as pie for any supplier to make this certification; they just have to write “I so certify” (or words to that effect) on a piece of paper or in a digital document and send it in. But making it truthfully? Ahh, that will be harder...

Furthermore, Walter points out there’s no provision for monitoring software for new vulnerabilities after it’s installed. He says “…this provision appears to over-index on the (perceived) security of a piece of software at a single point in time, without any concerns about what happens after the software goes into operation.” What? Do you mean to tell me that the number of possible software vulnerabilities wasn’t fixed when Alan Turing first described (unintentionally) the idea of programmable computers in his great 1936 paper, “On computable numbers…” – despite the fact that the “computer” he described required an infinite paper tape and would never finish a calculation? Walter, why didn’t you tell the House committee that drafted the bill that new software vulnerabilities are identified every day, if not every hour?

Of course, this isn’t a mere omission. Whoever drafted this provision of the Act obviously didn’t understand that, far from having a fixed security posture from birth, software develops vulnerabilities all the time, as researchers (and attackers, who are also performing “research”) discover snippets of code that used to be considered benign, but now…aren’t. In 2021, 20,000 new CVEs were identified. That works out to two per hour.

After all, that’s what vulnerability management is about: learning about newly identified vulnerabilities that apply to the software your organization utilizes and taking steps to patch or otherwise mitigate the small minority of vulnerabilities that you determine pose a risk to your organization, while ignoring the overwhelming majority that don’t.
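To make that concrete, here is a rough sketch in Python of the “learning about newly identified vulnerabilities” step, assuming the free OSV.dev query API; the component inventory is made up, and a real implementation would feed a triage process rather than just print.

    import requests

    # Hypothetical inventory of open source components in use (e.g., pulled from SBOMs).
    inventory = [
        {"ecosystem": "PyPI", "name": "jinja2", "version": "2.11.2"},
        {"ecosystem": "Maven", "name": "org.apache.logging.log4j:log4j-core", "version": "2.14.1"},
    ]

    for component in inventory:
        # OSV.dev returns the known vulnerabilities for a specific package version.
        resp = requests.post("https://api.osv.dev/v1/query", json={
            "version": component["version"],
            "package": {"name": component["name"], "ecosystem": component["ecosystem"]},
        }, timeout=30)
        for vuln in resp.json().get("vulns", []):
            # Triage comes next: most of these will turn out not to pose real risk
            # in this organization's environment and can be deliberately set aside.
            print(component["name"], component["version"], vuln["id"])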

Walter continues, “There appears (in the required contract language) to be no ability on the part of the vendor to accept risk stemming from software vulnerabilities, even when providing a justification to the government. The vendor can only ‘mitigate, repair, or resolve’ such issues.” Frankly, I can see some misguided corporate lawyer putting together contractual requirements like these. But someone working for Congress should presumably have access to the best cybersecurity advice available. Didn’t they even think to ask someone knowledgeable whether what they were writing made sense? Guess not.

However, I disagree with Walter in the last part of his post. He says:

At a minimum, this legislation will make it basically mandatory to use machine-readable Vulnerability Exploitability eXchange (VEX) reports about issues identified in software products used by the government. Any non-automated process would break down easily and likely run afoul of the law’s requirements. Without the widespread use of such standards and tools, software bills of materials (SBOMs) are not likely to see significant adoption, due to the fact that vendors will be flooded with inquiries about false positive vulnerabilities in their products. Thus, this bill may help to motivate the introduction of new techniques for communicating about the exploitability of vulnerabilities in software while increasing transparency for the entire ecosystem.

I approve of his obvious wish to see SBOMs adopted in significant numbers, as well as his observation that this will never happen unless there are also VEX documents available to warn users about the huge percentage of non-exploitable component vulnerabilities (my analogy is that, if SBOMs are the wheels of software supply chain cybersecurity, VEXes are the lubricant. The SBOM wheels will stop turning very quickly if not lubricated with VEXes). And I agree with him that it would be nice if this bill (it won’t be a law until the Senate approves it, of course) led to wide use of VEXes, which might then make SBOMs widely used as well.

However, I really don’t think this small provision of the NDAA will ever be implemented in practice, even if it passes the Senate in its current form and nominally becomes law (and I hope the Senate removes it; saying nothing at all about the subject of software vulnerabilities would be much better than requiring compliance with nonsensical rules).

Why do I say this? It’s because I’ve seen before how a regulation[i] can literally not make sense, yet still be implemented – and still be “complied with”. Would you like to know how I think this happened? ...I didn’t think so, but I’ll tell you anyway. It happened because the regulators and the regulated entities developed a tacit understanding that everyone would act as though the standards were basically unambiguous, and that compliance could therefore be objectively verified. This is a variation on the “Don’t ask, don’t tell” policy that the Clinton administration came up with to address another problem having to do with the military.[ii]

Works like a charm. You ought to try it. The only problem is it's corrosive to the idea that laws are supposed to be complied with as written. However, if you don't care about the rule of law, then you should support this provision in the bill!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] NERC CIP version 5. If you search on that phrase in my blog, I’m sure you’ll find it mentioned in at least 400 of the (as of now) 938 posts I’ve published. 

[ii] The Soviet Union was the showplace for this sort of thing, since almost no law or regulation could be complied with as written. There was a saying among the workers, “They pretend to pay us, and we pretend to work.”

Wednesday, August 17, 2022

It seems some suppliers aren’t that good at writing patches

 

My brother Bill Alrich sent me this link this week, regarding a presentation at Black Hat last week. I’ll let you read it, but I find it appalling. The gist of it is that the quality of security patches is declining, so that more and more of them can be bypassed. This means the supplier has to re-do the patch (and of course, their users have to apply the new one), because the original patch didn’t really fix the whole problem it was meant to address.

Why is this happening? It seems that suppliers, pressed for time and resources, are cutting corners by not spending the time required to find the root cause of the bug and patch that. So a researcher (or an attacker, of course) can easily go around the patch, because they do have the resources to look for the root cause. Of course, when this happens, the supplier probably ends up spending more time on the vulnerability than if they’d done it right the first time, since they have to create two patches, not just one. It’s better to do a good job the first time…

But that’s easier said than done; I realize suppliers are under pressure from all sides. One thing that can help is for suppliers to consult with their customers and try to draw clear lines regarding which vulnerabilities are worth patching and which aren’t; then they shouldn’t be afraid to tell their customers (in a VEX, or just in an email), “This vulnerability doesn’t meet the severity threshold we agreed on, so we won’t patch it. Here are several steps you can take to mitigate the risk posed by this vulnerability…”

Of course, the problem is determining what that threshold should be. I’d like to say it should be a CVSS score above, say, 4.0; however, there are a lot of problems with CVSS scores in general, since they’re based on both exploitability and impact – but impact is very much dependent on the software affected and how it’s used. Is it used to run the office’s March Madness pool? That’s probably a low-impact use. But how about if the software is used to run a process that could kill people if it’s misused? That’s a high-impact use. Yet the CVSS score would be the same in both cases.

Exploitability is a different story. The more exploitable the vulnerability, the bigger the risk it poses (since its likelihood of exploitation increases, no matter how the software is used). Here, you have to remember that there are two types of exploitability, as discussed in this post. One is the exploitability that is described in a VEX document; it’s an absolute exploitability, based solely on the characteristics of the product itself. Either the vulnerability can be exploited or it can’t.

While some customers might object, if the supplier issues a VEX that says the status of the vulnerability – in one or more versions of the product – is “not affected” in a CSAF VEX or “unexploitable” in a CycloneDX VEX, IMO they’re justified in saying they won’t patch the vulnerability (although the supplier might offer a “high assurance” patching option for customers like military contractors or hospitals that want most vulnerabilities to be patched, period).

The other type of exploitability is what’s found in the EPSS score, discussed eloquently by Walter Haydock. This looks at such factors as the availability of exploit code and whether there have been exploit attempts. Of course, these factors have nothing to do with the particular product; they apply to the vulnerability itself, and therefore to all products and users. So a supplier might have a discussion with their users about a) whether the EPSS score is good enough for this purpose or whether the users should construct their own score using some other combination of factors, and b) what would be an appropriate threshold level.
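Here is a hypothetical sketch, in Python, of the kind of patch-or-don’t-patch rule such a discussion might produce: the binary, product-specific VEX status is applied first, and the vulnerability-level scores (EPSS and CVSS) only matter for vulnerabilities that are exploitable in the product. The threshold values are invented for illustration.

    # Thresholds a supplier and its customers might agree on (values are illustrative only).
    EPSS_THRESHOLD = 0.10   # EPSS: estimated probability of exploitation in the next 30 days
    CVSS_THRESHOLD = 4.0    # severity floor, as discussed above

    def should_patch(vex_status: str, epss_score: float, cvss_score: float) -> bool:
        # Product-specific exploitability (from the supplier's VEX) is binary:
        # if the product isn't affected, there is nothing to patch.
        if vex_status == "not_affected":
            return False
        # Otherwise fall back to the vulnerability-level signals.
        return epss_score >= EPSS_THRESHOLD or cvss_score >= CVSS_THRESHOLD

    print(should_patch("not_affected", 0.97, 9.8))   # False: not exploitable in this product
    print(should_patch("affected", 0.02, 6.5))       # True: exploitable and above the CVSS floor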

This is why it’s dangerous to require software suppliers – especially in contract language – to patch all vulnerabilities. Sure they’ll develop a patch, but if someone is going to figure out how to bypass it in a week, was it really worth developing in the first place?

PS: Walter Haydock posted the following comment on this post in LinkedIn:

Thanks for the shout out. I would still say that exploitability can be described in a probabilistic fashion (e.g. a number 0-1) in every case.

You allude to it in your article, but certain organizations might want to be “extra sure” that a certain product is not_affected by a vulnerability. Unless they have zero legitimate reason for this ask, then I think it’s fair to consider that a vendor VEX statement shouldn’t be taken as a completely binary value.

I replied:

Thanks, Walter. The VEX spec has no provision for anything but a binary designation. However, there's certainly no reason that anyone has to place absolute faith in the supplier's affected/not affected designation.

This is why both VEX specs include a set of machine-readable "justifications" for different exploitability use cases. The CDX VEX justifications are:
"code_not_present"
"code_not_reachable"
"requires_configuration"
"requires_dependency"
"requires_environment"
"protected_by_compiler"
"protected_at_runtime"
"protected_at_perimeter"
"protected_by_mitigating_control"

Let's say someone (like a certain large hospital organization I know of) doesn't trust the "code_not_reachable" justification. They could set their tool so that it would treat a "not_affected" status with that justification as the same as "affected".
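For example, here is a rough sketch in Python of that kind of tool policy, applied to a single vulnerability entry from a CycloneDX VEX document; the entry and the distrust list are made up.

    # Justifications this (hypothetical) organization refuses to rely on.
    DISTRUSTED_JUSTIFICATIONS = {"code_not_reachable"}

    def effective_status(vuln: dict) -> str:
        # Return the status the user's tool should act on for one CycloneDX vulnerability entry.
        analysis = vuln.get("analysis", {})
        state = analysis.get("state", "in_triage")
        justification = analysis.get("justification")
        if state == "not_affected" and justification in DISTRUSTED_JUSTIFICATIONS:
            return "affected"   # override: treat the supplier's claim as if the CVE were exploitable
        return state

    entry = {
        "id": "CVE-2022-12345",
        "analysis": {"state": "not_affected", "justification": "code_not_reachable"},
    }
    print(effective_status(entry))   # "affected"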

Of course, if they don't trust the supplier as far as they could throw them, they might not believe anything they say. At that point, you have to ask, "Why the heck are you buying from them, anyway?" But it's still a binary designation, not a probability.

I was going to include this discussion in the post, but I got lazy and didn't. I should have known that you would call me out on this.

PPS: Dale Peterson put this comment on the post in LinkedIn:

I think it is worse than your headline (and many will only read the headline). It is often deception rather than a lack of skill.

Some vendor's patch is changing enough so the poc exploit doesn't work. Not fixing the root cause. Back when we did vuln finding/exploit writing we saw this, and it was often trivial to change the exploit so it would work again.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, August 14, 2022

What's the difference between a VEX and a vulnerability notification?


Many people confuse VEXes with vulnerability notifications and wonder why there needs to be a VEX format at all. After all, software suppliers have been providing notifications of new vulnerabilities to their users for years. These notifications have been provided in many human-readable forms, including email text, PDF attachments, web pages, etc. They have also been provided in machine-readable forms, including the CVRF format[i] and its successor, CSAF[ii] (the general vulnerability reporting format on which one of the two VEX formats is based).

The VEX concept arose because of the need to provide negative vulnerability notifications: Instead of stating that a particular product and version(s) are affected by a particular vulnerability (CVE), a VEX indicates that the product and version(s) are not affected by the vulnerability. Is this the difference between a VEX and a “traditional” vulnerability notification?

The answer is no, for two reasons: First, a VEX document can provide a positive vulnerability notification, simply by changing the status designation in a negative statement like “Product X version Y is not_affected by CVE-2022-12345” to “Product X version Y is affected by CVE-2022-12345”.

Second, a negative notification can be provided in the same way as any positive notification is now delivered; a VEX isn’t required for this. For example, instead of issuing an email stating that a particular vulnerability is exploitable in their product (and therefore customers need to apply a patch or upgrade), a supplier can send an email stating, “None of our products is subject to the Ripple 20 vulnerabilities.” In fact, it is safe to say there is nothing that a VEX document can do that a “traditional” vulnerability notification cannot do; why are they both needed?
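To illustrate the first reason: in a CSAF-style VEX, flipping a notification from negative to positive is literally a one-field change. Here is a trimmed-down, illustrative fragment (not a complete, valid document), written as Python dictionaries with a made-up product identifier.

    # A negative statement: the product is not affected by the CVE.
    vex_not_affected = {
        "vulnerabilities": [{
            "cve": "CVE-2022-12345",
            "product_status": {"known_not_affected": ["product-x-version-y"]},
        }]
    }

    # The same structure becomes a positive notification by changing the status category.
    vex_affected = {
        "vulnerabilities": [{
            "cve": "CVE-2022-12345",
            "product_status": {"known_affected": ["product-x-version-y"]},
        }]
    }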

The difference between a VEX and a traditional vulnerability notification has nothing to do with their respective formats. Rather, the difference between the two is due to the fact that a VEX is designed to address a particular use case: the case in which a software user has utilized the list of components from an SBOM to find vulnerabilities that might be exploitable in a software product (or intelligent device) operated by their organization.

The VEX identifies vulnerabilities that are a) listed in a vulnerability database as applicable to a component of the product, yet b) are not exploitable in the product, taken as a whole. A software tool that ingests SBOMs and VEX documents would utilize the VEX documents to “trim down” the list of vulnerabilities identified for components found in the SBOM, so that the only vulnerabilities remaining on the list are exploitable ones[iii].
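The trimming step itself is trivial once both inputs are machine-readable; here is a minimal sketch in Python, with made-up CVE numbers.

    # CVEs found by looking up the SBOM's components in a vulnerability database.
    component_cves = {"CVE-2022-11111", "CVE-2022-22222", "CVE-2022-33333"}

    # CVEs the supplier's VEX documents say are not exploitable in the product as a whole.
    vex_not_affected = {"CVE-2022-11111", "CVE-2022-33333"}

    # What remains is the list worth spending time on.
    presumed_exploitable = component_cves - vex_not_affected
    print(presumed_exploitable)   # {'CVE-2022-22222'}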

Another way that VEX documents differ from traditional vulnerability notifications is that a VEX only makes sense if it is “read” by a software tool. Granted, if a software product contains just a small number of components and the supplier only needs to notify their users that a few vulnerabilities are not exploitable in the product, it would be much easier for the supplier to do this in an email.

Under these assumptions, users could simply maintain a spreadsheet listing all the vulnerabilities identified (in the NVD or another vulnerability database) for components of the product. When they receive an email saying a vulnerability isn’t exploitable in the product, they would simply delete the line in the spreadsheet where that vulnerability is listed. Therefore, at any point in time, the spreadsheet would list all the component vulnerabilities currently deemed exploitable.

However, the average software product contains around 150 components and many products – including some commonly-used ones – contain thousands of components. Clearly, trying to manage exploitable component vulnerabilities using emails and spreadsheets will be impossible when products like these are encountered.

Of course, VEX documents can be used for other purposes than negative notifications (like the one just described). For example, a VEX document can be used to notify customers that one or more versions of a product are vulnerable to a serious vulnerability, as well as provide a notice of a patch and/or upgrade path to mitigate the vulnerability. However, given that almost every software supplier already has some means to notify its customers that a serious vulnerability is exploitable and a patch is available for it, why would a supplier want to use a VEX to convey this notification?

While this is speculation, it is possible that some suppliers will use VEXes for patch notifications like the above (as well as other positive vulnerability notifications), because they a) want to provide a machine-readable notification to their users and b) are already providing VEXes for “negative vulnerability notification” purposes – meaning they have some experience with the format. Therefore, they might be more inclined to experiment with a new use case for a format (VEX) they already understand, rather than experiment with both a new use case and a new format.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[iii] Of course, this is an aspirational goal. It will require any supplier that provides SBOMs to their customers also to publish a VEX as soon as they discover that a vulnerability listed in the NVD for a component isn’t exploitable in the product. This will place a burden on suppliers and may dissuade some suppliers from producing SBOMs at all, at least in the near term. This is why I have proposed “real-time VEX”, as a way to greatly lessen the burden on suppliers of providing negative vulnerability notifications to their customers. See this post: https://tomalrichblog.blogspot.com/2022/08/whats-best-sbom-no-sbom-whats-best-vex.html

Wednesday, August 10, 2022

What’s the best SBOM? No SBOM. What’s the best VEX? No VEX.


Steve Springett, leader of the OWASP CycloneDX and Dependency-Track projects (as well as the SCVS project, which I haven’t written about previously but is also important for software security in general and SBOMs in particular), and I attend a weekly meeting where we and others discuss problems that are holding back widespread distribution and use of SBOMs. We’ve been working for five months on the naming problem, and will soon publish what we believe is close to a solution to it (plus, we’re getting the different parts in motion to implement that solution, although ultimately we can only present our case to the people who need to make it happen).

At last week’s meeting, Steve asked what we would tackle next (there’s no question that we want to tackle something next: there are still some serious problems that need to be addressed before SBOMs can be widely used. No rest for the weary - although I find this work quite interesting and invigorating).

I suggested an idea I brought up in a post a while ago (which was also based on something Steve said; no coincidence there. BTW, what I’m talking about here appeared at the end of that post), which I call “real-time VEX”. IMHO, the idea is quite simple:

1.      Because about 90-95% of vulnerabilities in software components aren’t exploitable in the product itself, end users are going to be reluctant to utilize SBOMs for vulnerability management purposes (probably the most important security use case for SBOMs, although there are other use cases as well), unless they can be assured that they’re not spending 95% of their time chasing false positive vulnerability identifications.

2.      About two years ago, a working group within the NTIA Software Component Transparency Initiative started work on a format for documents that would allow a supplier (or perhaps another party) to notify customers that vulnerability CVE-2022-12345 isn’t exploitable in their Product X Version Y, even though the NVD shows the CVE is found in Component A, which is included in X.

3.      The name VEX was chosen (or more accurately, stumbled into) for the document; then the words “Vulnerability Exploitability Exchange” were identified as a good enough explanation of what VEX stands for (people never like it if you tell them that an acronym stands for NBI, Nothing but Initials, but that often happens. In fact, “CVE” is officially NBI now, even though it did stand for something initially).

4.      Now there are two standards for creating VEX documents: one based on CycloneDX and the other based on the OASIS CSAF vulnerability reporting standard. However, no software supplier is currently providing VEX documents to their customers, except perhaps in one or two isolated cases. The biggest reason for this is that – even more than SBOMs – VEXes need to be consumed by automated tools. There is currently only one tool that consumes SBOMs and VEXes, which is Dependency-Track (I believe there will be one or two third-party services available in the near future that will also do this). Another important reason why VEX isn’t exactly taking the world by storm is that there is no playbook currently available that describes how to create and consume VEXes (although the CISA VEX committee is starting work on one now).

5.      The big problem with VEX not taking off is that it inhibits interest in SBOMs as well, for the simple reason that people don’t like the idea of wasting 95% of their time looking for vulnerabilities that aren’t there. Given that they can’t be sure that won’t happen if they start looking for vulnerabilities in the components of a product, they aren’t very interested in SBOMs or VEXes.

I thought until recently that a good bit of education would help suppliers and consumers of software become comfortable with VEXes. That is coming and will help, but there’s another problem: While an SBOM will have to be produced whenever the software changes (including patches and new builds), VEXes will need to come out in much greater numbers, in order for users to be assured that the list of vulnerabilities that their SBOM/VEX tool shows them, for a particular product and version, consists mostly of exploitable vulnerabilities that are worth pursuing, rather than non-exploitable vulnerabilities that aren’t. As I discussed in this post, a large company would probably need to receive literally thousands of VEX documents every day (if not every hour), in order to reach the goal of narrowing down the component vulnerabilities that they take seriously, to only the exploitable ones.

The problem is made even worse by the fact that there is real urgency with VEXes (even more so than with SBOMs). Most users will want to know as soon as possible which component vulnerabilities aren’t exploitable, especially if these are serious vulnerabilities. If a supplier learns that a serious component vulnerability isn’t exploitable in one of their products, their users aren’t going to react favorably if the supplier waits until the end of the month, when they aggregate all of their VEX notices in one document; it’s likely that users will already have wasted a lot of time looking for that vulnerability in the product. In fact, the users probably won’t be satisfied if they hear the supplier issues a VEX once a week, or even once a day. They will want to know as soon as the supplier decides that a vulnerability isn’t exploitable.

But here’s the problem with that: Until there are automated tools to produce VEXes (and currently there are none, in either format), putting out a VEX is going to require a non-trivial amount of time. In fact, people are going to need to produce VEXes day and night, in order to keep up with demand (I believe the term for these people is “VEX workers”). My guess is that anybody with that job will either quit or commit suicide within months.

However, when I wrote the post I referred to earlier, I had just come to realize that there may be a solution to our problems. We don’t need a VEX (or any other document) for the primary VEX use case: informing users of non-exploitable component vulnerabilities found in products they use. Instead, we need:

1.      A “VEX server” at the supplier (or at a third party that offers this as a service on behalf of the supplier) will list, for each currently-used version of software products (or intelligent devices) produced by the supplier, all of the component vulnerabilities currently shown in the National Vulnerability Database (NVD), as well as perhaps other databases like OSS Index (which is better for open source components, I’m told). The components behind each list will be those in the SBOM for that version (since every change in the software should produce a new version as well as a new SBOM, there should always only be one SBOM for each version).      

      The VEX server will continually search the vuln databases for new vulnerabilities for each product and version (if this sounds like a lot of work, consider that Dependency-Track is now being used 200 million times a month to search OSS Index for vulnerabilities applicable to open source components – and that works out to about 20 billion SBOM-based vulnerability searches a month, just with that one open source product. Also, consider that security-conscious suppliers should already be continually searching for new component vulnerabilities in their products. They should already have this information), and continually update these lists.

2.      As the supplier identifies a component vulnerability that isn’t exploitable in a particular version of their product, they will set a “not exploitable” flag for that vulnerability, product, and version in the VEX server.

3.      Meanwhile, end users (or a third party performing this service on their behalf) will operate software tools that track all vulnerabilities listed for components of any software product/version used by their organization.  Whenever they identify a component vulnerability in the NVD, they will add it to a list (rather, their tool will do this) of vulnerabilities for the product/version (this list might be included in a vulnerability or configuration management database). Each vulnerability will initially be assumed to be exploitable (there is no other choice, if you think about it).

4.      There will be an API that can be called, as often as required, by software operated by end users (such as Dependency-Track), or by a third party acting on behalf of users. It will send three pieces of information to the appropriate server:

a.      Product name (Product A)

b.      Product version string (Version X.Y.Z)

c.      CVE number (CVE-2022-12345)

5.      These three pieces of information will be interpreted by the VEX server to mean, “What is the exploitability status of CVE-2022-12345 in Product A Version X.Y.Z?”

6.      The server will return one of two pieces of information in response:

a.      Not exploitable

b.      Under investigation

7.      If the answer is a, the user tool will flag CVE-2022-12345 as not exploitable in Product A Version X.Y.Z. If the answer is b, the tool will do nothing, and the CVE will continue to be listed as exploitable.

8.      It will be worthwhile for the user tool to use the API to query the VEX Server at least once a day, and more often if possible. This way, the list of exploitable component vulnerabilities will, at any particular time on any day, contain only the vulnerabilities that the supplier has not yet decided are not exploitable (note that I don't think the API should optionally return a third piece of information "Exploitable". Unless there's a patch available for the CVE in question - and it's been advertised to all customers of the product - that would be a dangerous thing to do).
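Here is a rough sketch, in Python, of the user-side query loop described in items 4 through 8. Everything specific in it (the endpoint URL, field names, and status strings) is hypothetical, since no such VEX server exists today.

    import requests

    VEX_SERVER = "https://vex.supplier.example/api/v1/status"   # hypothetical endpoint

    # The user tool's current list of presumed-exploitable component vulnerabilities.
    open_vulns = [
        {"product": "Product A", "version": "X.Y.Z", "cve": "CVE-2022-12345"},
        {"product": "Product A", "version": "X.Y.Z", "cve": "CVE-2022-67890"},
    ]

    def refresh(vulns):
        still_open = []
        for v in vulns:
            # Items 4 and 5: send product name, version string, and CVE number.
            resp = requests.post(VEX_SERVER, json=v, timeout=10)
            status = resp.json().get("status", "under_investigation")
            # Items 6 and 7: only a "not exploitable" answer changes anything.
            if status == "not_exploitable":
                print("Flagging as not exploitable:", v["cve"])
            else:
                still_open.append(v)   # under investigation: keep treating it as exploitable
        return still_open

    # Item 8: run this at least once a day, and more often if possible (e.g., from a scheduler).
    open_vulns = refresh(open_vulns)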

Of course, there will always be some vulnerabilities that are currently listed as exploitable, that will later be found not to be so. Thus, the vulnerability management staff will still waste some time looking for non-exploitable vulnerabilities in the products the organization uses. However, they will at least know that the list they have reflects the supplier’s most current knowledge.

Note there are still a number of VEX use cases that need to be handled with a VEX document, for example a “positive” vulnerability announcement (e.g. “Product A Version B is vulnerable to CVE-2022-YYYYY and the patch is available at (URL)”), or a more complex scenario like patching a longstanding vulnerability in the product. But IMHO the most time-critical and highest-volume use case (negative notifications) can be handled much more efficiently by “real-time VEX”.

To continue my story, after I’d described my idea in the meeting (in much less detail than above), Steve pointed out that he’s sometimes asked in interviews where he thinks SBOMs will be in ten years. He always says, “I hope they won’t be needed at all.” He went on to explain what he meant (I’m paraphrasing him): “What really matters is if the supplier (or a third party) has the data the consumer needs when they need it. If that can be exchanged purely using APIs, that’s great. We don’t need either SBOMs or VEXes.”

While I agree with Steve that the goal is not to need either SBOMs or VEXes, I think the low-hanging fruit at the moment is what I’ve just described. This may be one thing (along with solving the naming problem and the availability of user tools and third-party services) that opens the floodgates for use of SBOMs and VEXes for software vulnerability management. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, August 8, 2022

9 ½ years!


Fortress Information Security (a company with which I have a consulting relationship) has been analyzing a lot of SBOMs lately. They recently showed me the results of their analysis of the components in a particular security product (which one hopes would have better security than most software products). They were careful not to tell me that these results are somehow typical of the “average” software product, since determining that would require a much larger, controlled study. But they did say they’d examined a number of other products whose analyses yielded similar results.

They had other statistics in their report, but what struck me most were the figures for “vulnerability age” – i.e., the number of days since the vulnerability was published (which I assume means when a CVE number was assigned to the vulnerability). The report said:

  • There were 141 vulnerable components found in the product, out of 699 total components.
  • There were 77 unique vulnerabilities in the product, which of course is a lot (this means the average vulnerability was identified in a little less than two different components).
  • What’s worse is the fact that the average time since the disclosure of the vulnerabilities was 2,978 days, which is more than 8 years.[1]
  • Vulnerabilities were grouped by severity, based on their CVSS score category: low, medium, high or critical. The analysis found:

1.      There were 5 low-severity vulnerabilities. Their average age was 2,801 days or 7.67 years.

2.      There were 61 medium-severity vulnerabilities, whose average age was 2,987 days or 8.18 years.

3.      There were 9 high-severity vulnerabilities, whose average age was 3,058 days or 8.37 years.

4.      There were 2 critical vulnerabilities, whose average age was 3,447 days or 9.44 years. In other words, the most serious vulnerabilities were the oldest, with the average critical vulnerability having been identified about 9 ½ years ago.
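For anyone who wants to check the arithmetic, the day-to-year figures above reduce to a single division (numbers taken directly from the report):

    # Average vulnerability age per severity bucket, in days, from the report above.
    avg_age_days = {"low": 2801, "medium": 2987, "high": 3058, "critical": 3447}

    for severity, days in avg_age_days.items():
        print(f"{severity}: {days / 365.25:.2f} years")
    # low: 7.67, medium: 8.18, high: 8.37, critical: 9.44 - i.e., roughly 9 1/2 years for critical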

Here are some informal conclusions from this data (drawn by the author, not the service provider that performed the analysis of the SBOM):

1.      It is very hard to understand how a supplier can allow 77 vulnerabilities of any severity, let alone of high or critical severity, to remain unpatched for more than 7 ½ years on average (as is true for the 5 low-severity vulnerabilities), much less for 9 ½ years (which is the case for the 2 critical vulnerabilities). If the organization doing a procurement is seriously considering purchasing a product from the supplier that compiled this record, they should have a serious discussion with them about the maximum time they will allow any vulnerability to remain unpatched in a product.

2.      It is odd, to say the least, that, in this analysis, the length of time that a vulnerable component has been left in a product increased with the severity of the vulnerability. The organization needs to discuss this with the supplier as well.

3.      Of course, it is possible that some or all of the vulnerable components were obtained only recently, for example in the last year. In other words, a supplier might defend themselves by pointing out that a critical vulnerability was in the component for 8 ½ years, before they introduced the component into their product a year ago. However, then the question becomes why the supplier even included such a component in their product in the first place. More generally, a discussion of this problem should lead to a broader discussion of the supplier’s own supply chain risk management policies. Do they even check for vulnerabilities in components before they include them in their products?

In other words, just the above data from an SBOM should lead to the acquiring organization having a serious talk with this supplier before they finalize the procurement. They need to discuss the supplier’s component acquisition policies, as well as their component vulnerability management policies. Moreover, the procurement contract might well include the requirement that the supplier patch certain component vulnerabilities (at least the critical and high ones), or even replace the vulnerable components altogether.


[1] The analysis didn’t address the question of exploitability of vulnerabilities. It is certain that at least some of these vulnerabilities are not actually exploitable in the product, meaning it might not be important to patch them initially. However, the author’s opinion is that no component vulnerability should be left unpatched for more than, say, three years, whether or not it is initially deemed exploitable by the supplier. The reason for this is that a future code change in the product might inadvertently change the vulnerability’s status in the product from not exploitable to exploitable.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.