Monday, September 25, 2023

NERC CIP: Will FedRAMP save us?

One of the most important current questions in the NERC CIP community is how and when it will be “legal” to deploy medium and high impact BES Cyber Systems (BCS) in the cloud. It’s important to note that there is no CIP requirement that explicitly forbids a NERC entity from deploying high or medium impact BCS in the cloud. Rather, the problem is that a cloud service provider (CSP) would never be able to provide the evidence required for the entity to prove their compliance with some of the most important CIP requirements in an audit.

There have been many suggestions on how a CSP could provide such evidence. Perhaps the suggestion that comes up most often is something to the effect of, “Why don’t the auditors just allow the cloud service provider’s FedRAMP (or SOC 2) certification to constitute evidence that they maintain an equivalent or better level of security practices to what is required by the CIP requirements that apply to medium and/or high BCS?”

Of course, there isn’t much dispute about whether the overall level of security maintained by the large CSPs, especially in the portion of their cloud infrastructure that is FedRAMP compliant, is better than the level of security required by the NERC CIP requirements that apply to medium and/or high impact BCS: without much doubt, it is. But, as everyone involved with NERC CIP compliance knows all too well, what matters is whether the entity – or in this case, the CSP acting on their behalf – has complied with the exact wording of each requirement. For prescriptive requirements like CIP-007 R2 (patch management) and CIP-010 R1 (configuration management), it would simply be impossible for a CSP to do that.

However, there are some requirements for which it might be possible for a CSP to provide compliance evidence. These are what I call risk-based requirements, although they don’t all use the word “risk”. Examples of these are CIP-007 R3 (malware prevention), CIP-010 R4 (Transient Cyber Assets), and CIP-011 R1 (information protection program). If the evidence were designed to show that the NERC entity has developed a plan with the CSP for addressing the risks applicable to the requirement in question, and that the CSP has carried out the plan, then my guess is it might be accepted, without having to change the CIP requirements at all.

But the fact that there are some requirements for which it might be possible to provide appropriate compliance evidence is irrelevant in the big picture. There is simply no way the NERC entity would be found compliant with the prescriptive requirements if they deployed medium and/or high impact BCS in the cloud. And no NERC entity is going to agree to do something that is guaranteed to make them non-compliant with even one requirement.

So is the solution to rewrite all the CIP requirements so they are all risk-based and can easily be “made to fit” with the cloud? This is essentially what the CIP Modifications Standards Drafting Team started to do in 2018 (although they were targeting virtualization at the time, not the cloud) – that is, until some large NERC entities made it clear they didn’t want to have to toss their entire CIP compliance programs – with all of the software, training, etc. they had invested in for CIP compliance – and start over. I described this sad story in this post in 2020. Thus, it’s safe to say that requiring all NERC entities to rewrite their CIP compliance programs is a non-starter.

However, a recent SAR (standards authorization request) that was submitted to the NERC Standards Committee proposed a way around this problem: What if there were two CIP “tracks”? Track 1 would consist of exactly the same requirements that are in place today, but it would be made clear that they only apply to on-premises systems, not to systems deployed in the cloud. NERC entities that have any systems deployed on premises would follow that track for those systems.

Any entity that doesn’t want to place any BCS in the cloud would just follow the first track – so literally nothing would have to change in their CIP compliance program. However, if an entity deploys any BCS in the cloud, they would follow a second compliance track for those systems (and also follow the first track for their on-premises systems). That track might start with a requirement that the CSP have an appropriate certification. There would then be requirements that aren’t found in the first CIP track because they only apply to CSPs, for example:

1.      The entity needs to demonstrate that the CSP has developed a plan to address the “Paige Thompson problem” – namely, the fact that a technical person who had recently been fired by a CSP was able to utilize her knowledge of a common customer misconfiguration to penetrate the cloud accounts of at least 30 of that CSP’s customers (by her reckoning), one of which was Capital One.

2.      The entity needs to demonstrate that the CSP adequately verifies the level of cybersecurity of third parties that sell services that run in the CSP’s cloud (the fact that one such access broker didn’t have good security may have led to the Russian attackers being able to penetrate the SolarWinds infrastructure and plant their malware in seven updates of the Orion platform in 2019 and 2020).

There are probably a lot more “uniquely cloud” risks that should be addressed in CIP requirements that only apply to cloud-based BCS. It’s not likely that FedRAMP has requirements that deal with these risks, but if it does, then the FedRAMP audit report might possibly be used as evidence of the entity’s compliance with these requirements. All of these questions would need to be addressed by a standards drafting team tasked with examining the risks of deploying medium and high impact BCS in the cloud and designing appropriate CIP requirements to mitigate those risks. This would be the new second track for NERC CIP.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Friday, September 22, 2023

An opportunity to get involved with fixing VEX

Regular readers of this blog (both of you!) know that for more than a year I have been concerned about the “showstopper” problems that are inhibiting wide (or even narrow) distribution and utilization of SBOMs by organizations whose primary function is not software development (you know, banks, restaurant chains, baseball teams, police departments, architecture firms…the rest of us).

I say “non-developers”, because there are some developers who are using SBOMs heavily now to manage risks in the products they’re developing. However, what is strange is that I honestly can’t identify a single developer that is regularly providing SBOMs to their customers (since an SBOM can no longer be trusted once the software has been upgraded or even patched, the developer needs to provide a new SBOM with at least every major and minor update of their product) – although I’ll admit I have no visibility into what military and intelligence agency suppliers are doing.

What’s holding SBOMs back? One of the biggest problems is the naming problem. An informal group I lead, the SBOM Forum – which is, I’m happy to say, now the OWASP SBOM Forum[i]! – developed a paper almost exactly a year ago that describes how to significantly mitigate this problem in the NVD. The NIST team that runs the NVD has told us they would like to implement what we describe in the paper, but they obviously have a lot on their plate nowadays, such as wondering whether they’ll have a paycheck next week (for the second time this year, I might add).

Thus, while we will certainly engage with the NVD whenever they’re ready and able to do that, the OWASP SBOM Forum isn’t losing sight of the real goal: The US and the rest of the world need a truly global vulnerability database, which is maintained and funded internationally.

We’re now talking with people at ENISA (the EU cyber agency) about their potentially using our recommendations for NIST as the basic design for the vulnerability database they’re developing from scratch – and which they hope to have up and running in two years (this is eminently doable, in my opinion). The project is already funded (although probably not on the scale required for a global database) by Article 12 of the EU NIS 2 legislation, which came into effect last year.

Thus, we’re making progress on the naming problem. But, until this week, we hadn’t even started to address the second serious showstopper problem for SBOMs: the fact that, three years after the VEX idea was first discussed in the NTIA SBOM initiative, there is still no clarity on what VEX is and how tools for VEX production and consumption should work. Therefore, VEX is literally going nowhere at the moment.

However, there is clarity that SBOMs will never go very far until VEX is made to work. This is because suppliers have seen the statistics stating that over 90% of component vulnerabilities in a software product aren’t exploitable in the product itself. The suppliers are worried that, the day after they release their first SBOM, their help desk will be overwhelmed with calls and emails from angry customers demanding to know when they will patch CVE-2023-00666, which has a CVSS score of 10.0 and will probably get them fired if they don’t get it fixed by…how about 2PM this afternoon?

Of course, the help desk people will patiently explain to caller after caller that they don’t need to worry about this vulnerability because the vulnerable module in the library that contains the vulnerability was never used in their product. But maybe, after they reach the 70th such caller in one morning, they will begin to lose their patience…and by the end of the day they’ll submit their resignations.

Yet, I don’t know of a single supplier that has started to produce VEXes on anything but an experimental basis. Cisco recently announced they will start producing VEXes for customers (not for public viewing), but didn’t say how often they will be updated.

While it’s nice to have one-off experiments, VEXes will be needed even more frequently than SBOMs, since new vulnerabilities appear all the time and customers need to know about them in as close to real time as possible (which is why I’ve proposed a VEX server and API; these would eliminate, or at least greatly reduce, the need for VEX documents, as well as take a huge burden off the suppliers). It’s not at all unreasonable to expect VEXes to be updated or issued multiple times a day.

But even among the very small number of experimental VEX documents that have been produced and published, there is no consistency regarding what is covered in a VEX, or even the format for it. This isn’t surprising, given that there are two “platforms” on which VEX can be produced now - CycloneDX (CDX) and CSAF – and in neither one of them has a format for VEX been specified. This has led to wildly differing documents, all bearing the name “VEX”. Given that there is no consistency in the VEX documents being produced, is it any surprise that there are literally no tools available to ingest and utilize VEXes? Yes, there are plenty of tools that can read the JSON of a VEX and make it look a little prettier for the reader, but why should a supplier produce a machine-readable VEX document if it’s just going to be read by humans? They could save a lot of time (and some money) by putting what they have in a PDF and emailing that to their customers.

Last week, the OWASP SBOM Forum decided it was time for us to lead development of VEX “playbooks” – one for CycloneDX VEX and the other for CSAF VEX (the two platforms are so different that there could never be a single VEX playbook that will encompass both platforms). Each platform will have a Producer’s Playbook and a Consumer’s Playbook.

These playbooks will be intended mainly to describe to the toolmaker how VEX documents will be produced and utilized. They will be constructed with enough rigor and detail that there should be no question about what needs to be in the VEX and how it should be presented. Since we will be creating the producer and consumer playbooks at the same time, we will hopefully avoid the common pitfall of not having the production and consumption tool specs exactly in sync.

The two platforms will require very different levels of effort. CycloneDX VEX is much easier to understand than CSAF VEX, since it is built on the same platform that CDX SBOMs utilize (as well as CDX HBOMs, SaaSBOMs, MLBOMs, OBOMs, etc. See the CDX site for the full list of document types that are built on the same framework). There are already plenty of tools that can both create and read CDX documents; the problem is those tools need to be constrained so they only produce and consume VEX documents, rather than making the supplier figure out for itself how to create a VEX in CDX format.[ii]
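To make the comparison concrete, here is a minimal sketch of what a CDX VEX document looks like. The CVE is real (it’s log4shell), but the “detail” text, the serial number and the product reference are invented for illustration; in a real document the “affects” ref would point to the product in the corresponding SBOM.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "vulnerabilities": [
    {
      "id": "CVE-2021-44228",
      "source": {
        "name": "NVD",
        "url": "https://nvd.nist.gov/vuln/detail/CVE-2021-44228"
      },
      "analysis": {
        "state": "not_affected",
        "justification": "code_not_reachable",
        "detail": "The vulnerable JNDI lookup class is never loaded by this product."
      },
      "affects": [
        { "ref": "urn:cdx:3e671687-395b-41f5-a30f-a58921a69b79/1#acme-product" }
      ]
    }
  ]
}
```

The entire VEX statement – status, justification and affected product – fits in one small structure, which is a big part of why producing and consuming CDX VEX is so much simpler than CSAF VEX.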

However, understanding CSAF is much harder, since the spec is about 80-100 pages long and published in small type. If you look at the VEX profile in CSAF, it seems simple, since it just lists a small number of fields. However, it leaves out two mandatory fields (required in all CSAF documents) that are by far the hardest to understand in CSAF: “product tree” and “branches”. The description of these goes on for about 10-15 pages in the spec, and even then, it’s not likely that organizations with no experience in CSAF will be able to understand them quickly.

Fortunately, the group working on the playbooks will include staff members with long CSAF experience from Red Hat, Oracle and Schneider Electric, as well as perhaps staff members from Microsoft, Google and Cisco. Our challenge won’t be to understand CSAF, but to determine the optimal configuration of the “product tree” and “branches” fields (these seem to be related to other fields as well, although my understanding of CSAF is very limited). We may decide that we need to constrain those fields very tightly, perhaps by constraining the VEX document to only addressing one product (although it will have to address multiple versions of the product – and being able to represent and interpret version ranges automatically would be very desirable). Again, our goal is to have a VEX spec that won’t need much if any interpretation on the consumption side.

Why am I so worried about constraining the spec? It’s simple math: a consumption tool will need to be able to interpret every combination of options that might be thrown at it – and the number of combinations is the factorial of the number of independent options. If there are three options, there are 3×2×1 = 6 possible combinations. If there are five options, there are 5×4×3×2×1 = 120 combinations.

How about if there are 10 options? There it gets trickier: there are over 3.6 million possible combinations. And how about 20 options (which may well be the case if you consider the entire CSAF 2 spec)? Not a big deal…then there are only about 2.4 quintillion possible combinations! My guess is most developers aren’t going to feel like devoting the next couple of millennia to writing a CSAF consumption tool unless the options are severely constrained.
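A few lines of Python make the growth rate concrete. This simply evaluates the factorials in the argument above; it assumes, as the post does, that each independent option multiplies the number of combinations a tool must handle.

```python
from math import factorial

# Combinations a consumption tool would need to interpret, assuming
# (as argued above) that n independent options yield n! combinations.
for n in (3, 5, 10, 20):
    print(f"{n:2d} options -> {factorial(n):,} combinations")

#  3 options -> 6 combinations
#  5 options -> 120 combinations
# 10 options -> 3,628,800 combinations
# 20 options -> 2,432,902,008,176,640,000 combinations
```

Twenty options already puts the count over 2.4 quintillion, which is why constraining the spec matters so much.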

I’m telling you this because this will be a very interesting exercise as well as a very important one, since as I’ve said many times, “No VEX, no SBOMs.” VEX has to be fixed if we will ever have widespread distribution and use of SBOMs. Next week our group will organize itself, and we’ll aim for two weeks from now for our first meeting (I’ll send out a Doodle poll next week to judge a proper time). I think biweekly meetings will be fine. In fact, the documents we create – starting with a document on VEX use cases – will be available on Google Docs for comments and suggested edits at any time. If you want to join us or at least be on the mailing list, send me an email.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] However, please give us another two weeks to get our pages set up on both the OWASP and GitHub sites. 

[ii] Currently, the best way to determine how to create a VEX document on either platform is to go through the 10 or so examples, for each platform, found in the VEX Use Cases document from last year.

Wednesday, September 20, 2023

Two good resources on SBOMs from Fortress


Fortress Information Security, an organization I have worked for as a consultant for over three years, has recently developed two great resources for anyone interested in or involved with software bills of materials: a webinar and an e-book. I recommend both of these.

WEBINAR: Software Risks: Understanding your Software Supply Chain Security

The webinar takes place tomorrow at noon ET. The link for both registration and attendance is here. If you can’t attend tomorrow, you’ll be able to view the recording later.

The speakers are both very knowledgeable about SBOMs: one is Tom Pace, Co-founder and CEO of NetRise (and a familiar name to readers of this blog; my most recent post featuring him is here); the other is Bryan Cowan, Product Owner for SBOMs at Fortress.

The topics to be covered in the webinar “include SBOM adoption drivers, SBOM risk insights, example use cases, and a business case for managing risk with SBOMs.” The webinar will last between 40 and 60 minutes, although the last 20 minutes or so will be reserved for questions.

WHITEPAPER: SBOM Use Cases for Asset Owners

Bryan has been busy recently, since he co-authored (with Ty Short) the above white paper, which is available here. This takes a very different approach to SBOMs than just about anything I’ve seen in writing so far, including my posts: instead of focusing on the use cases of licensing (the original SBOM use case) and software vulnerability management (the use case behind Executive Order 14028 and most articles on SBOMs), the paper is clearly based on real-world research into the possible uses of SBOMs by public and private organizations. The result is quite good and very readable.

I recommend you look at both of these! And I promise that neither one will require an excessive amount of your time.  

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, September 14, 2023

Cisco’s important VEX announcement

Warning: The post below starts on an optimistic note but ends on a pessimistic one. Parental discretion is advised.

Yesterday, Cisco made an important announcement, although people who aren’t VEX addicts probably didn’t see it and if they did, it didn’t register as important. I had been looking for the short announcement about VEX that Cisco made a few weeks ago and found it odd that it didn’t appear in searches on Google or the Cisco web site. However, I see now that yesterday’s announcement filled in the details of the previous announcement, which was just a placeholder for the real thing.

The announcement was written by Omar Santos of Cisco, who is also the leader of the OASIS CSAF project. CSAF is the leading general vulnerability reporting format, and the VEX “profile” in CSAF adapts that format to the more limited needs of VEX (mainly, it limits the large number of status designations in the full CSAF format to four: “affected” (i.e., exploitable), “not affected”, “fixed” and “under investigation”).
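For readers who haven’t seen one, here is a hand-written sketch of the skeleton of a CSAF VEX document. The publisher, product names, IDs and dates are all invented, and some mandatory fields (like the revision history) are omitted for brevity; the four status designations appear as keys of the “product_status” object, with “known_affected”/“known_not_affected” being CSAF’s names for “affected”/“not affected”.

```json
{
  "document": {
    "category": "csaf_vex",
    "csaf_version": "2.0",
    "title": "Example VEX advisory",
    "publisher": {
      "category": "vendor",
      "name": "Example Corp",
      "namespace": "https://example.com"
    },
    "tracking": {
      "id": "EXAMPLE-VEX-2023-0001",
      "status": "final",
      "version": "1",
      "initial_release_date": "2023-09-14T00:00:00.000Z",
      "current_release_date": "2023-09-14T00:00:00.000Z"
    }
  },
  "product_tree": {
    "branches": [
      {
        "category": "vendor",
        "name": "Example Corp",
        "branches": [
          {
            "category": "product_name",
            "name": "Example Product",
            "branches": [
              {
                "category": "product_version",
                "name": "4.2",
                "product": { "name": "Example Product 4.2", "product_id": "EXAMPLE-4.2" }
              }
            ]
          }
        ]
      }
    ]
  },
  "vulnerabilities": [
    {
      "cve": "CVE-2021-44228",
      "product_status": { "known_not_affected": ["EXAMPLE-4.2"] }
    }
  ]
}
```

Even this stripped-down example shows where the complexity lives: in the nested “product_tree”/“branches” structure, which takes up most of the document.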

The announcement is of the “Cisco Vulnerability Repository” (CVR), although I want to point out that name is a mistake. The software world has enough vulnerabilities that we don’t need to have Cisco create a repository to protect them 😊! The best name would be “Cisco VEX Repository” (which doesn’t require a change of acronym, BTW), since that’s what it is. A second-best name would be “Cisco Vulnerability Advisory Repository” or just “Vulnerability Advisory Repository”.

In any case, why do I think this is a significant announcement? For one thing, this is just the second announcement by a major software or device supplier (Cisco is both, of course) that they will be producing VEX documents for customers regularly. The first announcement was by Red Hat in February; however, within the last two months Red Hat changed their mind about VEX (nothing wrong with that, of course!) and renamed the documents they were calling VEX back to their previous name: security advisories.

However, I believe that RH will soon announce VEX documents that are a lot like the ones Cisco just announced – since they are big supporters of CSAF themselves and they have a seat on the CSAF board.

The announcement lists three types of vulnerabilities that will trigger VEX notifications:

  • Vulnerabilities found in the Cybersecurity and Infrastructure Security Agency (CISA)’s Known Exploited Vulnerabilities (KEV) Catalog
  • Vulnerabilities that Cisco has determined to be high-risk
  • Vulnerabilities whose status has been requested using CVR 

This clarifies what Cisco's VEX strategy is (and I think it's a good one): instead of putting out VEX documents that discuss the status of multiple vulnerabilities in a particular product (which is the strategy a lot of people have been thinking of), each document lists the status (exploitable or not) of one significant vulnerability in one or more products. The example in the announcement utilizes the "product family" capability of CSAF, since it applies to the entire Catalyst 9800 Series family of wireless controllers.

Perhaps the biggest problem with VEX is that user organizations already face a huge backlog of unpatched software vulnerabilities. If they are to learn about new ones, they need to know about just the ones that pose a significant risk to them. Meanwhile, software suppliers who have looked at creating VEXes (and very few have even looked, I’m sure) have realized that determining whether a particular component vulnerability is exploitable in a particular product/version is hard and time-intensive (unless they want to adopt the “shortcut” I described in my most recent post, which will require them to patch a lot more component vulnerabilities than they otherwise would. Whether that’s a good trade-off will depend on lots of different factors).

Cisco is clearly quite aware of this problem, which is why they have stated up front that they will only consider certain component vulnerabilities for remediation, meaning they won’t even bother to determine whether any others are exploitable or not. I think other suppliers should emulate what Cisco has done here: state very clearly which vulnerabilities they will consider for remediation and which they won’t. Their customers should appreciate this, since they won’t be burdened with a lot of patches that mitigate little risk (of course, the risk to Cisco in this strategy is that a high-risk vulnerability like log4shell will slip through their net and not be patched. But given the level of awareness of vulnerabilities in general nowadays, it’s likely that any such vulnerability will quickly be pointed to by their customers).

Regarding Cisco’s three criteria for considering a vulnerability:

1.      Almost by definition, the KEV catalog is a list of the most risky vulnerabilities

2.      Cisco’s own list of high-risk vulnerabilities is important, since presumably Cisco knows best which risks to its products are high ones; but,

3.      I especially like the third category: "vulnerabilities whose status has been requested using CVR". This gives Cisco’s customers a voice in deciding which vulnerabilities to consider for remediation. Moreover, notice the announcement doesn’t say that customers will only be able to access VEX information for products they own (in fact, it doesn’t sound like Cisco would even have any way to implement that policy, since they’re reporting VEX information by vulnerability, not by product). And because a large percentage of all organizations in the world are customers of at least one Cisco product or service, in theory they will all be able both to make suggestions for vulnerabilities to address and to see all VEX notifications (although I admit I’m guessing this is the case).

So I think this announcement might set a pattern for a lot of other organizations; at least I hope it does.

But there’s a downside to this announcement as well. It can be summarized in three points:

1. Omar Santos is chairman of the CSAF committee. Pete Allor of Red Hat is one of the most active participants on that committee. Oracle has also started putting out CSAF VEX documents, and they also have a long history with CSAF and its predecessor format CVRF. That’s great.

However, CSAF is an extremely complex format, as a quick skim through the 100-page-plus[i] draft specification will quickly reveal. Even more importantly, figuring out how to fill out the mandatory “product tree” and “branches” fields requires close study of about 15 pages of very dense text in the spec, and even then there are so many options available that a supplier without long CVRF/CSAF experience is unlikely to devote the time needed to master them. After all, nobody has a gun to their head requiring them to produce VEX documents at all, let alone CSAF VEXes. Why go through the aggravation?

Aha, but you don’t need to know all of CSAF to create a VEX in that format. What about the VEX profile? That limits the fields you need to put in the document (which is good), but it omits mention of the need for the “baseline” elements required in every CSAF document, including product tree and branches.

2. Then, what about a tool to create a VEX document in CSAF? There are no VEX-specific tools; in fact, I know of no tools that create any type of CSAF document – meaning tools that ask the supplier questions about what they want to include (products, vulnerabilities, etc.) and then put the answers into a CSAF document, requiring no direct CSAF knowledge on the VEX author’s side. The only CSAF tool I know of is Secvisogram. This is a good editor, but it requires knowledge of the CSAF format. Again, it will be quite hard for an organization to come up to speed on creating CSAF documents of any type, unless they have somebody who can devote a substantial block of time to learning the format and creating documents with it – using Secvisogram, faute de mieux.

3. But let’s say a supplier has created a CSAF VEX document. What are their customers going to do with it? Are there low-cost, commercially supported tools available that will ingest SBOMs and CSAF VEX documents, then provide the user with a continually updated list of exploitable component vulnerabilities in a product/version they use, which has a recent SBOM? No, there aren’t. How about open source tools that do that? Again, no.

So, how about any tools that utilize CSAF documents for security analysis purposes? Here is the official list of CSAF tools. Do you see anything that meets that criterion?[ii] I don’t.

In fact, I’ve asked several large suppliers that produce CSAF documents (whether VEX or just vulnerability advisories – which is the main use case for CSAF, of course) what their users do with them. They honestly don’t know – except that some customers may have developed their own tooling for that purpose.

So, here are the two big problems with CSAF VEX (these each have several parts, and a small number of those parts also apply to the CycloneDX VEX format. However, I know it is many times easier both to create and to utilize CDX VEX documents than CSAF VEX documents):

1.      Any software or device supplier that wants to learn how to create VEX documents in CSAF needs to be prepared to spend a lot of time doing so. I doubt many suppliers will be interested in doing that.

2.      Those suppliers that distribute VEX documents would be better off developing a PDF format and sending PDF VEXes out by email (or just putting them on a portal).  I know one very large device maker that is doing that with SBOMs: They produce both JSON-based and PDF-based SPDX SBOMs and make them both available on their portal. They report much better uptake for the PDF files.

So – as often happens in this blog – we have run into one of the “showstopper” problems for SBOMs and VEX: the lack of tooling, especially the lack of low-cost, commercially supported tools that are essential for widespread use of both SBOMs and VEX documents by organizations whose primary business isn’t software development (i.e., at least 98% of organizations, public and private, worldwide).

Can this problem be solved? It can be partially mitigated, but substantial mitigation (I won’t even use the word “full”) also requires addressing the other showstopper problems, especially the naming problem. One partial mitigation would be a “playbook” for CSAF VEX, like the “playbooks” that were developed during the NTIA SBOM initiative (here are the two most important of these: for producers and for consumers).

I know that a draft playbook for CycloneDX VEX has already been developed. However, I don’t know of any current attempt to develop a playbook for CSAF VEX. One is sorely needed.

I told you this post would be depressing, but you didn’t believe me!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I’m guessing at the page count, since the spec doesn't have page numbers. 

[ii] I believe there are a few vulnerability scanning tools that can ingest CSAF files and use the information in them to remove non-exploitable vulnerabilities from scanning results. This is certainly a useful result, but it has nothing to do with SBOMs or product components.

Thursday, September 7, 2023

Redefining VEX Part I

This is the first in a two-part series about VEX. Currently, VEX is going nowhere fast. I now know of just one organization – Cisco - that has even announced they’re producing VEXes for their products, although the only examples they linked to in their announcement were the same CSAF examples that appeared in CISA’s VEX Use Cases document, published in April 2022 (which also had CycloneDX VEX examples).

Another organization that was producing documents they called “VEX” – Red Hat – has changed their mind about whether those are really VEXes, and has changed the documents back to their original name: security advisories. They explained this to the SBOM Forum in a presentation more than a month ago, but they also stated they will soon be putting out a new type of document that they are calling a VEX (and since the only published VEX definition – near the beginning of the same Use Cases document – essentially just says a VEX makes statements about vulnerabilities, it doesn’t make sense now to even debate what is or isn’t a VEX. If the author calls it a VEX, it’s a VEX. End of story).

I want to point out that I don’t blame Red Hat for any of this, since they have always been a leader in producing machine readable vulnerability notifications (you’ll notice that the notifications on the page I referenced above go back to 2001), as well as doing lots of work of benefit to the software vulnerability management community. For instance, they sit on the boards of both the OASIS CSAF group and CVE.org.

Moreover, they are one of a handful of Root CVE Numbering Authorities (CNA) and are available to help open source software projects report their own vulnerabilities, as well as proactively reporting vulnerabilities in their own products. Pete Allor, Director of Product Security for Red Hat (and an active member of the SBOM Forum), told me once that Red Hat almost considers this their duty, because of their unique relationship with the open source community.

But other than Red Hat and Cisco, I know of no organization that has made any public announcement about producing VEX documents, and I have seen no other VEX documents from any source, except for a few demonstration documents.[i]

One of the biggest problems with VEX is that proving that a vulnerability has “not affected” status (which is, of course, the main reason VEX was developed) often requires proving a negative. For example, if the supplier believes there is no “code path” that would allow an attacker to reach the vulnerable code (in order to exploit it), “proving” the vulnerability isn’t exploitable in the product would literally require showing that none of the perhaps thousands of paths an attacker might utilize leads to that code.

However, the five “status justifications” offer a way out of this problem. Whenever a vulnerability’s status in a VEX is “not affected”, a status justification needs to be provided to support that status. Note that the equivalents in CycloneDX VEX are simply called “justifications”, and there are nine of them, vs. five in CSAF. The CDX justifications are a superset of the CSAF status justifications and were developed almost two years before the CSAF ones.

While three of the status justifications suffer from the need to prove a negative (they are “Vulnerable_code_cannot_be_controlled_by_adversary”, “Vulnerable_code_not_in_execute_path”, and “Inline_mitigations_already_exist”), two of them do not. They are “Component_not_present” (meaning that the vulnerable component that appeared in the SBOM isn’t actually in the product) and “Vulnerable_code_not_present” (meaning that, even though the component is present in the product, in this case the vulnerable code has been removed from the component).

Neither of the latter two status justifications requires proving a negative. If someone is skeptical about either justification, it won’t be hard for the supplier to prove it is valid. If a supplier asserts that either one of these is the reason why they used “not affected” status, they aren’t likely to be questioned.

Therefore, a way to “tighten up” the use of “not affected” in VEX would be to require that it only be used in a case where the reason for the “not affected” designation is either “Component_not_present” or “Vulnerable_code_not_present”. In all other cases, every vulnerability should be given the status of “affected”.
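This “tightened” rule is simple enough to sketch in code. The snippet below is a minimal illustration, not real tooling: the flat statement structure and field names (“status”, “justification”) are simplified stand-ins for the actual CSAF schema, which nests this information differently, though the two justification labels themselves come from the CSAF spec.

```python
# The two CSAF status justifications that don't require proving a
# negative, and so are easy for a supplier to substantiate if questioned.
VERIFIABLE_JUSTIFICATIONS = {
    "component_not_present",
    "vulnerable_code_not_present",
}

def effective_status(statement: dict) -> str:
    """Apply the tightened rule: keep "not_affected" only when the
    justification is one of the two verifiable ones; otherwise treat
    the vulnerability as "affected"."""
    status = statement.get("status", "unknown")
    if status != "not_affected":
        return status
    if statement.get("justification") in VERIFIABLE_JUSTIFICATIONS:
        return "not_affected"
    return "affected"
```

Under this rule, a “not affected” statement justified by, say, “Inline_mitigations_already_exist” would simply be treated as “affected” by the consumer.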

Of course, there may be some suppliers who aren’t bothered by the fact that three of the status justifications require proving a negative – plus some of their customers may be quite happy to accept all five status justifications as valid.

In this case, there wouldn’t need to be any change from what is now assumed in VEX: that all five of the status justifications are valid. In fact, when tooling becomes available to utilize VEX information, this could be a simple software option. Users (e.g., hospitals, electric utilities, and military contractors) that believe they require a higher assurance level could choose the first option described, in which only two status justifications are valid; users that are comfortable with the “looser” standard could accept all five status justifications, which will probably be the default option when and if VEX “consumption” tooling appears.

How about the nine “justifications” in CycloneDX? They are:

  • code_not_present = the code has been removed or tree-shaked.
  • code_not_reachable = the vulnerable code is not invoked at runtime.
  • requires_configuration = exploitability requires a configurable option to be set/unset.
  • requires_dependency = exploitability requires a dependency that is not present.
  • requires_environment = exploitability requires a certain environment which is not present.
  • protected_by_compiler = exploitability requires a compiler flag to be set/unset.
  • protected_at_runtime = exploits are prevented at runtime.
  • protected_at_perimeter = attacks are blocked at physical, logical, or network perimeter.
  • protected_by_mitigating_control = preventative measures have been implemented that reduce the likelihood and/or impact of the vulnerability.

The CDX equivalent of “component not present” is “requires dependency”. The CDX equivalent of “vulnerable code not present” is “code not present”. High-assurance users might want to require that only these two justifications be allowed to accompany a “not affected” status, while the other seven justifications would require “affected” status.

How should these two options be positioned – i.e., the high assurance and “normal assurance” VEX options? When and if end user VEX “consumption” tooling appears, it should offer both options, although in general it would be better to make the “normal” configuration (i.e., all five status justifications are acceptable for “not affected” status) the default, with the “high assurance” option (i.e., only the “component not present” and “vulnerable code not present” justifications are acceptable for “not affected” status) available for organizations that believe they need that.
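A consumption tool offering both modes might look something like the following sketch. To be clear, this is hypothetical: no such tooling exists yet, and the function and parameter names are mine. The justification labels are taken from the CSAF and CycloneDX specs, but the flat call signature is a simplification of how real documents are structured.

```python
# Justifications that are verifiable without proving a negative,
# for each of the two formats discussed above.
STRICT_CSAF = {"component_not_present", "vulnerable_code_not_present"}
STRICT_CDX = {"code_not_present", "requires_dependency"}

def interpret(status, justification=None, fmt="csaf", high_assurance=False):
    """Return the status a consumer should act on.

    In the default ("normal assurance") mode, every supplier-asserted
    "not_affected" is accepted as-is. In high-assurance mode, only the
    verifiable justifications are accepted; everything else is
    downgraded to "affected"."""
    if status != "not_affected" or not high_assurance:
        return status
    allowed = STRICT_CSAF if fmt == "csaf" else STRICT_CDX
    return "not_affected" if justification in allowed else "affected"
```

The key design point is that the supplier’s document never changes; the same “not affected” statement is interpreted differently depending on which mode the end user has selected.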

Perhaps the best aspect of this tooling flexibility is that it places no burden on the supplier. As long as they provide one of the standard five CSAF status justifications (or one of the nine CDX justifications) whenever the status of a vulnerability is listed as “not affected”, the end user will make the decision, through their tooling choice, of how it will be interpreted.

I feel that uncertainty regarding questions like the above is inhibiting production of VEX documents by suppliers; what I’m suggesting here may somewhat alleviate that uncertainty, although it is hardly the only problem with VEX. In the exciting Part II of this series, I’ll tell you about another important development affecting VEX, which may also help to make VEX more palatable to suppliers.


[i] You may have heard of OpenVEX, which is sometimes pointed to as a third VEX format. I think it’s a good format, but it’s for a different use case than the one VEX was originally developed for: clarifying for software users which component vulnerabilities they learn about through an SBOM should not be a cause for concern.

OpenVEX addresses the use case of software scanners that throw off false positive vulnerability findings; it allows a supplier to list – in a machine readable format – typical false positive results for their products. A scanner vendor can consume these documents regularly and thus not show those false positives in their scan results. That’s a great use case, and I imagine OpenVEX does it well, but it has nothing to do with SBOMs.

OpenVEX can be used for the “original” VEX purpose as well, but it’s limited for that because it doesn’t treat version numbers independently of the product names. Instead, it creates synthetic “products” that combine the product name and the version number. This makes it impossible to create version ranges, which are essential for the original VEX use case (I explained that limitation in this post). But since, as I just said, there’s no specific definition of what a VEX is, I’m fine if OpenVEX keeps using the name for their product.

 

Saturday, September 2, 2023

A power grid attack has caused the first "homicide by power outage"

An article in today’s Washington Post led off with this sentence: “An elderly North Carolina woman’s death when her oxygen machine failed during a power outage has been ruled a homicide by the state medical examiner and blamed on what authorities said was intentional gunfire that hit power substations in her area.”

The “intentional gunfire” in question occurred during shooting attacks on two power distribution substations in Moore County, NC on December 3, 2022. They left over 40,000 people without power for multiple days (which itself was a problem, since there’s so much redundancy built into the power grid that any outage should be remediated within hours, not days. At least, that’s the idea…). I wrote about these attacks in this post on Dec. 5 and this one on Dec. 7 (a date that is remembered for the far more serious attack on Pearl Harbor). These attacks were imitated soon afterwards in Oregon and Washington state, although no outage was caused there.

In the second post, I included this sentence, “(The outage) certainly caused damage, and people were injured in car crashes…. We may hear later about people on oxygen at home, etc. that were victims as well.” I don’t claim prescience, but that seems to be exactly what happened in this case. An elderly woman’s oxygen compression machine, which helped her breathe at night, didn’t work because of the outage; she died as a result. I agree that’s murder, given that the outage was human-caused.

Of course, nobody has been charged with murder because the police have so far not found whoever shot up the substations – and of course, they may never find them. But at least this serves as a warning to whoever thinks that causing a power outage is a way to make a political point. This was the reigning theory in December for the attacks in all three states.

These attacks were coordinated. People get drunk and shoot up substations all the time, but they don’t make a point of shooting up multiple substations at once – and they don’t usually know exactly where to shoot in order to cause an outage. These people seemed to know that.

I think that this type of attack – a physical attack on distribution substations – is more likely to cause an outage than any cyberattack, or even than a physical attack on generation facilities or transmission substations (which are much better protected now, thanks to the Metcalf attack and the subsequent development and implementation of NERC CIP-014). Of course, an attack on distribution would almost by definition never cause a cascading outage like the August 2003 Northeast blackout (which wasn’t caused by a deliberate attack. Why bother to attack if carelessness and lax procedures can produce the same effect?).

In any case, I’m glad the medical examiner ruled the woman’s death a homicide. That alone may prevent some future attacks.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.