Friday, January 21, 2022

Let’s not replicate NERC CIP’s problems for pipelines.


I’m thinking of renaming this “Mariam Baksh’s Blog”, since once again the NextGov reporter has written a really interesting story on cyber goings-on in Washington (my last post was based on one of her articles, and I’ve written a few others before that). Mariam isn’t the kind of reporter whose goal is to write the best story about something that everybody else is also writing about. Instead, she is constantly digging up interesting stories in places where I would never have dreamed to look.

The story this time is an outgrowth of the Colonial Pipeline attack. Once the TSA issued a cyber directive to the pipeline industry a few months after the attack, most reporters assumed the problems were solved. After all, you solve cyber problems with regulations, right?

I might have assumed the same thing, except a kind person sent me a redacted[i] copy of the directive, which I wrote about in this post. What did I think of the directive? Well, I thought the font it was written in was attractive, but it was all downhill from there. However, I have found a great use for the directive: in February, I’ll be speaking on cyber regulation at a seminar at Case Western Reserve University. At first, I was going to talk about lessons learned from various compliance regimes: NERC CIP (which I know most about), PCI, HIPAA, CMMC, etc. But after reviewing my post on the TSA directive, I realized I’d hit the gold mine with that one: just about everything the TSA could have done wrong, they did. They should stick to telling people to take their shoes off in airports. It’s the perfect pedagogical example of how not to develop cybersecurity regulations!

And maybe they will, since Mariam’s article points out that Rep. Bobby Rush[ii] of Illinois has introduced legislation that would take the job of cyber regulation for pipelines away from the TSA and vest it in a new “Energy Product Reliability Organization”. This would be modeled on the North American Electric Reliability Corporation (NERC), which develops and audits reliability standards for the North American electric power industry, under the supervision of the Federal Energy Regulatory Commission (FERC). NERC is referred to as the “Electric Reliability Organization” in the Energy Policy Act of 2005, which set up this unusual regulatory structure.

The best known of NERC’s standards are the 12 cybersecurity standards in the CIP family. And clearly, these standards have a good reputation on Capitol Hill. This was reinforced by FERC Chairman Richard Glick’s testimony at a hearing on Wednesday, when he was asked by Rep. Frank Pallone, “…do you think that the industry led stakeholder process established by Chairman Rush's legislation would likewise be a successful mechanism for protecting the reliability of the oil and gas infrastructure?” Glick replied, “I believe so. The electricity model has worked very well … and I believe a similar model will work with pipeline reliability.”

I don’t deny that the NERC CIP standards have made the North American electric grid much more secure than it would be without the standards. On the other hand, there are some serious problems with the CIP compliance regime, which I wouldn’t want to see replicated for the pipeline industry:

1.      The standards should be risk-based, like CIP-013, CIP-012, CIP-003 R2, CIP-010 R4 and CIP-011 R1. This means they should not prescribe particular actions. Instead, they should require the entity to develop a plan to manage the risks posed by a certain set of threats - e.g. supply chain cyber threats or threats due to software vulnerabilities. Then the entity needs to implement that plan. In drawing up the plan, it should be up to the entity to decide the best way to manage the risks, but there will be a single set of approved guidelines for what should be in the plan (something that is missing with CIP-013). Prescriptive requirements, like CIP-007 R2 and CIP-010 R1, are tremendously expensive to comply with, relative to the risk that is mitigated. Explicitly risk-based requirements are much more efficient, and are probably more effective, since the entity doesn’t have to spend so much money and time on activities that do very little to improve security.

2.      Auditing should be based on how well the plan followed the guidelines. Of course, this isn’t the up-or-down, black-or-white criterion that some people (including some NERC CIP auditors, although I believe that sort of thinking is disappearing, thank goodness) think should be the basis for all auditing. If an entity has missed something in the guidelines, but it seems to be an honest mistake, the auditor should work with them to correct the problem (in fact, the auditor should work with them in advance to make sure the plan is good to begin with. This is currently not officially allowed under NERC, due to the supposed risk to “auditor independence”, a term that’s found nowhere in the NERC Rules of Procedure or GAGAS).

3.      In other words, auditors should be partners. They actually are partners nowadays, but when they act that way, they’re officially violating the rules. And note that they’ll still never write down any compliance advice they may give, and they’ll always say it’s their personal opinion – meaning you can’t count on the next auditor saying the same thing.

4.      Auditing should also be based on how well the plan was implemented. This is where I think auditing actually should be black & white. Once the entity has created the plan, they need to follow it. If they decide something needs to be changed, they should change the plan and document why they made the change. But they shouldn’t just deviate from the plan as it’s currently written. (As far as I know, this is how CIP-013 is audited: the entity can make a change to the plan whenever they want, but they need to document the change, and then follow it.)

5.      Identification of new risks to address in the standards needs to be divorced from the standards development process. When a new area of risk is identified as important, entities should immediately be required to develop a plan to mitigate those risks and follow that plan – this shouldn’t wait literally years for a new standard to be developed and implemented.

6.      NERC standards development proceeds in geologic time – and because of that, a number of important areas of cyber risk have never been addressed by CIP, since nobody wants to go through the process of developing a new standard. For example, where are the CIP standards that address ransomware, phishing, and APTs? These risks have been around for at least a decade, yet a new standard has never even been proposed for any of them, let alone written. And how long does it take for a new standard to appear after the risk first appears? The risk caused by use of “visiting” laptops on corporate networks has been well known since the late 1990s. When did a requirement for “Transient Cyber Assets” take effect? 2017.

7.      There needs to be a representative body – with representatives from industry stakeholders, the regulators, and perhaps even Congress and the general public – that meets maybe twice a year to identify important new risks that have arisen, as well as to identify risks that are no longer serious. If the body decides a new risk needs to be addressed, a new standard should be created, the mechanics of which would be exactly the same as in the other standards. Only the name of the risk and the guidelines for the plan would differ from one standard to another. So no new requirements need to be developed.

8.      The unit of compliance for the standards shouldn’t be a device (physical or virtual) – specifically, a Cyber Asset, as it is in the current CIP standards. Instead, it should be a system. As I have pointed out several times before, putting BES Cyber Systems in the cloud (e.g. outsourced SCADA) is now impossible, since doing so would put the entity in violation of twenty or so CIP requirements – simply because they would never be able to furnish the evidence required for compliance. Basing the standards on systems (and a BCS is defined just as an aggregation of devices, nothing more) would allow cloud-based systems, containers, and more to be in scope for CIP.

So I wish the new Energy Product Reliability Organization good luck. I think having stakeholder involvement is crucial to having successful cyber standards for critical infrastructure. But don’t spoil it all by not taking account of the lessons learned from the NERC CIP experience.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Making the directive secret was ridiculous, and almost by itself guaranteed it would fail, as I discussed in my post on the directive. But just for good measure (and to make sure that there was no possibility at all that the directive would succeed), the TSA violated just about every other principle of good cybersecurity standards development. They're nothing if not thorough!

[ii] Rush just recently announced his retirement, after 30 years representing Chicago’s 1st Congressional District in the House. He served well and happens to be the only person who ever beat Barack Obama in an election (Obama challenged him for his House seat in 2000). He was a co-founder (with Fred Hampton) of the Illinois branch of the Black Panther Party in 1968.

Tuesday, January 18, 2022

Gee, thanks Vlad! You’re such a swell guy…


Mariam Baksh of NextGov published – as usual – a very interesting article on Jan. 16, which begins with this paragraph:

A senior administration official put questionable timing aside and commended the Kremlin’s arrest Friday of individuals Russian officials say comprise the notorious REvil ransomware group, which U.S. officials have attributed to attacks on critical infrastructure.  

“Questionable timing” indeed! Putin is poised with a knife to Ukraine’s throat and threatening to send troops to Venezuela and Cuba to menace the US – so of course this is a great time to thank him for his noble efforts against REvil.

Let me suggest that the real question is this: Seven months ago – after the Kaseya attacks, which were instigated by REvil – Biden said (quoting from WaPo) “Putin must put an immediate stop to this activity, or Biden’s administration will take ‘any necessary action’ to stop it.” Why is the administration now taking credit for the fact that Putin finally acted, when Putin’s people certainly knew all along who needed to be arrested (because the Russian intelligence services collaborate with those people all the time, and the US intelligence services had given them a list of names)?

And why, after calling for an immediate stop to “this activity” (which, in case you hadn’t noticed, didn’t bring Russian ransomware activity to a hard stop last July), didn’t Biden keep the pressure on Putin all this time? And given that Putin obviously didn’t pay any attention at all to Biden’s order last July, why doesn’t this “senior administration official” even think, “Hey, the fact that he’s finally arresting the REvil guys now is probably not because he’s been listening to us. It’s because he wants to look as good as he can in other areas, while he’s issuing a new ultimatum to Biden to abandon Ukraine to him”?

Perhaps the senior official is Secretary of State Blinken, who in September asserted that “no one in the U.S. government expected the Afghan government to fall as quickly as it did.” Of course, there was no way the administration could possibly have known that the Taliban wouldn’t keep their promise of a cease-fire. After all, the Taliban are honorable men. Why Blinken wasn’t fired after that debacle would be a mystery to me, were it not clear that there are lots of others in the administration who also think their job is to claim credit for successes, rather than actually be successful.

Of course, the senior administration official’s comments really aren’t about Putin at all. They’re a vain attempt to get at least some good news out about the administration, since lately all the news has been bad. But frankly, when this clown – whoever he or she is – tries to spin it as a win that Putin completely stiffed Biden for seven months and is only acting now to divert attention from a much bigger transgression he wants to commit, it just shows how weak and clueless they are. And unfortunately, that’s not news.

A much better thing to say – and not through an anonymous spokesperson – would have been “We wish to ‘congratulate’ Mr. Putin on finally taking an itty bitty step to combat one of the many evils Russia has inflicted on the world in recent years. Now here are some more steps Mr. Putin must take, and the consequences that will follow if he doesn’t (BTW, this time we mean it about the consequences):

1.      Adequately compensate the families of victims and governments for their losses in the shooting down of flight MH17 in 2014 or face a ban on all Russian aircraft in international airspace.

2.      Compensate Maersk and the other companies worldwide that lost an estimated $10 billion in the NotPetya attack, or risk being cut off from the SWIFT international funds transfer system.

3.      Compensate the victims (especially government agencies) of the SolarWinds and Kaseya attacks and arrest the perpetrators of both of these (who are either Russian government employees or well known to them) or face an order to US financial institutions and citizens to divest themselves of their Russian bonds and not own them in the future.

And speaking of Russian attacks, here’s another idea: Why don’t we investigate the assertions made by the CIA and FBI in the last “annual” Worldwide Threat Assessment in 2019, to the effect that the Russians had penetrated the US power grid and could cause outages at any time? There’s never even been an investigation of those statements. And there haven’t been any more WTAs since that one.

I guess that’s one way to solve the problem of bad press.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, January 16, 2022

A great opportunity


Do you remember the NERC Supply Chain Working Group (SCWG)? We’ve been in operation since 2018 and are still going strong. Our current project is to update the set of supply chain guidelines that we drew up in 2019. Drafting them was a great experience, since we had a lot of people participating on the calls. I anticipate that re-drafting them (or perhaps adding to them, since my guess is most of them don’t need to be amended very much) will be equally interesting.

One important goal of the guidelines is to keep them short enough to be read in maybe 20 minutes. The original guidelines were all 3-5 pages long, and I anticipate the new ones will be 5-8 pages. I’ll point out something you may already know: It’s a lot harder to write a short paper than a long one. That may be why the drafting meetings were so interesting – when every word has to count, you need to decide what has to be said and say it as economically as possible (which of course is why blog posts sometimes go on and on – since the blogger knows there’s no limit imposed and he doesn’t have to be careful about his words. Of course, I don’t personally know any bloggers like that, but I’m told they’re out there).

I ended up leading the drafting of two of those papers, and I’ll be leading it for the new versions of both papers (unless someone else would like to take the lead on one of them and I’ll just participate; that would be fine with me). We need to get the drafts done by mid-March, I believe, so we won’t have many meetings to draft them. I’ll hold 3-4 meetings for each paper, alternating weeks, so there will be a meeting for one paper the first week and for the other the second, etc. Of course, you can come to as many meetings as you want – although of course there won’t be any recordings made.

The two papers for which I’ll lead the drafting are both found here, along with some slides from presentations we did in Orlando in June 2019. There are also recordings of webinars we did (one for each paper) in 2020, which were well attended (since people weren’t going to a lot of in-person meetings in April through June of 2020). My two papers are Cyber Security Risk Management Lifecycle (which should really be called Supply Chain Cyber Risk Management Lifecycle – we’re not trying to tackle the entire field of cybersecurity in five pages!) and Vendor Risk Management Lifecycle.

You’re welcome to attend any or all of the meetings; I’m not going to keep attendance. You don’t have to be a member of the SCWG, although we’ll probably enroll you anyway. This will entitle you to all the benefits and emoluments of membership - priceless. We’re even waiving the normal $1,000 signup fee…😊

Also, even though these meetings are mostly populated with electric power industry types, I can assure you there’s nothing we’ll be talking about that’s specific to the power industry. So anyone is welcome to participate, both suppliers and end users. Note that, even though the papers are NERC publications, they aren't compliance guidance for CIP-013; they're simply best practices for supply chain cyber risk management.

We’ve put out Doodle polls to find the best time for both series of meetings. The poll for the Cyber Risk Management meetings is here and for Vendor Risk Management is here.[i]

I hate to pressure you, but for the Cyber Risk Management meetings, we’ll have to decide the time by tomorrow afternoon, since we want to have the first meeting this week. So if you’re interested in that, please sign up asap (note we won’t meet on Monday the 17th, although if Monday is the best for someone, we could later move the meeting to Monday if the rest agree). For Vendor Risk Management, we’ll meet next week (the week of the 24th), so we’ll wait a few days before we set that time. We’ll send everybody who’s participated in the poll an invitation for the series.

I hope you can help us out!

Need CIP-013 compliance help, either from the NERC entity or the vendor side? I’ve worked with a number of electric utilities on CIP-013 compliance, and I’m currently working with two vendors to the industry. Drop me an email and we can talk!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] If you want to sign up for one of the three other papers that we’re going to revise this quarter (the others will come later) – which are Provenance, Open Source, and Secure Equipment Delivery – drop an email to Tom Hofstetter of NERC at tom.hofstetter@nerc.net and he’ll send you the links for those Doodle polls.

Thursday, January 13, 2022

The CycloneDX SBOM format, now with VEX appeal!


On Sunday, I put up a post in which I included this passage:

…CycloneDX isn’t standing still, either. That project will soon announce (I think imminently, but definitely by the end of January) its new v1.4, which will significantly enhance the vulnerability management capabilities in v1.3 – and I thought those were already pretty good. While I haven’t seen any specs for v1.4 yet, I know that one addition will be a VEX-like capability.

Even though I used the word “imminently”, I was still surprised when I received an email announcement on Wednesday from Patrick Dwyer, co-leader of the OWASP CycloneDX SBOM format project (with Steve Springett, who has the dubious distinction of being mentioned in four of my last five posts, including this one), announcing that CDX 1.4 is now available.

While Patrick’s email listed four or five new capabilities in CDX 1.4, the one he promoted the most was the fact that it includes a VEX capability, which is of course something that I’ve written a lot about. Patrick also mentioned the new SaaSBOM capability, which I think begins to address the biggest new challenge for the CISA SBOM community – properly identifying components of cloud software, as well as cloud-based components of locally-installed software. (When the CISA-sponsored meetings start up in February, SaaS – i.e. cloud-based – components will be the entire focus of one of the four working groups, although there are old challenges that still need to be addressed as well; hopefully, they will be addressed by the other working groups. BTW, anyone is welcome to join those working groups. Drop an email to sbom@cisa.dhs.gov.)

But since VEX is something I know about, and since Patrick made it the big focus of his announcement, I’d like to ask – and answer – the question, “Will the CDX 1.4 implementation of the VEX concept be very useful to organizations trying to manage risks that arise from components found in software they use?”

First I want to describe how I view the VEX concept:

1.      The idea for VEX arose a few years ago in discussions among participants in the NTIA Software Component Transparency Initiative, which is now under CISA. The participants realized that the majority – and perhaps the great majority, like 90% or more – of vulnerabilities that can be identified in software components (i.e. by looking up the component’s CPE name in the NVD) are in fact not exploitable in the product itself. In other words, if an attacker looks up a particular software product’s component vulnerabilities in the NVD and then tries to hack into the product by using them one-by-one, he’ll probably succeed at most one time in ten.

2.      That’s great news, right? Unfortunately, this is a good news/bad news situation, with the latter being more important. The flip side of the fact that at least 90% of software component vulnerabilities aren’t exploitable in the product itself is that, when the customer of a product discovers a component vulnerability in the NVD and contacts their supplier to ask when it will be patched, in at least nine cases out of ten, they will hear that no patch is needed, since the vulnerability isn’t exploitable in the product itself, often because of how the vulnerable component was implemented in the product. This will be a source of much frustration to customers, and it will cause a lot of unneeded expense for suppliers, since they’ll have to tie up expensive developers explaining to hundreds of callers every day why they don’t have to worry about a particular vulnerability, even though it’s technically present in one component of their product.

3.      Because of this, many large suppliers (including at least a few who are actively participating in the SBOM Initiative) are holding back from releasing SBOMs to their customers, for fear their support lines will be overwhelmed with literally thousands of “false positive” calls like this. In my opinion, in order for those suppliers to start distributing SBOMs, they will need to be assured that there’s some way for them to get the word out to users, when a component vulnerability isn’t exploitable. One large supplier said in a VEX meeting that they thought they would get 2,000 false positive calls to their help lines, if they started distributing SBOMs without also distributing VEXes.

4.      The solution that was decided on a few years ago (and work started on the solution in the summer of 2020) was a document that would state that e.g. CVE-2022-12345 is not exploitable in product X version 2.4. Like the SBOM itself, the VEX would be machine-readable, and it would be distributed through the same channels as the SBOM – since in many if not most organizations, the staff members who need to see SBOMs will also need to see VEXes.

However, I use the word “see” loosely. SBOMs, and especially VEXes, were never meant to be human-readable, although even the explicitly machine-readable versions can be read by humans, with some understanding of what they say. But, given that some software products can contain literally thousands of components, there’s simply no way that machine readability can ever be optional. No non-machine-readable format will ever scale.

Now let’s discuss use cases for VEX, although we need to start with SBOMs. By far the largest use case for SBOMs is vulnerability management. Specifically, the customer wants to learn, for every component listed in an SBOM, what vulnerabilities are currently listed in the NVD for it. To do this, the customer needs a tool like Dependency Track, which I discussed in this post. D-T ingests a CycloneDX SBOM and identifies all vulnerabilities (CVEs) listed in the NVD for each component of the product. Moreover, it will keep updating that list as often, and for as long a period, as desired by the user.
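For concreteness, here’s a minimal sketch of the kind of CycloneDX SBOM a tool like D-T ingests, built as a Python dict and serialized to JSON. The product, component, and versions are invented for illustration; a real SBOM would of course list hundreds or thousands of components.

```python
# A minimal, hypothetical CycloneDX SBOM for "Product A" v2.4. Field names
# follow the CycloneDX JSON spec; all product/component details are made up.
import json
import uuid

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",
    "version": 1,
    "metadata": {
        "component": {"type": "application", "name": "product-a", "version": "2.4"}
    },
    "components": [
        {
            "type": "library",
            "name": "log4j-core",
            "version": "2.17.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
        }
    ],
}
print(json.dumps(sbom, indent=2))
```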

That’s good, but the fact is that at least 90% of the vulnerabilities identified by D-T won’t actually be exploitable in the product itself. That’s where VEXes come in, but D-T doesn’t today ingest VEXes; and since the VEX format is about three months away from being finalized (even though some of us had thought it was finalized last summer), neither D-T nor any other tool can ingest it now. In other words, with Dependency Track or similar tools (including software composition analysis tools), you can see a list of component vulnerabilities applicable to a software product. However, you can’t see the much smaller, but much more useful, list of exploitable component vulnerabilities in the product.

Obviously, D-T won’t be able to ingest VEXes until the format is finalized, and I have no idea if VEX ingestion is planned or not.[i] But if we did have a tool that a) ingested SBOMs and VEXes, and b) output a list of exploitable component vulnerabilities in the product (hey, a guy can dream, can’t he?), how would it work? The important point about that tool is it needs to create a database. Even if the tool just analyzes a single SBOM and identifies vulnerabilities applicable to components found in that SBOM, the fact is that VEXes that apply to that SBOM can appear at any time in the future. In fact, it’s very likely that, when the SBOM for a new product version is first released, there will be few VEXes that are applicable to the specific components that are called out in the SBOM. An SBOM is released at a point in time, whereas VEXes that apply to that SBOM (and sometimes to other SBOMs as well, usually for different versions of the same product) will dribble in over time.

This means that the tool will need to maintain a database that lists different versions of a product, each of which should have its own SBOM, since any change at all in the software requires a new SBOM. For each of those versions, the database should list the components (including version numbers) found in the SBOM, as well as any vulnerabilities found in the NVD for those components. Note that, even though the component vulnerabilities are of interest due to the fact that they apply to a component of the product, the vulnerabilities are “in” the product itself – since there’s in general no way that an attacker could attack just a component, not the whole product. Once the vulnerability has been identified in the NVD and linked to the product that contains the component, there’s no further reason to link the vulnerability with the component.

In other words, while it might be interesting to know which component is “responsible” for a particular vulnerability being present in a product, the only information that’s essential is the list of exploitable vulnerabilities in the product itself, regardless of which component they might apply to (or even whether they apply to any component at all. See items 2-7 in the list of VEX use cases below, none of which even mentions a component).

As I’ve described it so far, the database just lists vulnerabilities applicable to a product, not the much smaller (but much more useful) set of exploitable vulnerabilities. The latter set, a subset of the full vulnerability set, is arrived at by applying information retrieved from VEX documents to the lists. That is, if a VEX says that CVE-2022-12345 isn’t exploitable in Product A, then the tool will remove that CVE from the list. Even better, the tool will flag that CVE as not exploitable and put it in a separate list, but it won’t delete it from the database altogether. I’ll discuss why this is a good idea in a future post.
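Here’s a minimal sketch of that “flag, don’t delete” logic in Python. The data structures are invented for illustration; a real tool would build them from ingested SBOMs and VEX documents rather than from hardcoded values.

```python
# Partition the component CVEs found in the NVD for a product into
# "exploitable" and "not exploitable" lists, based on VEX statements.
# Nothing is deleted; the not-exploitable CVEs are just flagged.
def apply_vex(nvd_cves: set[str], vex_statements: dict[str, str]):
    exploitable, not_exploitable = [], []
    for cve in sorted(nvd_cves):
        # "not_affected" = the supplier says this CVE isn't exploitable
        # in this product/version.
        if vex_statements.get(cve) == "not_affected":
            not_exploitable.append(cve)
        else:
            exploitable.append(cve)  # exploitable until stated otherwise
    return exploitable, not_exploitable

# Example: three component CVEs found in the NVD; one VEX statement.
exploitable, flagged = apply_vex(
    {"CVE-2022-12345", "CVE-2022-23456", "CVE-2022-34567"},
    {"CVE-2022-12345": "not_affected"},
)
print(exploitable)  # ['CVE-2022-23456', 'CVE-2022-34567']
print(flagged)      # ['CVE-2022-12345']
```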

Thus, the database will maintain, for each version of each product in the database, a list of exploitable component vulnerabilities; that list might change every day (and for a product like the AWS client, which has 5600 components, it might change hourly). In theory, the user will be able to look in the database at any time to find the most up-to-date list of exploitable component vulnerabilities, for each product and version that they’ve stored in the database (which hopefully will correspond to every product and version currently in use in their environment – although that might not be possible for a while).

Now, let’s look at some use cases for VEXes. I’ve been participating in the NTIA/CISA VEX meetings since the second one in September 2020 and I’ve heard lots of discussion of use cases, so I’m not inventing any of these:

1.      The simplest use case is: A VEX states that CVE-2022-12345 is not exploitable in Product A version 2.4, even though it’s listed in the NVD as a vulnerability of component X, and X is a component of A v2.4.[ii] (A sketch of a document making this assertion appears just after this list.)

2.      A VEX states that CVE-2022-12345 is not exploitable in versions 2.2 through 2.4 of X.

3.      A VEX states that CVE-2022-12345 is not exploitable in any current version of a family of products made by a particular manufacturer (e.g. routers made by Cisco™).

4.      A VEX states that CVE-2022-12345 is not exploitable in any current version of any product made by a particular software developer.

5.      A VEX states that CVE-2022-12345 is not exploitable in the current version of a software product, but it was exploitable in all previous versions after v1.5. In other words, the supplier has just patched a vulnerability that had been present in their product for a while, but which they didn’t know about until very recently (this is a very common scenario, BTW, since usually nobody knows about a vulnerability until it’s discovered by some researcher or software developer – or maybe by one of the bad guys, heaven forbid).

6.      A VEX states that CVE-2022-12345 is not exploitable in versions 2.0-2.7, 3.0-3.2, 3.8-4.1, and 5.9 of product X. It should be assumed to be exploitable in all other versions.

7.      A VEX states that none of the collection of vulnerabilities known as Ripple20 is exploitable in any of a supplier’s current product versions.
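To make the first use case concrete, here’s a rough sketch of what a machine-readable VEX assertion for it might look like, using the vulnerabilities structure in CycloneDX 1.4 as I understand it from the published schema. The CVE and product are the hypothetical ones from the list above, and the bom-link reference is a placeholder; the JSON is built as a Python dict for convenience.

```python
# A hypothetical CDX 1.4-style VEX: CVE-2022-12345 is not exploitable in
# Product A v2.4. The analysis "state" and "justification" values are drawn
# from the CycloneDX 1.4 enumerations; everything else is illustrative.
import json

vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "vulnerabilities": [
        {
            "id": "CVE-2022-12345",
            "source": {
                "name": "NVD",
                "url": "https://nvd.nist.gov/vuln/detail/CVE-2022-12345",
            },
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "Product A v2.4 never calls the vulnerable function in the component.",
            },
            # Points at the SBOM for Product A v2.4 (placeholder serial number)
            "affects": [{"ref": "urn:cdx:<sbom-serial-number>/1#product-a@2.4"}],
        }
    ],
}
print(json.dumps(vex, indent=2))
```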

So here’s the big question: Which of these use cases would the VEX capability in CycloneDX 1.4 be able to address? As far as I can see, it only addresses the first case. I believe the CDX VEX is intended to refer to (or even be part of) a particular SBOM; this means it can refer to only one version of one product.[iii] Since all of the other use cases deal with multiple versions of one product or with multiple products, I don’t think the CDX 1.4 VEX capability can address them.

Note 1/14: After I put this post on LinkedIn, Steve Springett commented that in fact CDX 1.4 can do all of the use cases above. He posted some examples for me on GitHub, and we had a long Zoom call later on. He convinced me that CDX 1.4 can in fact serve all of these use cases. And for the different approach to SBOMs that I discuss in the paragraphs below, I think CDX 1.4 with VEX would be the right format to use.

While it is certainly valuable for CDX 1.4 to offer the capability it does, the VEX concept itself – as outlined by the NTIA in this one-page paper – encompasses all of the above use cases (some people think it should handle even more complex use cases, although I think the cases listed above are complex enough, thank you very much). Suppliers (who will be the primary entities preparing VEXes) will need to have another VEX solution along with CDX 1.4, if they want to address use cases 2-7 above (as well as others not listed).

But there’s one other point: If a supplier wants to convey information about vulnerabilities in their SBOMs (which at the moment is only possible in CycloneDX; SPDX doesn’t have that capability now, although I believe SPDX 3.0 will, when that version is available), they have to keep in mind that they will need to update their SBOMs much more regularly than if they don’t include vulnerability information.

This is because applicable vulnerabilities change much more rapidly than do product components. A new SBOM should be issued whenever any code in the product has changed, and especially when there has been some change in a component. However, those changes will undoubtedly be far less frequent than changes in the vulnerabilities applicable to the product. As I mentioned above, a product that has thousands of components might literally require a new SBOM every hour, if the SBOM includes information on vulnerabilities. This is because it will need to be updated whenever a new component vulnerability has appeared in, or been removed from, the NVD – as well as in the VEX case, where a component vulnerability that appears in the NVD has been determined not to be exploitable.

Of course, pushing a new SBOM out to customers multiple times in a day, or even daily or every couple of days, will be a big chore, even if the whole process is ultimately fully automated. But there’s another way to make vulnerability information available to customers: the supplier can post SBOMs for all versions of all products online (in a portal with controlled access). Moreover, they can constantly look up component vulnerability information in the NVD and post that information with the SBOMs (whether or not the information is included inside the SBOMs themselves. I would recommend it not be, but I don’t think it will make a lot of difference either way). Finally, they can use VEX information (which of course comes from the supplier anyway) to remove vulnerabilities that aren’t exploitable from the list for each product.

In other words, the supplier can provide a constantly-updated list of exploitable product vulnerabilities online, without having to send a single SBOM or VEX to a customer (the SBOMs and VEXes will of course be available for download, for any customer that wants to check the supplier’s work in identifying vulnerabilities). And if they don’t want to do this themselves, I’m sure there will be third parties that can provide this service to their customers in their place – the supplier will just need to provide the SBOMs and VEXes to the third party.

In this post, I pointed out that it should be the supplier’s responsibility – or that of a third party service provider, either chosen by the supplier or by an end user organization - to develop and maintain a list of exploitable component vulnerabilities for each of their software products (and in every version of every product, at least those that are still being supported). I gave reasons for saying this in that post, but I didn’t emphasize the most important reason: that the lack of a consumer tool that will ingest SBOMs and VEXes and constantly list exploitable vulnerabilities in a product currently serves as a roadblock to SBOM adoption, let alone VEX adoption[iv]. If no tool is needed and this service is provided online by the supplier or a third party, the roadblock disappears.

To get back to CycloneDX 1.4, I think it’s a great product, and for certain lightweight VEX use cases it’s quite appropriate. But I think Patrick and Steve are making a mistake by emphasizing that capability so much. I suspect that 6-12 months from now, the SaaSBOM capability will be seen as the huge innovation at the heart of CDX 1.4, with the VEX capability being a nice-to-have but not the primary reason for users to adopt v1.4. The CDX team might want to imagine the use cases that SaaSBOM opens up and make those the heart of their efforts to promote the new version.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Since Dependency Track was originally created to help software developers identify vulnerabilities in components (dependencies) that they’re incorporating into their own products, the fact that VEX isn’t available doesn’t pose a big problem. Developers want to ship products that are vulnerability-free, period – whether or not the vulnerability is actually exploitable. Software buyers will form their opinions on a product based on the number of vulnerabilities they see for it in the NVD, and the supplier won’t usually be given the chance to explain non-exploitable vulnerabilities.

[ii] Note that there’s no need for the VEX to refer to a particular component, as we’ll see in a moment. A VEX is simply an assertion of whether or not a particular vulnerability is exploitable in a particular product/version, period.

[iii] I readily admit I could be wrong about this. There are two JSON examples on GitHub, which I’ll admit I only partially understood. So if I am wrong, I hope someone will correct me.

[iv] There are at least two other such roadblocks. All three would be bypassed and cease to be roadblocks, were my proposal to be adopted. 

Sunday, January 9, 2022

Which is the right SBOM format for us?

 

Note from Tom: There may have been a glitch in the email feed that caused some people (including me) not to receive the email containing my most recent post. If you were one of those unfortunate people who didn't receive it, you may want to read it now. On the other hand, you may consider yourself fortunate when you don't receive my posts...I know there are at least a few people that fall into that bucket.

When they first start learning about SBOMs, some people are dismayed by the fact that there are several SBOM formats, and that the NTIA/CISA Software Component Transparency Initiative so far hasn’t anointed one of these as the Chosen One. In fact, the Initiative makes a point of saying that there is no reason for the software world to standardize on one SBOM format.

Moreover, neither Executive Order 14028 (which mandated that all federal agencies start requiring SBOMs from their suppliers. This will be required starting in August 2022), nor the two supporting documents that were mandated by the EO (the NTIA Minimum Elements document and NIST’s implementation guidance for the SBOM provisions, which is due on February 6 and for which draft language was posted in November) even hints that there will ever be a single “standard” SBOM format. I won’t say the day will never come when there’s a single universally accepted format, but I will say that I don’t think it should be imposed, whether by government regulations or even by some broad consensus of organizations that develop software and organizations that primarily use software (which pretty well means every public or private organization in the world).

Nevertheless, there’s a lot that can be said about formats:

1.      While the Initiative officially recognizes three formats - SPDX, CycloneDX and SWID – only SPDX and CycloneDX should be considered true SBOM formats. SWID was developed as a software identifier, and that is its primary use. While it does contain component information including the seven Minimum Elements, it doesn’t offer the richness of the other two formats.

2.      SPDX and CycloneDX differ in their original use cases. SPDX was originally developed (around 12 years ago) for open source license management – a use case that is very important for software developers, but not very important for organizations that do little or no software development. CycloneDX (CDX) was developed in 2017 for purposes of vulnerability management (it was developed for use with Dependency Track, which I discussed in my last post. Both CycloneDX and D-T are OWASP projects; in fact, CDX was designated an OWASP Flagship Project last June. Steve Springett is lead developer for D-T and co-lead for CDX).

3.      Because of their original use cases, SPDX is probably better at license management, while CDX is probably better at vulnerability management. However, one big advantage of having two major formats is that they compete with each other. The current version of SPDX is 2.2 (this version was designated an ISO standard last year). But the next version 3.0 will be a total rewrite, aimed squarely at vulnerability management.

4.      I have been hearing about v3.0 for a while. It sounds good, but since it has been under development for many years, and since I assumed it was going to be hobbled by the need to maintain compatibility with v2.2, I was not expecting it to be available soon (definitely not this year), or to be very useful when it arrived.

5.      So I was pleasantly surprised to hear Bob Martin of MITRE Corporation - who has been very involved with v3.0 development, and who is someone you run into every time you turn a corner in the world of software supply chain risk management – say last week that a) SPDX v3.0 will be available for general use this March, but b) it will not be compatible with v2.2. I interpret the latter to mean that a v2.2 SBOM will not be comprehensible to a tool that reads v3.0 SBOMs (of course, I’m sure there will be a conversion utility between the two formats).

6.      The latter item was good news. Trying to aim for compatibility with v2.2 would have severely limited how well v3.0 could accomplish its purpose of aiding the vulnerability management process. The fact that v3.0 “cuts the cord” to previous versions means it might well be a serious competitor to CDX in the vulnerability management space.

7.      However, CycloneDX isn’t standing still, either. That project will soon announce (I think imminently, but definitely by the end of January) its new v1.4, which will significantly enhance the vulnerability management capabilities in v1.3 – and I thought those were already pretty good. While I haven’t seen any specs for v1.4 yet, I know that one addition will be a VEX-like capability. That might in itself be a very significant development for several reasons, although the practice of using SBOMs for vulnerability management – as currently envisioned – will need to change, in order to take advantage of this capability. But if it’s going to change, now’s the time to do it, before anything is set in stone. The process of using SBOMs for vulnerability management – for end users who aren’t also significant developers – is somewhere between infancy and toddlerhood.

But speaking of change, as I described recently, I no longer think the primary responsibility for identifying exploitable component vulnerabilities should rest with the end user. It should rest with the supplier, or perhaps with a third party service provider that has been engaged by the supplier or by the user. I have several reasons for saying this, but the most important is: Why should the 10,000 users of a software product all have to perform the exact same set of steps to identify exploitable vulnerabilities – and each acquire a so-far-nonexistent tool to help them do that – when it can be done much more efficiently, and of course at a significantly lower cost per user (in both money and time), by one or a few central organizations (either the supplier or a service provider)? Especially when the supplier should already be doing this work anyway?

In other words, in the long run I believe the biggest use for SBOMs, for software users that aren’t also significant software suppliers, will be checking the supplier’s (or the third party service provider’s) work of identifying exploitable component vulnerabilities. Some organizations will want to do this and will invest in the required tooling and expertise. Other organizations will decide to leave this work to the experts and just focus on the more important part of vulnerability management: given the current lists of exploitable component vulnerabilities in the software products the organization uses, figuring out where and how the vulnerabilities can affect the organization, and taking mitigation steps (primarily, bugging the developers to patch the serious vulnerabilities ASAP) based on the degree of risk posed to the organization by each vulnerability.

Even in the ideal world I’m outlining, I think most users will still want to receive SBOMs and VEXes from their software suppliers, even though they won’t bear the primary responsibility for identifying component vulnerabilities. But the decision about which SBOM format to request from suppliers (or even whether to ask for a specific format at all, as long as the supplier uses either SPDX or CycloneDX) won’t be quite as crucial as I had thought previously.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Wednesday, January 5, 2022

“Weaponized” coffee makers?


This morning, I had a conversation with Tim Walsh of the Mayo Clinic, one of my “TEAMS friends” from the SBOM community (Tim was one of the original members of the Software Component Transparency Initiative and is still quite active in the group).

He mentioned that the Mayo Clinic has several hundred thousand devices on their network (the average hospital room has 13 networked devices), and he would love to have SBOMs for all of them. But he’s not just talking about infusion pumps and heart monitors. He’s talking about coffee makers and vending machines, since they’re all network connected now. He pointed out that a huge percentage of devices may well contain code written in Java, and that those are all very likely to have log4j, since that’s the fastest and easiest way to implement logging in that environment (it’s also free, of course).

So even though a hack of an infusion pump poses a direct threat to patient safety and a hack of a coffee maker doesn’t, hacking into a coffee maker can give the attacker a platform from which to attack lots of other devices that can affect patient safety (or the financial health of the hospital, of course). I don’t know if Tim has started patching the coffee makers yet, but he’s at least going to watch them for signs of compromise.

Of course, widespread availability of SBOMs would have saved a lot of IT people a lot of time since the log4j vulnerability was announced. Since SBOMs aren’t widely available now, users have to hear from the software suppliers about whether or not they have any log4j in their product. And Tim pointed out that Mayo didn’t wait to hear from the suppliers; they’re patching all of the devices now, on the assumption that they’re all affected until proven otherwise.

This means that a notification that a product isn’t affected by log4j is just as important as a notification that it is affected – so people like Tim don’t have to preemptively patch it. This is why I was quite pleased to see the notification today from Fortress Information Security that their three main software products aren’t affected by either of the log4j vulnerabilities.

Not only did they provide this notification in a blog post, but they also attached a VEX for each of the products. While these won’t do you any direct good at the moment, since tools aren’t available to read the VEX format (and the format is still being worked out), it’s interesting to see them, nonetheless.

I certainly recommend that other software suppliers provide a similar notification to their customers for every product of theirs that isn’t affected by log4j. You’ll be saving them a lot of unnecessary work. In fact, you might go further than that and point out the module that’s affected (log4j-core). So if that module isn’t present in the product, they don’t need to worry.
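For users who want to automate that check, here’s a rough sketch, assuming they have a CycloneDX SBOM in JSON for the product. The file name is illustrative, and matching on the component name alone is a simplification; a real check would also look at the purl and the version.

```python
# Does this SBOM list log4j-core among the product's components?
import json

def contains_log4j_core(sbom_path: str) -> bool:
    with open(sbom_path) as f:
        sbom = json.load(f)
    return any(
        component.get("name") == "log4j-core"
        for component in sbom.get("components", [])
    )

if contains_log4j_core("product-a-2.4.cdx.json"):
    print("log4j-core present - check its version against the advisories")
else:
    print("log4j-core not found - the log4j vulnerabilities likely don't apply")
```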

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, December 30, 2021

Who should be responsible for software component vulnerability management?


I had a Road to Damascus-type incident recently, except that, unlike in the original incident, I wasn’t blinded and I didn’t fall off my horse.

What led to my incident? I’ve become increasingly concerned of late about the prospects for consumption of software bills of materials (affectionately known as SBOMs). I’m not worried about production: software suppliers are already producing lots of SBOMs for their products and reaping a lot of benefits from doing so. But those benefits are strictly internal; few suppliers are distributing their SBOMs to their customers, and close to none are doing it with any regularity (in general, a new SBOM should be released whenever there has been any change at all in a software product).

Given that they’re producing lots of SBOMs, why aren’t suppliers distributing them to their customers? Is it because they’re all meanies and they don’t want to share the benefits of using SBOMs with their customers? No, it’s much simpler than that: They’re not sharing them because the customers aren’t asking for them.

So why aren’t the customers asking for SBOMs? There are two simple reasons:

First, a lot of software-using organizations don’t feel like making the (admittedly substantial) investment in time required to start using SBOMs intelligently, at least currently. The people inside these organizations who would be called on to find some use for the SBOMs (i.e. currently overworked security people, fresh from losing their holiday break to log4j, thank you very much) would have to invest a lot of time learning about SBOMs and trying to think of ways their organizations could use them (since it seems the end users within their organizations aren’t asking for them either).

Currently, anyone who’s starting with no knowledge of SBOMs needs to read about five or six NTIA documents and synthesize the sometimes conflicting statements in them into a single narrative (which even then will include a number of gaps that haven’t been addressed by the group yet). Until either the learning burden diminishes, or someone can do all of this learning for them (and maybe transfer the knowledge to their brains via a head-to-head USB cable – have humans evolved USB ports yet?), these people – and the organizations they work for – aren’t going to be interested in SBOMs.

Second, customers who do see benefit in having SBOMs have done enough reading to know that the two major SBOM formats – SPDX and CycloneDX – are both proudly machine-readable. Yes, you can get them in non-machine-readable formats like XLS, but given that for example the AWS client has 5600 components (as Dick Brooks of Reliable Energy Analytics has pointed out), do you really want to try to deal with all of those in a spreadsheet?

But what happens when this second group of customers looks around for easy-to-use low-cost or open source vulnerability management tools that can ingest SBOMs (and later VEXes, since the two need to go hand in hand)? They don’t find them. I believe the best SBOM consumption tool for vulnerability management purposes is Dependency Track, an open source tool developed under OWASP. It was originally developed in 2012, about five years before the term SBOM started being widely used in the software community.[i]

Dependency Track does all the basics required for software component vulnerability management and is widely used by developers. It just requires that a CycloneDX SBOM be fed into it (or it will create one from the source code). Then it will (among other tasks) identify all vulnerabilities (CVEs) in the NVD that apply to components and update this list as often as required. It does suffer from the limitation of not being able to ingest VEXes – but VEX is so new (and still undergoing modification) that no other product currently supports this format, either.
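As an illustration, here’s roughly what feeding an SBOM to a Dependency Track server looks like through its REST API. This is a sketch based on my reading of the D-T documentation; the server URL, API key, and project UUID are all placeholders.

```python
# Upload a CycloneDX SBOM to a Dependency-Track server, which will then
# identify NVD vulnerabilities for each component and keep the list updated.
import base64
import requests

DT_URL = "https://dtrack.example.com"  # placeholder server address
API_KEY = "REPLACE_WITH_AUTOMATION_KEY"  # placeholder API key
PROJECT_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder project

with open("product-a-2.4.cdx.json", "rb") as f:
    bom_b64 = base64.b64encode(f.read()).decode()

resp = requests.put(
    f"{DT_URL}/api/v1/bom",
    headers={"X-Api-Key": API_KEY},
    json={"project": PROJECT_UUID, "bom": bom_b64},
)
resp.raise_for_status()
```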

But since Dependency Track is an open source tool that requires more user involvement and knowledge than just pushing a Download button and then hitting Yes or Next a few times, there will always be a lot of users who won’t want to get involved with it. This despite the fact that IMO it’s as of now the only show in town (that being said, I think there could be a lot more non-developers who would start using D-T for component vulnerability management purposes, if they were informed about how easy it is to use the tool once it’s installed. Steve says there are a large number of these non-developer users now; in fact, OWASP may sponsor a webinar soon, focused on exactly this use case. If and when that happens, I’ll be sure to let you know).

To sum up what I’ve said so far, I don’t see demand for SBOMs jumping significantly until two things happen. First, there needs to be a single document that walks a technically-oriented reader, who has no previous knowledge of or experience with software development or SBOMs, through the entire process of using SBOMs for vulnerability management purposes.

Second, there needs to be one of these two items (and hopefully both):

·        An easy-to-install open source (or low cost) tool that at a minimum a) ingests an SBOM for a product and extracts component names - hopefully in CPE format, b) regularly (preferably daily) searches the NVD for vulnerabilities applicable to components identified in the SBOM, and c) removes from that list any vulnerability that has been identified by the supplier of the product as not exploitable in the product itself (the latter information may someday be communicated in a VEX document, but it might be communicated in other ways as well). A sketch of this pipeline appears after these bullets.

·        A third-party service that processes SBOMs and VEX information and provides to the customer the same list of exploitable component vulnerabilities provided by the hypothetical tool described above. Of course, since there are other sources of component risk besides just vulnerabilities listed in the NVD (such as the “Nebraska problem” and three others, mentioned in this post), the service will probably address these risks as well. It might also address risks due to vulnerabilities not listed in the NVD, but identified in other databases or other non-database sources.
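Here’s the promised sketch of steps a) and b) from the first bullet above: pull component identifiers out of an SBOM, then ask the NVD which CVEs apply to each. The NVD endpoint shown is the public REST API as I understand it; a real tool would batch, cache, and rate-limit these calls, and step c) would then apply VEX information to the resulting list, as discussed above.

```python
# Step a): extract CPE names from a CycloneDX SBOM (when the supplier
# included them); step b): look up each CPE in the NVD.
import json
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"  # public NVD API

def component_cpes(sbom_path: str) -> list[str]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [c["cpe"] for c in sbom.get("components", []) if "cpe" in c]

def cves_for_cpe(cpe: str) -> list[str]:
    resp = requests.get(NVD_API, params={"cpeName": cpe}, timeout=30)
    resp.raise_for_status()
    return [v["cve"]["id"] for v in resp.json().get("vulnerabilities", [])]

for cpe in component_cpes("product-a-2.4.cdx.json"):
    print(cpe, cves_for_cpe(cpe))
```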

Of course, neither of these items is available today – otherwise I wouldn’t have written this post. I’m sure at least the service will be available by the August deadline for federal agencies to start requesting SBOMs from their software suppliers. I’m not sure about the tool, mainly because VEX isn’t finalized yet, and SBOMs without VEX information just aren’t going to be very useful. A third party service provider would hopefully be able to get VEX-type information directly from the suppliers, whether or not VEX is finalized – since the supplier should be quite interested in getting the word out about unexploitable component vulnerabilities, and the supplier and the third party can easily work out their own format for communicating VEX-type information.

So is what I’ve just described my Road to Damascus moment? No, this is something I’ve come to realize over the last four months. My RtD moment occurred when I asked the question – prodded in part by a suggestion from a friend and client who I won’t name, since I haven’t had a chance to ask his permission for this – why software users should have to bear the responsibility for identifying exploitable software component vulnerabilities. I now think the software suppliers should bear that responsibility. I have three reasons for saying this:

1.      Currently, you usually learn about vulnerabilities in a software product you operate from the supplier of that product: you receive an emailed notice directly from the supplier, or at least the supplier reports the vulnerability to the NVD, where you discover it (hopefully, both will happen). But a lot of exploitable component vulnerabilities aren’t currently reported by suppliers through either channel. What makes component vulnerabilities different from vulnerabilities identified in the supplier’s own code, other than the fact that you won’t normally be able to find out about component vulnerabilities without…envelope, please…an SBOM? Nothing that I know of.

2.      Does it really make sense to say to each customer of a software product, “You’re responsible for finding component vulnerabilities in Product A and maintaining that list day in and day out” – even though the supplier is already gathering this information (or at least they should be)? If the suppliers provide this information to their customers, the latter will only need SBOMs and VEXes (and a tool or service to process them) as a way of checking that the supplier hasn’t left any exploitable component vulnerabilities off the list (of course, this assumes that all suppliers immediately accept this responsibility. Nice idea, but it ain’t gonna happen). But the users won’t bear the responsibility for learning about the vulnerabilities in the first place.

3.      The third reason makes a lot of sense to me from an economic point of view: the party that introduces a risk for their own benefit should bear the burden of remediating that risk. As everyone knows, developers’ use of third-party components (both open source and proprietary, but mostly the former) has ballooned in recent years, and component risk has ballooned with it. Log4j, Ripple20, Heartbleed, Apache Struts/Equifax and other disasters have been the result. In fact, having the suppliers be responsible for identifying and tracking component vulnerabilities isn’t a big increase in their current responsibilities, since they should already be doing the hard part now: patching component vulnerabilities after they identify them and determine they’re exploitable (fortunately for the suppliers, most component vulnerabilities turn out not to be exploitable).

But it’s not like the users won’t have to do anything at all about component vulnerabilities. They’ll still need to track down – using configuration and vulnerability management tools – the vulnerable software on their network and apply the patches for exploitable component vulnerabilities, which will hopefully be quickly forthcoming from the suppliers. But instead of waiting for the vendors of those tools to develop the capability to ingest SBOMs and VEXes in order to obtain information on component vulnerabilities (something it seems not a single major vendor has done so far), the supplier of the software (or again, a third party they’ve engaged for this work) could provide a feed that conforms to each tool vendor’s existing API – meaning the vendor would have to expend exactly zero effort to become “SBOM-ready”. This alone is a huge benefit to users, since waiting for the tool vendors to become SBOM-ready seems like Waiting for Godot: he can’t come today, but for sure he’ll come tomorrow…
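Purely as an illustration – there’s no agreed format for such a feed, and every field name below is my own invention – the supplier’s feed might be nothing more than a small JSON document, reshaped to fit whatever schema each tool vendor’s API already accepts:

```python
# Hypothetical illustration only: neither these field names nor this
# structure come from any standard. The real feed would be reshaped to
# match whatever schema the tool vendor's API already expects.
import json

feed_entry = {
    "product": "Product X",                 # placeholder product name
    "version": "4.2.1",
    "generated": "2022-01-21T00:00:00Z",
    "exploitable_component_vulnerabilities": [
        {
            "cve": "CVE-2021-44228",        # the Log4j vulnerability
            "component": "log4j-core 2.14.1",
            "fixed_in": "4.2.2",            # product version carrying the patch
        }
    ],
}
print(json.dumps(feed_entry, indent=2))
```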

And there’s another huge advantage to making the suppliers responsible for component vulnerability management: the need for VEXes largely goes away. The reason is simple: VEX was designed primarily for a supplier to tell customers – who have SBOMs and are looking up component vulnerabilities in the NVD – that certain component vulnerabilities aren’t exploitable in the product itself. Without a VEX, a user who investigates every component vulnerability listed in the NVD (and calls their supplier about each one they find) will waste a huge percentage of that time; the VEX winnows the list of component vulnerabilities down to the small percentage that are actually exploitable.

But if the supplier itself is responsible for producing the final list of exploitable vulnerabilities, the communication that would otherwise require a VEX would be completely internal: Whoever determines whether a vulnerability is exploitable or not would send an email to the person who is responsible for the final list of exploitable vulnerabilities for the product (in fact, they may be the same person). The email would say something like, “Remove CVE-2021-12345 from the list of exploitable vulnerabilities in Product X. Even though the NVD shows that vulnerability is found in Component A, and even though Component A is included in X, CVE-2021-12345 isn’t exploitable in X. This is because A is a library, and we never included the vulnerable module in X in the first place.”[ii]
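For what it’s worth, that email could just as easily be a structured record kept with the product’s vulnerability data. Here’s a sketch as a Python dict; the state and justification values are borrowed from the “analysis” object in the CycloneDX 1.4 vulnerability schema (one candidate format for VEX information), while the rest of the structure is hypothetical:

```python
# Sketch of the internal determination behind that email. The "state" and
# "justification" values come from the CycloneDX 1.4 vulnerability schema;
# everything else here is hypothetical.
determination = {
    "vulnerability": "CVE-2021-12345",
    "product": "Product X",
    "component": "Component A",
    "state": "not_affected",
    "justification": "code_not_present",  # the vulnerable module was never built into X
    "detail": "Component A is a library; the vulnerable module was never "
              "included in Product X in the first place.",
}

# Whoever maintains the product's exploitable-vulnerability list then
# simply drops the CVE (CVE-2021-99999 is just a placeholder):
exploitable = {"CVE-2021-12345", "CVE-2021-99999"}
exploitable.discard(determination["vulnerability"])
```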

There will still be some need for VEXes – including what may become the main remaining use for SBOMs: letting users verify that the supplier is properly identifying exploitable component vulnerabilities in its products. But VEXes will no longer be a gating factor for the use of SBOMs, which is what they are today.

My unofficial slogan is “Often wrong, but never in doubt.” I have to admit that it seems too good to be true that having the supplier take responsibility for component vulnerability management would be a win-win in so many ways. If you think I’m full of ____, please let me know. But I’ll interpret your silence as total agreement (unless you fell asleep before you read this).

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] The leader of the Dependency Track project is the same person who leads the CycloneDX project, also under OWASP: Steve Springett, an active member of the NTIA-but-now-CISA Software Component Transparency Initiative (and someone who lives about ten miles from me in suburban Chicago. I recently suggested to Steve that we meet for lunch, but he wanted to wait until we can eat outside. I’ve been looking for restaurants in Chicago that have outdoor seating in January, but for some reason it’s hard to find any).

[ii] This is the situation described with respect to the Log4j vulnerability in this post.