Saturday, January 29, 2022

IoT Cyber Regulation in the US and Europe


On behalf of my client Red Alert Labs, I have been participating in meetings of the Industry IoT Consortium (formerly known as the Industrial Internet Consortium). Since Red Alert Labs specializes in IoT security regulation, Roland Atoui, the president of RAL, and I volunteered to present on the state of IoT cyber regulation in the EU and the US, at meetings of the IIC Security Working Group. Both of our presentations are available on YouTube.

While there are no current cyber regulations for IoT devices in the US, Executive Order 14028 last May required NIST to develop a “device labeling program” for “consumer” IoT devices; that program was the subject of my presentation. I presented in early December, a couple of weeks before NIST released preliminary guidelines for the labeling program (their final guidelines are due on February 6, along with their final guidelines for SBOMs and other items addressed in Section 4(e) of the EO). You can view it here (my presentation runs about 20 minutes, followed by another 20 minutes of really good Q&A).

Since NIST hadn’t given any clue about what they were considering for the program at the time, I was optimistic that it would be what I considered a good program – risk-based and mostly focused on educating consumers about steps they could take to mitigate the risks, rather than giving an up-or-down judgment on whether the device was “secure” or not. My presentation reflects that opinion.

However, when NIST came out with their preliminary guidelines for the IoT device labeling program, I was quite disappointed. They wanted to have a risk-based program, but they also wanted to have an up-or-down label. You can’t have it both ways. Fortunately, those were just preliminary guidelines, and I’m optimistic that what NIST comes out with on the 6th will be better.

Roland presented early in January, along with Isaac Dangana of Red Alert. They discussed the European Cybersecurity Act, which came into effect in 2019. This is meant to be the governance framework for all cyber regulation in Europe, although there are a few regulatory schemes in countries like Germany now (as well as in particular industries like 5G); these will all be replaced as equivalent European schemes come into place.

What I find really interesting about the Act is that it doesn’t in itself regulate anything; rather, it provides a governance framework for certification schemes that address particular areas of cybersecurity. The schemes are developed by ENISA, the EU Agency for Cybersecurity. Two schemes have already been developed: one for the Common Criteria and one for the cloud. Red Alert was engaged by ENISA to develop guidelines for both of these schemes. Roland expects that ENISA will start work – with industry, of course – on an IoT scheme this year.

To be honest, I never realized there was such a difference between cyber regulation in the US and the EU. I’m not sure how I’d characterize that difference at this point; and since the situation is changing rapidly on both sides of the ocean, I think I’ll wait a while before I form an opinion about who has the better ideas for cyber regulation. Although maybe I’ll decide they both have their strong points and weak points, which is most likely the case.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, January 27, 2022

I hate to be a pest, but…

On Friday the 28th, I’ll be Chris Blask’s guest on his interview show at 2PM EST. Chris is quite an interesting guy, while I’m a relentless scold (see below). So it should be interesting. I don’t know what we’ll talk about, but I think it might have something to do with SBOMs. But knowing Chris, it might have something to do with boats. Or maybe both. If you can’t make it on the 28th, it will be available on YouTube next week; I’ll publish the link when I get it. 

Perhaps you’ve read something about how Vladimir Putin, my favorite dictator/kleptocrat/cybercriminal, is now threatening Ukraine with invasion – although it seems he forgot to bring more than half of the army he would need to conduct a successful invasion. On the other hand, maybe he’s emulating George W Bush, who forced Army Chief of Staff Eric Shinseki to retire in 2003, after Shinseki predicted that “several hundred thousand troops” would be needed to pacify Iraq if we invaded. Bush invaded with about half that number.

That move didn’t work out very well, which is one reason I think the Ukrainians can sleep fairly peacefully in their beds, knowing that Putin doesn’t intend to invade with the 100,000 troops he’s arrayed now. From the ruthless giant that I (and everyone else in the US, it seems) believed Russia to be up until the Soviet Union fell, Russia has become The Mouse that Roared. Plus, Putin has made it clear that he won’t miss the opening of the Winter Olympics in Beijing in two weeks – hardly a sign that the tanks will be rolling anytime soon.

But just because he won’t invade doesn’t mean that Putin won’t cause a lot of trouble for Europe and the US, using his favorite “hybrid warfare” tactic: hard-hitting cyberattacks, with the power grid being the favorite target. So it might be expected that he’ll turn his attention back to the grid he loves to attack over all others – yes, even over Ukraine’s: that’s the US grid.

Fortunately for Uncle Vlad, he’s been diligently seeding the US grid with the malware he knows will come in handy on a rainy day – and that day may well be coming very soon. How do I know he’s planted this malware? Consider the people who have said so:

1.      The directors of the FBI and CIA, in their Worldwide Threat Assessment in January 2019.

2.      Vikram Thakur of Symantec, in the Wall Street Journal in January 2019.

3.      The former deputy director of the NSA, in May 2019.

4.      The WSJ in November 2019.

With all these people waving a red flag, what has been done to investigate these reports of the Russians planting malware in our grid (and likely in control centers, since they were said to be in a position to cause outages)? After all, when the Russians attacked Ukraine’s grid in 2015 and 2016, US investigators were as thick as flies over there – and they came back and gave a whole series of classified and unclassified briefings in cities across the US. Wouldn’t you expect that there would have been a similar investigation here, along with briefings for utilities, to tell them how to remove the malware? After all, isn’t the US grid much more important to us than Ukraine’s?

One would think so. But nothing ever happened. No briefings, classified or unclassified. No high level reports. No red alerts to the industry. No Facebook posts. No ads on milk cartons. Nothing.

So I have to assume either that all of the above people are bold-faced liars, or that the Russian malware is still sitting in those control centers, waiting for the Dark Lord in his Dark Tower in Moscow to raise his hand…

Have a good night! And make sure your flashlight has batteries.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Friday, January 21, 2022

Let’s not replicate NERC CIP’s problems for pipelines.


I’m thinking of renaming this “Mariam Baksh’s Blog”, since once again the NextGov reporter has written a really interesting story on cyber goings-on in Washington (my last post was based on one of her articles, and I’ve written a few others before that). Mariam isn’t the kind of reporter whose goal is to write the best story about something that everybody else is also writing about. Instead, she is constantly digging up interesting stories in places where I would never even have dreamed of looking.

The story this time is an outgrowth of the Colonial Pipeline attack. Once the TSA issued a cyber directive to the pipeline industry a few months after the attack, most reporters assumed the problems were solved. After all, you solve cyber problems with regulations, right?

I might have assumed the same thing, except a kind person sent me a redacted[i] copy of the directive, which I wrote about in this post. What did I think of the directive? Well, I thought the font it was written in was attractive, but it was all downhill from there. However, I have found a great use for the directive: I’ve been invited to speak on cyber regulation at a seminar at Case Western Reserve University in February. At first, I was going to talk about lessons learned from various compliance regimes: NERC CIP (which I know the most about), PCI, HIPAA, CMMC, etc. But after reviewing my post on the TSA directive, I realized I’d hit the gold mine with that one: just about everything the TSA could have done wrong, they did. They should stick to telling people to take their shoes off in airports. It’s the perfect pedagogical example of how not to develop cybersecurity regulations!

And maybe they will, since Mariam’s article points out that Rep. Bobby Rush[ii] of Illinois has introduced legislation that would take the job of cyber regulation for pipelines away from the TSA and vest it in a new “Energy Product Reliability Organization”. This would be modeled on the North American Electric Reliability Corporation (NERC), which develops and audits reliability standards for the North American electric power industry, under the supervision of the Federal Energy Regulatory Commission (FERC). NERC is referred to as the “Electric Reliability Organization” in the Energy Policy Act of 2005, which set up this unusual regulatory structure.

The best known of NERC’s standards are the 12 cybersecurity standards in the CIP family. And clearly, these standards have a good reputation on Capitol Hill. This was reinforced by FERC Chairman Richard Glick’s testimony at a hearing on Wednesday, when he was asked by Rep. Frank Pallone, “…do you think that the industry led stakeholder process established by Chairman Rush's legislation would likewise be a successful mechanism for protecting the reliability of the oil and gas infrastructure?” Glick replied, “I believe so. The electricity model has worked very well … and I believe a similar model will work with pipeline reliability.”

I don’t deny that the NERC CIP standards have made the North American electric grid much more secure than it would be without the standards. On the other hand, there are some serious problems with the CIP compliance regime, which I wouldn’t want to see replicated for the pipeline industry:

1.      The standards should be risk-based, like CIP-013, CIP-012, CIP-003 R2, CIP-010 R4 and CIP-011 R1. This means they should not prescribe particular actions. Instead, they should require the entity to develop a plan to manage the risks posed by a certain set of threats - e.g. supply chain cyber threats or threats due to software vulnerabilities. Then the entity needs to implement that plan. In drawing up the plan, it should be up to the entity to decide the best way to manage the risks, but there will be a single set of approved guidelines for what should be in the plan (something that is missing with CIP-013). Prescriptive requirements, like CIP-007 R2 and CIP-010 R1, are tremendously expensive to comply with, relative to the risk that is mitigated. Explicitly risk-based requirements are much more efficient, and are probably more effective, since the entity doesn’t have to spend so much money and time on activities that do very little to improve security.

2.      Auditing should be based on how well the plan follows the guidelines. Of course, this isn’t the up-or-down, black-or-white criterion that some people (including some NERC CIP auditors, although I believe that sort of thinking is disappearing, thank goodness) think should be the basis for all auditing. If an entity has missed something in the guidelines, but it seems to be an honest mistake, the auditor should work with them to correct the problem. (In fact, the auditor should work with them in advance to make sure the plan is good to begin with. This is currently not officially allowed under NERC, due to the supposed risk to “auditor independence” – a term found nowhere in the NERC Rules of Procedure or GAGAS.)

3.      In other words, auditors should be partners. They actually are partners nowadays, but when they act as such, they’re officially violating the rules – and note that they’ll still never write down any compliance advice they may give, and they’ll always say it’s their personal opinion, meaning you can’t count on the next auditor saying the same thing.

4.      Auditing should also be based on how well the plan was implemented. This is where I think auditing actually should be black & white. Once the entity has created the plan, they need to follow it. If they decide something needs to be changed, they should change the plan and document why they made the change. But they shouldn’t just deviate from the plan as it’s currently written. (I believe this is how CIP-013 is audited: the entity can make a change to the plan whenever they want, but they need to document the change, and then follow the revised plan.)

5.      Identification of new risks to address in the standards needs to be divorced from the standards development process. When a new area of risk is identified as important, entities should immediately be required to develop a plan to mitigate those risks and follow that plan – this shouldn’t wait literally years for a new standard to be developed and implemented.

6.      NERC standards development proceeds in geologic time – and because of that, a number of important areas of cyber risk have never been addressed by CIP, since nobody wants to go through the process of developing a new standard. For example, where are the CIP standards that address ransomware, phishing, and APTs? These risks have been around for at least a decade, yet a new standard has never even been proposed for any of them, let alone written. And how long does it take for a new standard to appear after the risk first appears? The risk caused by use of “visiting” laptops on corporate networks has been well known since the late 1990s. When did a requirement for “Transient Cyber Assets” take effect? 2017.

7.      There needs to be a representative body – with representatives from industry stakeholders, the regulators, and perhaps even Congress and the general public – that meets maybe twice a year to identify important new risks that have arisen, as well as to identify risks that are no longer serious. If the body decides a new risk needs to be addressed, a new standard should be created, the mechanics of which would be exactly the same as in the other standards. Only the name of the risk and the guidelines for the plan would differ from one standard to another. So no new requirements need to be developed.

8.      The unit of compliance for the standards shouldn’t be a device (physical or virtual) – specifically, a Cyber Asset, as it is in the current CIP standards. Instead, it should be a system. As I have pointed out several times before, putting BES Cyber Systems in the cloud (e.g. outsourced SCADA) is now impossible, since doing so would put the entity in violation of twenty or so CIP requirements – simply because they would never be able to furnish the evidence required for compliance. Basing the standards on systems (a BES Cyber System is currently defined as nothing more than an aggregation of devices) would allow cloud-based systems, containers, and more to be in scope for CIP.

So I wish the new Energy Product Reliability Organization good luck. I think having stakeholder involvement is crucial to having successful cyber standards for critical infrastructure. But don’t spoil it all by not taking account of the lessons learned from the NERC CIP experience.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Making the directive secret was ridiculous, and almost by itself guaranteed it would fail, as I discussed in my post on the directive. But just for good measure (and to make sure that there was no possibility at all that the directive would succeed), the TSA violated just about every other principle of good cybersecurity standards development. They're nothing if not thorough!

[ii] Rush just recently announced his retirement, after 30 years representing Chicago’s 1st Congressional District in the House. He served well and happens to be the only person who ever beat Barack Obama in an election (Obama challenged him for his House seat in 2000). He was a co-founder (with Fred Hampton) of the Illinois branch of the Black Panther party in 1968.

Tuesday, January 18, 2022

Gee, thanks Vlad! You’re such a swell guy…


Mariam Baksh of NextGov published – as usual – a very interesting article on Jan. 16, which begins with this paragraph:

A senior administration official put questionable timing aside and commended the Kremlin’s arrest Friday of individuals Russian officials say comprise the notorious REvil ransomware group, which U.S. officials have attributed to attacks on critical infrastructure.  

“Questionable timing” indeed! Putin is poised with a knife to Ukraine’s throat and is threatening to send troops to Venezuela and Cuba to menace the US – so of course this is a great time to thank him for his noble efforts against REvil.

Let me suggest that the real question is this: Seven months ago, Biden – after the Kaseya attacks, which were instigated by REvil – said (quoting from WaPo) that “Putin must put an immediate stop to this activity, or Biden’s administration will take ‘any necessary action’ to stop it.” Why is the administration now taking credit for the fact that Putin finally acted, when Putin’s people certainly knew all along who needed to be arrested (because the Russian intelligence services collaborate with those people all the time, and the US intelligence services had given them a list of names)?

And why, after calling for an immediate stop to “this activity” (which, in case you hadn’t noticed, didn’t bring Russian ransomware activity to a hard stop last July), didn’t Biden keep the pressure on Putin all this time? And given that Putin obviously paid no attention at all to Biden’s order last July, why doesn’t this “senior administration official” even think, “Hey, the fact that he’s finally arresting the REvil guys now is probably not because he’s been listening to us. It’s because he wants to look as good as he can in other areas, while he’s issuing a new ultimatum to Biden to abandon Ukraine to him”?

Perhaps the senior official is Secretary of State Blinken, who in September asserted that “no one in the U.S. government expected the Afghan government to fall as quickly as it did.” Of course, there was no way the administration could possibly have known that the Taliban wouldn’t keep their promise of a cease-fire. After all, the Taliban are honorable men. Why Blinken wasn’t fired after that debacle would be a mystery to me, were it not clear that there are lots of others in the administration who also think their job is to claim credit for successes, rather than actually be successful.

Of course, the senior administration official’s comments really aren’t about Putin at all. They’re a vain attempt to get at least some good news out about the administration, since lately all the news has been bad. But frankly, the fact that this clown – whoever he or she is – is trying to spin this as a win, when Putin completely stiffed Biden for seven months and is only now doing what Biden ordered because he wants to divert attention from a much bigger transgression he intends to commit, just shows how weak and clueless they are. And unfortunately, that’s not news.

A much better thing to say – and not through an anonymous spokesperson – would have been “We wish to ‘congratulate’ Mr. Putin on finally taking an itty bitty step to combat one of the many evils Russia has inflicted on the world in recent years. Now here are some more steps Mr. Putin must take, and the consequences that will follow if he doesn’t (BTW, this time we mean it about the consequences):

1.      Adequately compensate the families of victims and governments for their losses in the shooting down of flight MH17 in 2014 or face a ban on all Russian aircraft in international airspace.

2.      Compensate Maersk and the other companies worldwide that lost an estimated $10 billion in the NotPetya attack, or risk being cut off from the SWIFT international funds transfer system.

3.      Compensate the victims (especially government agencies) of the SolarWinds and Kaseya attacks and arrest the perpetrators of both of these (who are either Russian government employees or well known to them) or face an order to US financial institutions and citizens to divest themselves of their Russian bonds and not own them in the future.

And speaking of Russian attacks, here’s another idea: Why don’t we investigate the assertions made by the CIA and FBI in the last “annual” Worldwide Threat Assessment in 2019, to the effect that the Russians had penetrated the US power grid and could cause outages at any time? There’s never even been an investigation of those statements. And there haven’t been any more WTAs since that one.

I guess that’s one way to solve the problem of bad press.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, January 16, 2022

A great opportunity


Do you remember the NERC Supply Chain Working Group (SCWG)? We’ve been in operation since 2018 and are still going strong. Our current project is to update the set of supply chain guidelines that we drew up in 2019. Drafting them was a great experience, since we had a lot of people participating on the calls. I anticipate that re-drafting them (or perhaps adding to them, since my guess is most of them don’t need to be amended very much) will be equally interesting.

One important goal of the guidelines is to keep them short enough to be read in maybe 20 minutes. The original guidelines were all 3-5 pages long, and I anticipate the new ones will be 5-8 pages. I’ll point out something you may already know: It’s a lot harder to write a short paper than a long one. That may be why the drafting meetings were so interesting – when every word has to count, you need to decide what has to be said and say it as economically as possible (which of course is why blog posts sometimes go on and on – since the blogger knows there’s no limit imposed and he doesn’t have to be careful about his words. Of course, I don’t personally know any bloggers like that, but I’m told they’re out there).

I ended up leading the drafting of two of those papers, and I’ll be leading it for the new versions of both (unless someone else would like to take the lead on one of them and I’ll just participate – that would be fine with me). We need to get the drafts done by mid-March, I believe, so we won’t have many meetings to draft them. I plan to hold 3-4 meetings for each paper, alternating weeks, so there will be a meeting for one paper the first week and the other the second, etc. Of course, you can come to as many meetings as you want – although there won’t be any recordings made.

The two papers for which I’ll lead the drafting are both found here, along with some slides from presentations we did in Orlando in June 2019. There are also recordings of webinars we did (one for each paper) in 2020, which were well attended (since people weren’t going to a lot of in-person meetings in April through June of 2020). My two papers are Cyber Security Risk Management Lifecycle (which should really be called Supply Chain Cyber Risk Management Lifecycle – we’re not trying to tackle the entire field of cybersecurity in five pages!) and Vendor Risk Management Lifecycle.

You’re welcome to attend any or all of the meetings; I’m not going to keep attendance. You don’t have to be a member of the SCWG, although we’ll probably enroll you anyway. This will entitle you to all the benefits and emoluments of membership - priceless. We’re even waiving the normal $1,000 signup fee…😊

Also, even though these meetings are mostly populated with electric power industry types, I can assure you there’s nothing we’ll be talking about that’s specific to the power industry. So anyone is welcome to participate, both suppliers and end users. Note that, even though the papers are NERC publications, they aren't compliance guidance for CIP-013; they're simply best practices for supply chain cyber risk management.

We’ve put out Doodle polls to find the best time for both series of meetings. The poll for the Cyber Risk Management meetings is here and for Vendor Risk Management is here.[i]

I hate to pressure you, but for the Cyber Risk Management meetings, we’ll have to decide the time by tomorrow afternoon, since we want to have the first meeting this week. So if you’re interested in that, please sign up asap (note we won’t meet on Monday the 17th, although if Monday is the best for someone, we could later move the meeting to Monday if the rest agree). For Vendor Risk Management, we’ll meet next week (the week of the 24th), so we’ll wait a few days before we set that time. We’ll send everybody who’s participated in the poll an invitation for the series.

I hope you can help us out!

Need CIP-013 compliance help, either from the NERC entity or the vendor side? I’ve worked with a number of electric utilities on CIP-013 compliance, and I’m currently working with two vendors to the industry. Drop me an email and we can talk!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] If you want to sign up for one of the three other papers that we’re going to revise this quarter (the others will come later) – which are Provenance, Open Source, and Secure Equipment Delivery – drop an email to Tom Hofstetter of NERC at tom.hofstetter@nerc.net and he’ll send you the links for those Doodle polls.

Thursday, January 13, 2022

The CycloneDX SBOM format, now with VEX appeal!


On Sunday, I put up a post in which I included this passage:

…CycloneDX isn’t standing still, either. That project will soon announce (I think imminently, but definitely by the end of January) its new v1.4, which will significantly enhance the vulnerability management capabilities in v1.3 – and I thought those were already pretty good. While I haven’t seen any specs for v1.4 yet, I know that one addition will be a VEX-like capability.

Even though I used the word “imminently”, I was still surprised when I received an email announcement on Wednesday from Patrick Dwyer, co-leader of the OWASP CycloneDX SBOM format project (with Steve Springett, who has the dubious distinction of being mentioned in four of my last five posts, including this one), announcing that CDX 1.4 is now available.

While Patrick’s email listed four or five new capabilities in CDX 1.4, the one he promoted the most was the fact that it includes a VEX capability, which is of course something that I’ve written a lot about. Patrick also mentioned the new SaaSBOM capability, which I think begins to address the biggest new challenge for the CISA SBOM community – properly identifying components of cloud software, as well as cloud-based components of locally-installed software. When the CISA-sponsored meetings start up in February, SaaS (i.e. cloud-based) components will be the entire focus of one of the four working groups, although there are also old challenges that still need to be addressed; hopefully, they will be addressed by the other working groups. BTW, anyone is welcome to join those working groups – drop an email to sbom@cisa.dhs.gov.

But since VEX is something I know about, and since Patrick made it the big focus of his announcement, I’d like to ask – and answer – the question, “Will the CDX 1.4 implementation of the VEX concept be very useful to organizations trying to manage risks that arise from components found in software they use?”

First I want to describe how I view the VEX concept:

1.      The idea for VEX arose a few years ago in discussions among participants in the NTIA Software Component Transparency Initiative, which is now under CISA. The participants realized that the majority – and perhaps the great majority, like 90% or more – of vulnerabilities that can be identified in software components (i.e. by looking up the component’s CPE name in the NVD – see the sketch a few paragraphs below) are in fact not exploitable in the product itself. In other words, if an attacker looks up a particular software product’s component vulnerabilities in the NVD and then tries to hack into the product by using them one by one, he’ll probably succeed at most one time in ten.

2.      That’s great news, right? Unfortunately, this is a good news/bad news situation, and the bad news is more important. The flip side of the fact that at least 90% of component vulnerabilities aren’t exploitable in the product itself is that, when the customer of a product discovers a component vulnerability in the NVD and contacts their supplier to ask when it will be patched, in at least nine cases out of ten they will hear that no patch is needed, since the vulnerability isn’t exploitable in the product itself – often because of how the vulnerable component was implemented in the product. This will be a source of much frustration for customers, and it will cause a lot of unneeded expense for suppliers, since they’ll have to tie up expensive developers explaining to hundreds of callers every day why they don’t have to worry about a particular vulnerability, even though it’s technically present in one component of the product.

3.      Because of this, many large suppliers (including at least a few who are actively participating in the SBOM Initiative) are holding back from releasing SBOMs to their customers, for fear their support lines will be overwhelmed with literally thousands of “false positive” calls like this. In my opinion, in order for those suppliers to start distributing SBOMs, they will need to be assured that there’s some way for them to get the word out to users, when a component vulnerability isn’t exploitable. One large supplier said in a VEX meeting that they thought they would get 2,000 false positive calls to their help lines, if they started distributing SBOMs without also distributing VEXes.

4.      The solution that was decided on a few years ago (and work started on the solution in the summer of 2020) was a document that would state that e.g. CVE-2022-12345 is not exploitable in product X version 2.4. Like the SBOM itself, the VEX would be machine-readable, and it would be distributed through the same channels as the SBOM – since in many if not most organizations, the staff members who need to see SBOMs will also need to see VEXes.

However, I use the word “see” loosely. SBOMs, and especially VEXes, were never meant to be human-readable, although even the explicitly machine-readable versions can be read by humans, with some understanding of what they say. But, given that some software products can contain literally thousands of components, there’s simply no way that machine readability can ever be optional. No non-machine-readable format will ever scale.
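
By the way, the NVD lookup mentioned in item 1 above is easy to automate. Below is a small sketch of my own – not part of any SBOM tool – that asks the NVD’s REST API for the CVEs recorded against a component’s CPE name. The CPE shown is just an example, and the endpoint and parameters reflect my reading of the NVD API documentation, so check NVD’s docs before relying on this.

```python
import json
import urllib.parse
import urllib.request

# Example component: log4j-core 2.14.1, identified by its CPE name.
# This CPE is illustrative; look up the real CPE for your component.
cpe = "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"

# NVD's CVE REST API can filter by CPE name. Endpoint and parameters
# are per my reading of the NVD API docs; verify before depending on it.
url = ("https://services.nvd.nist.gov/rest/json/cves/2.0?"
       + urllib.parse.urlencode({"cpeName": cpe}))

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Each entry in "vulnerabilities" wraps one CVE record.
for item in data.get("vulnerabilities", []):
    print(item["cve"]["id"])
```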

Now let’s discuss use cases for VEX, although we need to start with SBOMs. By far the largest use case for SBOMs is vulnerability management. Specifically, the customer wants to learn, for every component listed in an SBOM, what vulnerabilities are currently listed in the NVD for it. To do this, the customer needs a tool like Dependency Track, which I discussed in this post. D-T ingests a CycloneDX SBOM and identifies all vulnerabilities (CVEs) listed in the NVD for each component of the product. Moreover, it will keep updating that list as often, and for as long a period, as desired by the user.
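
To give a feel for how little ceremony this involves, here’s a minimal sketch of pushing an SBOM into D-T through its REST API. The server URL, API key and project UUID are placeholders of mine, and you should verify the call against the current Dependency Track API documentation rather than taking my word for it.

```python
import base64
import json
import urllib.request

# Placeholders - substitute your own Dependency Track server, API key
# and project UUID.
DT_URL = "http://localhost:8081"
API_KEY = "your-api-key"
PROJECT_UUID = "00000000-0000-0000-0000-000000000000"

# Read the CycloneDX SBOM and base64-encode it, which is the form the
# upload endpoint expects in a JSON body.
with open("product-2.4.cdx.json", "rb") as f:
    bom_b64 = base64.b64encode(f.read()).decode("ascii")

payload = json.dumps({"project": PROJECT_UUID, "bom": bom_b64}).encode()

# PUT /api/v1/bom uploads the SBOM; D-T then matches each component
# against its vulnerability sources (including the NVD) in the background.
req = urllib.request.Request(
    f"{DT_URL}/api/v1/bom",
    data=payload,
    method="PUT",
    headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print("Upload accepted:", resp.status)
```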

That’s good, but the fact is that at least 90% of the vulnerabilities identified by D-T won’t actually be exploitable in the product itself. That’s where VEXes come in, but D-T doesn’t today ingest VEXes; and since the VEX format is about three months away from being finalized (even though some of us had thought it was finalized last summer), neither D-T nor any other tool can ingest it now. In other words, with Dependency Track or similar tools (including software composition analysis tools), you can see a list of component vulnerabilities applicable to a software product. However, you can’t see the much smaller, but much more useful, list of exploitable component vulnerabilities in the product.

Obviously, D-T won’t be able to ingest VEXes until the format is finalized, and I have no idea whether VEX ingestion is planned or not.[i] But if we did have a tool that a) ingested SBOMs and VEXes, and b) output a list of exploitable component vulnerabilities in the product (hey, a guy can dream, can’t he?), how would it work? The important point about that tool is that it needs to create a database. Even if the tool just analyzes a single SBOM and identifies vulnerabilities applicable to components found in that SBOM, VEXes that apply to that SBOM can appear at any time in the future. In fact, it’s very likely that, when the SBOM for a new product version is first released, there will be few VEXes applicable to the specific components called out in that SBOM. An SBOM is released at a point in time, whereas VEXes that apply to it (and sometimes to other SBOMs as well, usually for different versions of the same product) will dribble in over time.

This means that the tool will need to maintain a database that lists different versions of a product, each of which should have its own SBOM, since any change at all in the software requires a new SBOM. For each of those versions, the database should list the components (including version numbers) found in the SBOM, as well as any vulnerabilities found in the NVD for those components. Note that, even though the component vulnerabilities are of interest due to the fact that they apply to a component of the product, the vulnerabilities are “in” the product itself – since there’s in general no way that an attacker could attack just a component, not the whole product. Once the vulnerability has been identified in the NVD and linked to the product that contains the component, there’s no further reason to link the vulnerability with the component.

In other words, while it might be interesting to know which component is “responsible” for a particular vulnerability being present in a product, the only information that’s essential is the list of exploitable vulnerabilities in the product itself, regardless of which component they might apply to (or even whether they apply to any component at all. See items 2-7 in the list of VEX use cases below, none of which even mentions a component).

As I’ve described it so far, the database just lists vulnerabilities applicable to a product, not the much smaller (but much more useful) set of exploitable vulnerabilities. The latter set, a subset of the full vulnerability set, is arrived at by applying information retrieved from VEX documents to the lists. That is, if a VEX says that CVE-2022-12345 isn’t exploitable in Product A, then the tool will remove that CVE from the list. Even better, the tool will flag that CVE as not exploitable and put it in a separate list, but it won’t delete it from the database altogether. I’ll discuss why this is a good idea in a future post.

Thus, the database will maintain, for each version of each product in the database, a list of exploitable component vulnerabilities; that list might change every day (and for a product like the AWS client, which has 5600 components, it might change hourly). In theory, the user will be able to look in the database at any time to find the most up-to-date list of exploitable component vulnerabilities, for each product and version that they’ve stored in the database (which hopefully will correspond to every product and version currently in use in their environment – although that might not be possible for a while).
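
Since no such tool exists yet, the sketch below is purely my own illustration of the bookkeeping I just described – it’s not any real product, and not the VEX format itself. NVD findings are recorded against the product version, and VEX statements that dribble in later flag entries as not exploitable rather than deleting them:

```python
from dataclasses import dataclass, field

@dataclass
class VulnRecord:
    cve: str
    exploitable: bool = True  # assume exploitable until a VEX says otherwise

@dataclass
class ProductVersion:
    name: str
    version: str
    vulns: dict = field(default_factory=dict)

    def add_nvd_finding(self, cve: str) -> None:
        # An NVD lookup on a component attaches the CVE to the product
        # itself; we no longer need to track which component "caused" it.
        self.vulns.setdefault(cve, VulnRecord(cve))

    def apply_vex(self, cve: str, affected: bool) -> None:
        # VEXes can arrive long after the SBOM, so flag rather than
        # delete: the full history stays in the database.
        self.vulns.setdefault(cve, VulnRecord(cve)).exploitable = affected

    def exploitable_vulns(self) -> list:
        return [v.cve for v in self.vulns.values() if v.exploitable]

# Example: Product A v2.4 has two NVD findings; a VEX later arrives
# saying one of them isn't exploitable in the product.
prod = ProductVersion("Product A", "2.4")
prod.add_nvd_finding("CVE-2022-12345")
prod.add_nvd_finding("CVE-2022-67890")
prod.apply_vex("CVE-2022-12345", affected=False)
print(prod.exploitable_vulns())  # ['CVE-2022-67890']
```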

Now, let’s look at some use cases for VEXes. I’ve been participating in the NTIA/CISA VEX meetings since the second one in September 2020 and I’ve heard lots of discussion of use cases, so I’m not inventing any of these:

1.      The simplest use case is: A VEX states that CVE-2022-12345 is not exploitable in Product A version 2.4, even though it’s listed in the NVD as a vulnerability of X, and X is a component of A v2.4.[ii]

2.      A VEX states that CVE-2022-12345 is not exploitable in versions 2.2 through 2.4 of X.

3.      A VEX states that CVE-2022-12345 is not exploitable in any current version of a family of products made by a particular manufacturer (e.g. routers made by Cisco™).

4.      A VEX states that CVE-2022-12345 is not exploitable in any current version of any product made by a particular software developer.

5.      A VEX states that CVE-2022-12345 is not exploitable in the current version of a software product, but it was exploitable in all previous versions after v1.5. In other words, the supplier has just patched a vulnerability that had been present in their product for a while, but which they didn’t know about until very recently (this is a very common scenario, BTW, since usually nobody knows about a vulnerability until it’s discovered by some researcher or software developer – or maybe by one of the bad guys, heaven forbid).

6.      A VEX states that CVE-2022-12345 is not exploitable in versions 2.0-2.7, 3.0-3.2, 3.8-4.1, and 5.9 of product X. It should be assumed to be exploitable in all other versions.

7.      A VEX states that none of the collection of vulnerabilities known as Ripple20 is exploitable in any of a supplier’s current product versions.

So here’s the big question: Which of these use cases would the VEX capability in CycloneDX 1.4 be able to address? As far as I can see, it only addresses the first case. I believe the CDX VEX is intended to refer to (or even be part of) a particular SBOM; this means it can refer to only one version of one product.[iii] Since all of the other use cases deal with multiple versions of one product or with multiple products, I don’t think the CDX 1.4 VEX capability can address them.
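
To make the discussion more concrete, here’s my guess at what the simplest case (use case 1) might look like as a CDX 1.4 VEX entry, built as a Python dict and printed as JSON. The field names follow my reading of the published 1.4 schema – a vulnerabilities array whose analysis state is “not_affected” – but the bom-ref and justification values are placeholders, so treat this as an illustration rather than a validated document.

```python
import json

# A minimal CycloneDX 1.4-style VEX assertion: CVE-2022-12345 is not
# exploitable in Product A v2.4. The "affects" ref is a placeholder
# bom-ref that would point at the product in the accompanying SBOM.
vex_entry = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "vulnerabilities": [
        {
            "id": "CVE-2022-12345",
            "source": {"name": "NVD", "url": "https://nvd.nist.gov/"},
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",  # placeholder
                "detail": "The vulnerable function in the component "
                          "is never called by Product A.",
            },
            "affects": [{"ref": "product-a-2.4"}],  # placeholder bom-ref
        }
    ],
}
print(json.dumps(vex_entry, indent=2))
```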

Note 1/14: After I put this post on LinkedIn, Steve Springett commented that in fact CDX 1.4 can do all of the use cases above. He posted some examples for me on GitHub, and we had a long Zoom call later on. He convinced me that CDX 1.4 can in fact serve all of these use cases. And for the different approach to SBOMs that I discuss in the paragraphs below, I think CDX 1.4 with VEX would be the right format to use.

While it is certainly valuable for CDX 1.4 to offer the capability it does, the VEX concept itself – as outlined by the NTIA in this one-page paper – encompasses all of the above use cases (some people think it should handle even more complex use cases, although I think the cases listed above are complex enough, thank you very much). Suppliers (who will be the primary entities preparing VEXes) will need to have another VEX solution along with CDX 1.4, if they want to address use cases 2-7 above (as well as others not listed).

But there’s one other point: If a supplier wants to convey information about vulnerabilities in their SBOMs (which is only possible in CycloneDX at the moment. SPDX doesn’t now have that capability, although I believe that SPDX 3.0 will, when that version is available), they have to keep in mind that they will need to update their SBOMs much more regularly than if they don’t include vulnerability information.

This is because applicable vulnerabilities change much more rapidly than do product components. A new SBOM should be issued whenever any code in the product has changed, and especially when there has been some change in a component. However, those changes will undoubtedly be far less frequent than changes in the vulnerabilities applicable to the product. As I mentioned above, a product that has thousands of components might literally require a new SBOM every hour, if the SBOM includes information on vulnerabilities. This is because it will need to be updated whenever a new component vulnerability has appeared in, or been removed from, the NVD – as well as in the VEX case, where a component vulnerability that appears in the NVD has been determined not to be exploitable.

Of course, pushing a new SBOM out to customers multiple times a day, or even daily or every couple of days, will be a big chore, even if the whole process is ultimately fully automated. But there’s another way to make vulnerability information available to customers: the supplier can post SBOMs for all versions of all products online (in a portal with controlled access). Moreover, they can constantly look up component vulnerability information in the NVD and post that information alongside the SBOMs (whether or not the information is also included inside the SBOMs themselves – I would recommend it not be, but I don’t think it makes a lot of difference either way). Finally, they can use VEX information (which of course comes from the supplier anyway) to remove vulnerabilities that aren’t exploitable from the list for each product.

In other words, the supplier can provide a constantly-updated list of exploitable product vulnerabilities online, without having to send a single SBOM or VEX to a customer (the SBOMs and VEXes will of course be available for download, for any customer that wants to check the supplier’s work in identifying vulnerabilities). And if they don’t want to do this themselves, I’m sure there will be third parties that can provide this service to their customers in their place – the supplier will just need to provide the SBOMs and VEXes to the third party.

In this post, I pointed out that it should be the supplier’s responsibility – or that of a third party service provider, either chosen by the supplier or by an end user organization - to develop and maintain a list of exploitable component vulnerabilities for each of their software products (and in every version of every product, at least those that are still being supported). I gave reasons for saying this in that post, but I didn’t emphasize the most important reason: that the lack of a consumer tool that will ingest SBOMs and VEXes and constantly list exploitable vulnerabilities in a product currently serves as a roadblock to SBOM adoption, let alone VEX adoption[iv]. If no tool is needed and this service is provided online by the supplier or a third party, the roadblock disappears.

To get back to CycloneDX 1.4, I think it’s a great product, and for certain lightweight VEX use cases it’s quite appropriate. But I think Patrick and Steve are making a mistake by emphasizing that capability so much. I suspect that 6-12 months from now, the SaaSBOM capability will be seen as the huge innovation at the heart of CDX 1.4, with the VEX capability being a nice-to-have but not the primary reason for users to adopt v1.4. The CDX team might want to imagine the use cases that SaaSBOM opens up and make those the heart of their efforts to promote the new version.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Since Dependency Track was originally created to help software developers identify vulnerabilities in components (dependencies) that they’re incorporating into their own products, the fact that VEX isn’t available doesn’t pose a big problem. Developers want to ship products that are vulnerability-free, period – whether or not the vulnerability is actually exploitable. Software buyers will form their opinions on a product based on the number of vulnerabilities they see for it in the NVD, and the supplier won’t usually be given the chance to explain non-exploitable vulnerabilities.

[ii] Note that there’s no need for the VEX to refer to a particular component, as we’ll see in a moment. A VEX is simply an assertion of whether or not a particular vulnerability is exploitable in a particular product/version, period.

[iii] I readily admit I could be wrong about this. There are two JSON examples on GitHub, which I’ll admit I only partially understood. So if I am wrong, I hope someone will correct me.

[iv] There are at least two other such roadblocks. All three would be bypassed and cease to be roadblocks, were my proposal to be adopted. 

Sunday, January 9, 2022

Which is the right SBOM format for us?

 

Note from Tom: There may have been a glitch in the email feed that caused some people (including me) not to receive the email containing my most recent post. If you were one of those unfortunate people who didn't receive it, you may want to read it now. On the other hand, you may consider yourself fortunate when you don't receive my posts...I know there are at least a few people that fall into that bucket.

When they first start learning about SBOMs, some people are dismayed by the fact that there are several SBOM formats, and that the NTIA/CISA Software Component Transparency Initiative so far hasn’t anointed one of these as the Chosen One. In fact, the Initiative makes a point of saying that there is no reason for the software world to standardize on one SBOM format.

Moreover, neither Executive Order 14028 (which mandated that all federal agencies start requiring SBOMs from their suppliers, beginning in August 2022) nor the two supporting documents mandated by the EO (the NTIA Minimum Elements document and NIST’s implementation guidance for the SBOM provisions, which is due on February 6 and for which draft language was posted in November) even hint that there will ever be a single “standard” SBOM format. I won’t say the day will never come when there’s a single universally accepted format, but I will say that I don’t think it should be imposed, whether by government regulations or even by some broad consensus of organizations that develop software and organizations that primarily use software (which pretty much means every public or private organization in the world).

Nevertheless, there’s a lot that can be said about formats:

1.      While the Initiative officially recognizes three formats – SPDX, CycloneDX and SWID – only SPDX and CycloneDX should be considered true SBOM formats. SWID was developed as a software identifier, and that is its primary use. While it does contain component information, including the seven Minimum Elements, it doesn’t offer the richness of the other two formats.

2.      SPDX and CycloneDX differ in their original use cases. SPDX was originally developed (around 12 years ago) for open source license management – a use case that is very important for software developers, but not very important for organizations that do little or no software development. CycloneDX (CDX) was developed in 2017 for purposes of vulnerability management (it was developed for use with Dependency Track, which I discussed in my last post. Both CycloneDX and D-T are OWASP projects; in fact, CDX was designated an OWASP Flagship Project last June. Steve Springett is lead developer for D-T and co-lead for CDX).

3.      Because of their original use cases, SPDX is probably better at license management, while CDX is probably better at vulnerability management. However, one big advantage of having two major formats is that they compete with each other. The current version of SPDX is 2.2 (this version was designated an ISO standard last year). But the next version 3.0 will be a total rewrite, aimed squarely at vulnerability management.

4.      I have been hearing about v3.0 for a while. It sounds good, but since it has been under development for many years, and since I assumed it was going to be hobbled by the need to maintain compatibility with v2.2, I was not expecting it to be available soon (definitely not this year), or to be very useful when it arrived.

5.      So I was pleasantly surprised to hear Bob Martin of MITRE Corporation - who has been very involved with v3.0 development, and who is someone you run into every time you turn a corner in the world of software supply chain risk management – say last week that a) SPDX v3.0 will be available for general use this March, but b) it will not be compatible with v2.2. I interpret the latter to mean that a v2.2 SBOM will not be comprehensible to a tool that reads v3.0 SBOMs (of course, I’m sure there will be a conversion utility between the two formats).

6.      The latter item was good news. Trying to aim for compatibility with v2.2 would have severely limited how well v3.0 could accomplish its purpose of aiding the vulnerability management process. The fact that v3.0 “cuts the cord” to previous versions means it might well be a serious competitor to CDX in the vulnerability management space.

7.      However, CycloneDX isn’t standing still, either. That project will soon announce (I think imminently, but definitely by the end of January) its new v1.4, which will significantly enhance the vulnerability management capabilities in v1.3 – and I thought those were already pretty good. While I haven’t seen any specs for v1.4 yet, I know that one addition will be a VEX-like capability. That might in itself be a very significant development for several reasons, although the practice of using SBOMs for vulnerability management – as currently envisioned – will need to change, in order to take advantage of this capability. But if it’s going to change, now’s the time to do it, before anything is set in stone. The processes for using SBOMs for vulnerability management – for end users who aren’t also significant developers – is somewhere between infancy and toddlerhood.

But speaking of change, as I described recently, I no longer think the primary responsibility for identifying exploitable component vulnerabilities should rest with the end user. It should rest with the supplier, or perhaps with a third party service provider that has been engaged by the supplier or by the user. I have several reasons for saying this, but the most important is: Why should the 10,000 users of a software product all have to perform the exact same set of steps to identify exploitable vulnerabilities – and each acquire a so-far-nonexistent tool to help them do that – when it can be done much more efficiently, and of course at a significantly lower cost per user (in both money and time), by one or a few central organizations (either the supplier or a service provider)? Especially when the supplier should already be doing this work anyway?

In other words, in the long run I believe the biggest use for SBOMs, for software users that aren’t also significant software suppliers, will be checking the supplier’s (or the third party service provider’s) work of identifying exploitable component vulnerabilities. Some organizations will want to do this and will invest in the required tooling and expertise. Other organizations will decide to leave this work to the experts and just focus on the more important part of vulnerability management: given the current lists of exploitable component vulnerabilities in the software products the organization uses, figuring out where and how the vulnerabilities can affect the organization, and taking mitigation steps (primarily, bugging the developers to patch the serious vulnerabilities ASAP) based on the degree of risk posed to the organization by each vulnerability.

Even in the ideal world I’m outlining, I think most users will still want to receive SBOMs and VEXes from their software suppliers, even though they won’t bear the primary responsibility for identifying component vulnerabilities. But the decision about which SBOM format to request from suppliers (or even whether to ask for a specific format at all, as long as the supplier uses either SPDX or CycloneDX) won’t be quite as crucial as I had previously thought.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Wednesday, January 5, 2022

“Weaponized” coffee makers?


This morning, I had a conversation with Tim Walsh of the Mayo Clinic, one of my “TEAMS friends” from the SBOM community (Tim was one of the original members of the Software Component Transparency Initiative and is still quite active in the group).

He mentioned that the Mayo Clinic has several hundred thousand devices on its network (the average hospital room has 13 networked devices), and he would love to have SBOMs for all of them. But he’s not just talking about infusion pumps and heart monitors; he’s talking about coffee makers and vending machines, since they’re all network-connected now. He pointed out that a huge percentage of devices might contain code written in Java, and that those devices are very likely to have log4j, since that’s the fastest and easiest way to implement logging in that environment (it’s also free, of course).

So even though a hack of an infusion pump poses a direct threat to patient safety and a hack of a coffee maker doesn’t, hacking into a coffee maker can give the attacker a platform from which to attack lots of other devices that can affect patient safety (or the financial health of the hospital, of course). I don’t know if Tim has started patching the coffee makers yet, but he’s at least going to watch them for signs of compromise.

Of course, widespread availability of SBOMs would have saved a lot of IT people a lot of time since the log4j vulnerability was announced. Since SBOMs aren’t widely available now, users have to hear from their software suppliers about whether or not there’s any log4j in each product. And Tim pointed out that Mayo didn’t wait to hear from the suppliers; they’re patching all of the devices now, on the assumption that they’re all affected until proven otherwise.

This means that what’s just as important as a supplier notification that their product is affected by log4j is a notification that their product isn’t affected by it – so people like Tim don’t have to preemptively patch it. This is why I was quite pleased to see the notification today from Fortress Information Security that their three main software products aren’t affected by either of the log4j vulnerabilities.

Not only did they provide this notification in a blog post, but they also attached a VEX for each of the products. While these won’t do you any direct good at the moment, since tools aren’t available to read the VEX format (and the format is still being worked out), it’s interesting to see them, nonetheless.

I certainly recommend that other software suppliers provide a similar notification to their customers for every product of theirs that isn’t affected by log4j; you’ll be saving them a lot of unnecessary work. In fact, you might go further and point out that the affected module is log4j-core, so if that module isn’t present in the product, customers don’t need to worry.
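
If you already have SBOMs on hand, even a crude check can do the triage. Here’s a small sketch of my own – it assumes a directory of CycloneDX JSON SBOMs, which is purely my setup for illustration – that flags any product whose SBOM lists a log4j-core component:

```python
import json
import pathlib

# Scan a directory of CycloneDX JSON SBOMs for the log4j-core module.
# (log4j-api alone doesn't carry the log4j vulnerabilities.)
for sbom_path in pathlib.Path("sboms").glob("*.json"):
    with open(sbom_path) as f:
        sbom = json.load(f)
    for comp in sbom.get("components", []):
        if comp.get("name") == "log4j-core":
            print(f"{sbom_path.name}: log4j-core {comp.get('version', '?')}"
                  " found - check against the affected version ranges")
```

Of course, the absence of log4j-core from an SBOM is only as reliable as the SBOM itself – which is one more reason to ask your suppliers for complete ones.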

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they necessarily shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.