Thursday, July 28, 2022

SBOMs for devices vs. SBOMs for software

It has always struck me as odd that only one or two of the documents published by the NTIA Software Component Transparency Initiative (which ended last December) mention SBOMs for intelligent devices, even though the Initiative got its start after the FDA announced in 2018 that they were going to require SBOMs for medical devices in the future (which turns out to be this year). SBOMs will be one small part of the “pre-market” approval process – meaning the device can’t be sold to hospitals unless the manufacturer has provided a satisfactory SBOM to the FDA.

In fact, the “laboratory” in which SBOM concepts were tested was (and still is) the Healthcare SBOM Proof of Concept, in which medical device manufacturers and some large hospital organizations (known in the industry as HDOs, which stands for “healthcare delivery organizations”) exchange SBOMs and document lessons learned. The ongoing work of that group was essential to the NTIA effort and is now informing the CISA effort (although I believe the PoC is now officially under the wing of the Healthcare ISAC). All the SBOMs exchanged in the PoC since 2018 are about medical devices, although the PoC would like at some point to start addressing standalone software products utilized by hospitals, of which there are many.

The reason SBOMs for devices aren’t mentioned, except in one or two documents published by the Healthcare PoC itself, is that there has been a dogma that there’s no difference between an SBOM for a device and an SBOM for a software product that isn’t tied to any particular device. And while it’s true that the form of the SBOM is the same in both cases, what is very different is the set of actions the manufacturer of a device needs to take to help their users identify and mitigate vulnerabilities found in the software and firmware products installed in the device. After all, those actions are what really matter, right?

This spring, I wrote an article with Isaac Dangana, who works for a client of mine in France, Red Alert Labs, which focuses on certification and standards compliance for IoT devices. The article addresses the question of how SBOMs for devices differ from SBOMs for “user-managed” software – which means everything that we normally refer to as “software”; it concludes with a set of actions that we believe IoT users should require (or at least request) of the manufacturers of IoT and IIoT devices that they use.

The article was for the Journal of Innovation, which is published by the Industrial IoT Consortium (IIC). It was published yesterday. You can download a PDF of the article here and you can view the entire issue here. Isaac and I will welcome any comments!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, July 27, 2022

What SBOM contract language should we require? How about “None”?

 

As we approach the August 10 deadline for federal agencies to start requiring SBOMs from their “critical software” suppliers, the question comes up regularly[i], “What contract terms should we require for SBOMs?” I used to take such questions very seriously, stroking my chin and intoning, “That’s a good question. This is something that needs to be addressed soon” – or some incredibly wise statement like that.

I’ve said that, even though I’ve always been skeptical about the usefulness of cybersecurity contract language in procurements. I’ve always believed that the important thing is to get the vendor’s agreement to take steps to mitigate a particular cybersecurity risk (e.g., implement multi-factor authentication for the vendor’s remote access system). This could be through including terms in a contract, but it could also be through getting them to state their agreement in an email or letter.

However, I’ll admit that commitments made in a letter or email are de facto contracts, and an organization can be held to those commitments almost as easily as when they’re included in a contract. If a vendor balks at agreeing to contract terms, they’re unlikely to agree to essentially the same terms in either a letter or an email.

This is why I think it’s fine, in most cases, to simply get the vendor’s verbal commitment to do something, then document who made that commitment and when. You won’t do this in order to nail the vendor in a court of law because of their verbal commitment; you probably can’t do that, and it would probably result in the mitigation not being made for years, anyway. However, it’s amazing how many people seem to think that the goal of supply chain cyber risk management is to get the vendor to agree to improve their cybersecurity in a contract, but not to get them actually to do what they promised. These people think their job is finished when the contract is signed.

Here’s some news: Getting a vendor to sign a contract mitigates zero risk. The risk only gets mitigated when they do what you asked them to do. That’s true, no matter whether you asked them to commit in contract language (signed in blood or just plain ink), an email, a letter, a phone call, a conversation at a restaurant, a message in a bottle, Morse code, cuneiform tablets, whatever. You have to follow up with them – often repeatedly – to ensure they do what they said they’d do. Otherwise, whatever time you spent getting them to sign the contract, or commit in any other way, is wasted.

But, if the vendor has agreed to do something in a contract, isn’t it likely they’ll keep their promise? That depends on the vendor, and what actions you take if it appears they haven’t done what they agreed to do. Is your company’s policy to threaten to sue a vendor as soon as they fall a day behind the commitment they made, then put more and more legal pressure on them until they capitulate? If so, you’d better have a lot of lawyers on staff with nothing better to do than harass vendors.

Here’s an experiment: Ask your procurement people how many times they’ve sued a vendor about anything, let alone a cybersecurity term. When I’ve done that (admittedly not with huge companies, since they won’t answer the question at all), the answer is always that they’ve never sued over cybersecurity terms – or anything else having to do with performance; lawsuits are almost always about money. Moreover, in most cases, the company has sued a vendor, for any reason at all, fewer than maybe five times. Lawsuits are only an enforcement mechanism of last resort; they should never be your first resort – and they shouldn’t be threatened as your first resort, since that just reduces your credibility with the vendor to zero.

The fact is, if a vendor is doing at least a reasonably good job for your organization and they’re liked by the engineers or whoever is dependent on the vendor’s product or service to get their job done, you’ll never sue them over anything. Consideration of the effort and cost of finding a new vendor – and the strong possibility that whatever new vendor you settle on won’t measure up to the one you just fired – will almost always lead to your organization settling whatever dispute you had with the vendor.

So why even pretend that lawyers need to be involved? Sit down with the vendor as soon as a cybersecurity issue comes up and figure out a solution you can both live with: That is, they’re sure they can achieve the objective, while you’re sure that the value of whatever you gave up in your negotiations will be far less than the cost of retooling or retraining for a new vendor.

However, with SBOMs, it’s even more cut-and-dried: Given that SBOMs are just starting to be distributed to customers in dribs and drabs (although software developers are using them heavily now for internal product risk management purposes) and end users have just about zero experience using them in their own cyber risk management programs, it makes no sense now even to talk about contract terms for SBOMs and VEX documents. Before contract terms will ever be useful, there needs to be widespread experience with SBOM distribution and rough agreement on best practices for using SBOMs to mitigate software risk; that agreement needs to follow experience, not precede it. I estimate it will be 5-10 years before SBOMs are being widely distributed and used, and the world has at least a couple years of experience with them. Then we can think about real contract language.

However, I know that most companies of medium-to-large size require a contract with every purchase, and more and more companies want to mention SBOMs in every contract for software; of course, that’s a good thing, and I recommend it as well. But the term should read something like, “Vendor will provide software bills of materials in a format and frequency to be agreed upon between Vendor and (our organization).”

Once that’s agreed on, then the vendor’s product security team and the customer’s supply chain cyber risk management team need to discuss the details of what the vendor will actually deliver. What should the customer ask for? If SBOMs and VEXes were already widely distributed and widely used, I would ask for at least the following (I’m sure I’ll think of other terms later):

1.      A new SBOM needs to be provided whenever anything in the product has changed, including major and minor updates, patches, new builds of existing versions, etc.

2.      Whenever a new SBOM is produced, the supplier should provide both an SBOM produced as part of the “final build” of the product and an SBOM produced from the actual binaries that are distributed. These include not just the software product binaries, but any container, installation files, “runtime dependencies”, etc. – anything that will remain on the system after installation and which may develop vulnerabilities, just like the product itself can.[ii]

3.      The supplier should provide a valid CPE identifier for every proprietary component in the product, and a valid CPE or purl identifier for every open source component (see the sketch after this list).

4.      The supplier should not provide an update to the product that contains any exploitable vulnerabilities that pose a serious risk (I’m sure most current contract language regarding software vulnerabilities relies on the CVSS score as a measure of risk. However, people who know more than I do tell me that CVSS scores don’t really measure risk, or anything else that’s useful. If so, we need another readily available score to take its place. The EPSS score might be that, although I don’t know how widespread its use is. It measures the likelihood that a vulnerability will be exploited, although I need to point out that this is different from the exploitability addressed in a VEX document. See this post for more information). If the supplier can’t avoid doing this in some case, they need to discuss with your organization mitigations that might address the risk posed by these vulnerabilities.

5.      The supplier will patch new vulnerabilities that pose high risk within (15-30) days of publication in the NVD, availability of exploit code, or some other acceptable event.

6.      The supplier will provide a VEX notification that a vulnerability identified in the NVD for a component of their product is in fact not exploitable in the product itself. This should be provided as soon as possible after the supplier determines the vulnerability is not exploitable.
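To make item 3 concrete, here’s a minimal sketch – my own illustration, not an official example – of how a CycloneDX SBOM might identify a proprietary component with a CPE and an open source component with a purl (the proprietary component name and version are hypothetical):

```python
import json

# A hypothetical pair of component entries, following the CycloneDX 1.4
# component schema: a proprietary component identified by CPE and an
# open source component identified by purl.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "acme-crypto",   # hypothetical proprietary library
            "version": "3.1.0",
            "cpe": "cpe:2.3:a:acme:acme-crypto:3.1.0:*:*:*:*:*:*:*"
        },
        {
            "type": "library",
            "name": "log4j-core",    # open source component
            "version": "2.17.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"
        }
    ]
}

print(json.dumps(sbom, indent=2))
```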

These are all worth discussing with the supplier, but you shouldn’t expect to get them to agree to any of these items for a few years (except for numbers 4 and 5. These aren’t dependent on SBOMs being released, so the supplier should be making commitments like these already). But that’s OK. Find out what they can commit to and agree on that, even though it will just be a verbal agreement.

And for heaven’s sake, give the lawyers the day off. Tell them to come back in 5-10 years, when it will be time to discuss specific contract terms regarding SBOMs.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] It comes up from private industry, which of course isn’t subject to the EO; the federal agencies have to utilize the FAR (Federal Acquisition Regulation) and thus don’t have much, if any, control over their contract language.

[ii] I’ll admit that this particular “requirement” is something that I personally think is needed – not necessarily the NTIA, CISA, The Tibetan Book of the Dead, Baháʼu'lláh, or any other entity.

Saturday, July 23, 2022

Everybody needs to be more creative in addressing vulnerabilities

 

Almost predictably, Walter Haydock has produced another excellent post dealing with vulnerability management. I recommend you read it.

You will notice that the post addresses concerns of software developers, not end users. Since I don’t think too many developers read my blog for development advice (for that matter, I don’t think too many hog producers read my blog for hog production advice, either. The two are about equally likely to occur), you may wonder why I’m suggesting that you, an end user of software, read it.

The first and perhaps more obvious reason is that end users need to know what’s reasonable to require of suppliers regarding vulnerability management and what isn’t. If you look at Walter’s post from that perspective, you can read it as saying in general that requiring suppliers (in contract language or simply in an email) to follow rigid rules like “Never release a product with vulnerabilities having a CVSS score of >7.0” will sometimes be counter-productive (as Walter explains, this might mean the supplier will take longer to patch a serious vulnerability, if another serious vulnerability appears just before the deadline for the supplier to patch the first one). He suggests a more nuanced approach.

And while I’m at it, you should keep in mind that requiring a supplier never to release a product if there are any vulnerabilities in it could lead to the supplier ceasing to report vulnerabilities to the NVD at all (remember, by a large margin, most vulnerabilities are reported to the National Vulnerability Database by the supplier itself). Unless assiduous security sleuths are searching for and reporting vulnerabilities in that supplier’s products (and I doubt that happens for any but a tiny fraction of software products and intelligent devices today), soon there won’t be any vulnerabilities that show up in a scan of the product, since none will be found in the NVD. Problem solved (from the supplier’s point of view, anyway). The fact is that patching all vulnerabilities, regardless of severity or exploitability, is a fool's errand.

The second reason why a software user should read Walter’s post is the careful, balanced way he approaches vulnerability management. This applies both to this post and his posts on vuln management that are aimed at end users (note to Walter. Please put links to one or two posts focused on end user vuln management in the comment section below, as well as in the comments on this post in LinkedIn). 

Walter always keeps the North Star of maximum possible risk mitigation in front of him when he writes these posts, but he’s also careful not to become dogmatic about that. This is especially true when striving for maximum risk reduction would require a complicated process that would make Rube Goldberg blush with envy. It’s much better to provide advice that people can actually follow, as opposed to advice that might win kudos with the risk management freaks but would be impossible to follow in practice (or even counterproductive, as Walter regularly points out).

Tim Roxey emailed me the following comment on this post:

Failure of imagination was a blue ribbon panel finding for the 9/11 commission. It is a finding of many of the attacks I have studied both inside CONUS and on the events the US gov had asked me for support in other locations globally. Including war zones. 

Its simplest observable is in the questions “Who would do that?” or “Wait, what?”
Cognitive dissonance.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, July 20, 2022

Meanwhile, back in Texas…

The ERCOT grid in Texas has been back in the news lately. At first, there was a lot of speculation that the current heat wave would bring about another debacle like the Valentine’s Day 2021 disturbance, in which – in the early morning hours of Monday, February 15 – the ERCOT grid (which covers most of Texas, other than El Paso and parts of eastern Texas and the Panhandle) came within 4 minutes and 37 seconds of what T&D (Transmission & Distribution) World described as “total collapse”.

Had that happened, the huge turbines that power much of the grid (especially in Texas) would possibly have been damaged, meaning it might have been far more than a few days before the whole system was up and running again. Some people might literally have been in the dark for months. The official death count from this incident is 246, but estimates go as high as 700. This compares with six deaths in the great Northeast Blackout of 2003, which covered a much larger area and affected far more people (55 million, vs. 4.3 million in ERCOT).

Fortunately, nothing like that happened this time – at least not yet. The worst that has happened so far is that ERCOT had to warn people about conserving power and held out the possibility of rolling blackouts. However, here’s the point: The last time I checked, this is the third decade of the 21st century (I believe that’s true in Texas as well as Illinois). Why should Texas’ grid be continually skirting the edge of disaster (and the 2021 outage wasn’t the first serious winter outage), when the rest of the country doesn’t have this problem – although smaller outages occur everywhere from time to time?

The answer is simple: Everywhere else in North America except for Texas and Quebec, the state/province has AC ties to the larger grid (in other words, every non-Texas, non-Quebec city is part of either the Eastern Interconnect or the Western Interconnect – although I believe that, in the northern parts of Canada, the notion of “connected” becomes pretty tenuous. I think that area is more of an archipelago than an interconnected grid). But neither ERCOT nor Quebec is connected by AC to any larger grid. They both have DC connections to the outside world, but not AC.

Why is there a difference between AC and DC connections? AC connections are governed by the laws of physics and nothing more. If a disturbance occurs on one side of the connection, its effects will immediately be felt on the other side (a serious disturbance in southern Florida in the early 2000s was detected in less than a second in Alberta), and power will flow in one direction or the other to equalize any difference (although this may lead to wild oscillations for a brief period of time, in which power flows in one direction, then the other. In the 2003 event, for a minute or so, there were massive power flows going in one direction then the other, across the northern shores of the Great Lakes, between the East Coast and Michigan. They reversed direction every second or two).

However, DC connections won't adjust to compensate for a disturbance on one side, like a drop in frequency, since DC doesn't travel in waves like AC does. In addition, the amount of power flowing through any single line, AC or DC, isn't going to make any real difference if there's an ERCOT-wide disturbance. Thus, even though Texas has DC connections to the Eastern and Western Interconnects (and to Mexico), they couldn’t have prevented the 2021 event (although ERCOT did draw on them to help in recovery from the event). Sure, the cold weather was affecting neighboring states as well, but had Texas had AC connections to them, the neighboring states would have almost instantaneously drawn power from their neighbors, so there would have been at least some help for ERCOT when it was needed. After all, the same cold was experienced by nearby states like Louisiana, with nothing like the consequences in Texas.

Why isn’t Texas part of the larger grids? It’s the result of deliberate decisions, meant to avoid the "problem" that they would be subject to federal regulations that only apply when there are AC connections crossing state lines.[i] I used to think that Texas had deliberately cut their connections with the rest of the country, but in fact, they never had them in the first place. The Federal Power Commission (the predecessor of FERC) was founded in 1920, as utilities in different cities – which had previously been electrical islands - started connecting to each other and needed some common rules. In 1935, President Roosevelt signed the Federal Power Act, which gave the FPC the authority to regulate interstate power sales; that has continued under FERC. By avoiding interstate sales, ERCOT is thus generally not subject to FERC’s jurisdiction.

Developing AC ties with the other grids would not only bring much-needed stability to the ERCOT grid, but it would open up a much wider market to Texas’ huge – and growing – wind power industry. Those wide-open spaces in Texas (typical Texas directions: “Drive straight for 8 hours. Turn left at the Dairy Queen”) are perfect playgrounds for wind. In fact, with 33,133 megawatts – i.e., about 33 gigawatts – of wind power capacity installed, Texas has three times as much capacity as the number two state, Iowa (with about 11 gigawatts). Texas wind producers would be able to share their blessings with other states, rather than being confined to Texas customers. This would be a win for the other states, as well as Texas.

However, there is one advantage of not being connected with AC to the rest of the country: serious disturbances in other states don’t automatically propagate into ERCOT. I don’t know of any case where Texas avoided a serious blackout due to being isolated, but I do know that, in the 2003 Northeast Blackout, Quebec was untouched by the devastation. Meanwhile, almost the entire province of Ontario, their next-door neighbor, was blacked out - in some parts for days.

However, Quebec felt the disadvantages of isolation in 1989, when a solar storm, which didn’t affect their neighbors much, left the entire province blacked out. In general, there’s a lot to be said for being connected to the neighboring grid, and much less to be said for being disconnected from it.

However, the biggest reasons for being interconnected are financial: In the 2021 incident, wholesale market-based power prices in Texas spiked from the $20-30 per megawatt-hour range to $9,000. Spiking is bad, of course (prices spiked to the current limit of $1,200 for a brief period last week), but this was made catastrophically worse by the Public Utility Commission of Texas. In a six-minute meeting on Monday morning at the depths of the crisis, they removed the normal $1,200 cap on prices and set it at $9,000, so that supply could match demand.

That was perhaps necessary at that time, but the PUC left the cap at $9,000 for four days, by which time the market price had dropped back to around $20. And ERCOT contributed to the stupidity by telling the generators (all privately owned) that they could set their prices at $9,000 for all four days. What was the final bill that consumers and some utilities[ii] face for all of this? $29 billion.

Could this have been avoided, had ERCOT been connected to the rest of the world? Absolutely. Let’s assume ERCOT had been connected to the Eastern Interconnect in 2021. When the PUC and ERCOT artificially pushed the price to $9,000 and left it there for four days, every other generator on the Interconnect would have been doing everything they could to ship power to Texas. The price would have come down very quickly, although it would probably have been well above $20 for at least a couple of days. However, the total bill would have been far less than $29 billion.

So, Texas is penalizing itself severely by insisting on remaining separate from the rest of the US power grid. Part of that penalty is the people who died in 2021. Another is the $29 billion that Texans will pay (the details aren’t worked out yet, and the generators who realized the windfall will undoubtedly be forced to take a haircut) for that event.

But there’s an even bigger price that Texas will have to pay, if it wants to have a stable grid but wants to stay disconnected. There have to be a lot of investments in winterization, of course. But there needs to be more baseload generation (and probably fossil), so that ERCOT will finally have the reserve margin it needs to weather whatever the changing climate throws its way. They can’t really expect to remain hanging by a thread, hoping the next blow isn’t the fatal one.

In other states, the reserve margin consists mostly of generators in other parts of their Interconnection. In the Eastern Interconnect, it’s guaranteed that a winter storm in the Northeast won’t at the same time be matched with winter storms everywhere east of the Mississippi (the approximate boundary between the two Interconnects). And a hurricane in the Southeast won’t be matched simultaneously by hurricanes in say Chicago and Detroit (it's been quite a while since we had a hurricane in Chicago - like never). But Texas can’t count on anyone but itself. Because of that, they need to build new generation capacity as if the rest of the US were a vast wasteland, with no power to send to it in an emergency. That's another huge - and unnecessary - financial cost to add to the $29 billion from last year.

This is a purely self-inflicted burden, but Texas seems to want it that way.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Even though the NERC Reliability Standards, including the CIP cybersecurity standards, are federal regulations and technically not mandatory for them, ERCOT voluntarily complies with them. But there are other regulations, usually enforced by FERC, that ERCOT doesn’t follow. 

[ii] Actually, a lot of generators were victims of this as well. They had obligations to provide power during the outages, even if they were shut down by the cold weather. So they had to pay whatever was demanded for power, in order to fulfill their contracts.

Sunday, July 17, 2022

Another word to my Russian friends (?)

Those whom the gods wish to destroy, they first make mad.

- Ancient Greek saying

In the summer, there’s a significant drop in readers of this blog on weekends. Of course, that’s quite understandable. So, I was surprised on Saturday to see a big increase. I’ve seen unexplained jumps before, and there’s one cause I always suspect. When I checked, I saw my suspicion was correct: my Russian “friends” have suddenly decided they need to read my blog all at once.

Other countries seem to discover the blog, read it intensely for a week, then drop back in the rankings; but there’s always some residual traffic. The Russians seem to jump in wholeheartedly, then in a few days almost totally disappear. The first time this happened was in 2018, right after Rebecca Smith’s Wall Street Journal article on a presentation by the DHS NCCIC (one of the predecessor agencies of CISA) caused a worldwide firestorm. The presentation detailed an extensive supply chain campaign by the Russians against the US power grid. Rebecca’s article led to DHS walking back everything said in the presentation over a couple of months, with four mutually incompatible stories (and even more mutually incompatible stories later, just for good measure, it seems).

BTW, from that time until the SolarWinds attacks were discovered, DHS was completely absent from the Russia beat (and no other agency was taking up the slack). As a result, the 15-month campaign by the Russians against SolarWinds, which Microsoft estimated required 1,000 people to execute and was entirely launched from US-based servers, flew completely under our radar. If DHS hadn’t been forced by higher-ups at 1600 Pennsylvania to cease and desist from all investigation of Russia because of Rebecca’s article (which I’m sure is what happened), maybe we could have averted at least some of the tremendous damage caused by the SolarWinds attack. Such is the cost of political interference with cyber investigations.

Back to our story: During the week and a half after Rebecca’s article and my first post the next day, I had many more Russian readers than American ones. That trend continued (not as pronounced, though) for a few more weeks, but then my Russian readers disappeared again (I wrote at least 10 or 12 posts on this topic, over the next month). However, in December 2018, I got hit with another huge spike of Russian interest, for no apparent reason at all; I wrote about it in this post.

In the post, I chided my “Russian friends” for living docilely under the reign of one Mr. Putin, who – it seemed to me – was completely squandering the tremendous technical talent in Russia (at least it was present in Russia at that time. That’s much less the case today). I’ve since written a number of times about the wonderful Mr. P, as well as about Gen. Valery Gerasimov, the Chief of Staff of the Russian army – another wonderful person who just loves to threaten the US with nukes and everything else (as does Mr. P himself). Note there’s another Gen. Gerasimov, of a much lower rank, who was killed in the Ukraine war earlier this year. At first, I got my hopes up that it would be Valery, but no such luck.

Now, to my Russian friends who have been reading my posts: Since you’re reading them, I assume you’re in technology in some way. You’ve probably already seen a lot of your friends leave the country since Mr. P decided it was a great idea to invade Ukraine, thus proving that just because someone talks like a strategic thinker doesn’t mean they act on anything more than blind impulse. Why haven’t you left with them? I hope it’s because you want to stay in Russia and undermine the Putin regime to the best of your ability, although I don’t have to warn you that doing so is dangerous to your health.

But, if you support the Ukraine War (and supposedly 80% of Russians say they support it, although polls in Russia are always suspect. Nobody can be sure their answer won’t find its way to the local officials, who might feel inclined to make their life unpleasant) or at least maintain calculated ignorance about it, I’ll tell you what I compare you to: the man who jumps off the top of the Sears Tower in Chicago. As he passes the 50th floor, he calls out, “So far, so good!”

It might appear to you that Russia is doing well even now, but you’re being fooled by a Potemkin Village (I assume I don’t have to tell you what that means). Russia has passed at least the 75th floor and is heading rapidly toward the ground. “Ah, but the ruble’s doing great!,” you say? Exchange rates are primarily determined by the balance of trade and balance of payments. When you’re not importing anything at all because of the embargo, but you’re still selling about a billion dollars a day worth of oil and natural gas to Europe and elsewhere, of course your balance of trade looks great! But soon the exports will be down to zero, since Europe will stop buying oil and – a few years later – natural gas from you. Moreover, you certainly can’t sell anything else to the West other than commodities, since you no longer have access to the chips required to make virtually anything nowadays (that’s not true. My knives and forks don’t have chips in them, at least not yet).

But what are all the imports you have suddenly realized you can do without? Chips, for one. Aircraft parts are another. And therein lies a story: Russia has about 200 aircraft (Boeing and Airbus) that are on lease from Western companies. The leases have essentially been terminated for lack of payment, but Putin is holding onto the aircraft. His main use for them seems to be to cannibalize them for parts, to make up for the lack of new parts coming in. Do you think that, no matter what happens in the Ukraine War, the West isn’t going to demand payment for those aircraft?

And more to the point – again, regardless of the outcome of the war – do you think Russia isn’t going to end up paying a hefty part of the cost of rebuilding Ukraine, which of course goes up literally by the minute? Even more to the point, it seems Mr. P has decided that, since the Donbas offensive is at the moment going nowhere (perhaps because the new long-range artillery provided by the US and European countries is having a significant effect on the supply lines for the offensive, since it can hit ammo dumps and fuel sites 40 and 50 miles away), to keep his troops happy, he has to let them unleash literal terror attacks on Ukrainian cities. A hospital here, a shopping center there – all military targets, to be sure. Do you think Russia isn’t going to be held to account in some significant way for the huge civilian deaths, which seem to be the only thing the Russian military is able to achieve?

Did you see the photo of the young girl’s feet protruding from the rubble of an apartment building that was hit by a rocket attack very recently? 3 kids and 20 adults were killed in that attack – undoubtedly, all military personnel. The mother’s legs were lying nearby, but she survived – not that she wanted to survive, having lost her daughter in that way.

And how about the hundreds of citizens of Bucha who were killed by soldiers of the Wagner group, the mercenary organization that carries out civilian atrocities in the service of various murderous dictators in Africa, with Mr. P as their paymaster? They used their wonderful experience with murder to kill citizens of Bucha and other towns. Mr. P made sure to give them medals after Bucha. However, there is some good news in this: The Wagner group seems to be the only Russian company that’s in growth mode now. Killing people is now one of your growth industries. I guess it’s a substitute for chips.

Do you really think that, after all of this is finished, the civilized world (which doesn’t include Russia now, of course) is going to just forgive and forget all these things? Do you really think that, if you decide to stay in Russia and Putin isn’t replaced by an actual democratic government – not just a “government” by one of his sub-thugs – every current Western sanction won’t stay in place for many years, to be followed by many new sanctions (at least, I hope that happens)? Do you really think that you won’t face a living standard that will decline year after year (forecast GDP decline this year: 10%. That’s huge, in case you didn’t know), so that in ten years or so, Russia will be back in the Middle Ages, perhaps with the serfs once again supporting the economy as semi-slaves?

My friends, you’re living in a fool’s paradise. Your only hope is a) to leave the country ASAP or b) do what you can to replace Mr. P and his kleptocratic friends with an actual Western-style democracy. Otherwise, you’re probably condemning yourself to a life of poverty and drink (the favorite Russian sport nowadays, I imagine – but that’s nothing new).

However, other than that, things aren’t so bad…

Note from Tom Monday 7/18 7:24PM: After having hundreds of hits from Russia on Saturday and Sunday (which prompted me to write this post), I went to literally zero in the last 24 hours. This leads me to believe the post was blocked by the Russian censors soon after it went up, so this never reached the main people I was writing it for. However, I was also writing it for Western audiences. Russia needs to be treated as a pariah and criminal from now on, until Putin and his friends are removed from power in one way or the other. As long as it's permanent removal, I'm not picky about how it's done.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Thursday, July 14, 2022

What’s a VEX? What isn’t a VEX?


The CISA SBOM “listening sessions” kicked off this week, and they were all good – and very well attended. There are still two more next week, so you might want to check them out, if you’re at all interested in SBOMs. No registration is required, and you can get the times and connection information here.

One thing that struck me during the conversations is how many people spoke very knowledgeably about VEX documents – and had completely the wrong idea about them. They were not only wrong, but they were 180 degrees wrong.

However, I don’t blame them for this. Until a few months ago, the only official publication on VEX was a one-page document published by the NTIA last fall. Having prepared the first draft of that document, I thought it was very good, but it’s now very much out of date and shouldn’t be relied on. There are now two good documents – one of which came out yesterday – that are useful for particular topics, but neither one constitutes a comprehensive introduction to VEX and how VEX documents can be produced and used. The two documents are both available at the first link above. I also have produced a number of posts on VEX, which you can find by searching on the term in the search bar at the top left.

The problem with what these people said is that they think VEX is a vulnerability report – a list of vulnerabilities that apply to components found in an SBOM, or maybe vulnerabilities in the product itself. What’s confusing is that a VEX can be exactly that. It can say (in JSON), “This component included in product X version Y is affected by CVE-2022-12345.” Or perhaps, “Product X version Y itself is affected by CVE-2022-12345.” Given the dearth of official information on VEX, it is quite understandable that people think this must be the purpose of a VEX. After all, they’re used to receiving notifications like that from their software suppliers now.

However, if that were the purpose of VEX, there would have been no need to invent the format (actually “formats”, since there are two, although they’re very different from each other). Yes, VEX is machine readable, and most of us are only used to seeing human readable vulnerability notifications in the form of emailed PDFs or web page notices. But machine-readable vulnerability notifications have been around for at least seven years. In fact, one of the two VEX formats is based on the CSAF vulnerability reporting format, which gives the supplier a huge range of options for reporting vulnerabilities. However, that VEX format itself just utilizes a small percentage of the available fields in CSAF. If your aim is to make the type of vulnerability reports that you’re used to seeing in PDFs, you would be much better off using the full set of CSAF fields, not the VEX subset.

VEX was developed because there was a need for a machine-readable format that said the exact opposite of what a traditional vulnerability report says. Instead of saying, “Product X version Y is affected by CVE-2022-12345”, a VEX says “Product X version Y is not affected by CVE-2022-12345”. At first glance, that might seem like a strange thing to say. After all, there are well over 100,000 CVEs in the CVE database, and more are added all the time. Assuming a product is currently affected by ten vulnerabilities, does that mean the supplier needs to put out both a traditional notification that the product is affected by those ten CVEs, and a VEX that lists the 100,000 or so vulnerabilities it’s not affected by?

Fortunately, that’s not what VEX was developed for. VEX was developed to deal with the problem that, when you receive an SBOM and look up (in the NVD or another vulnerability database) the vulnerabilities applicable to the components, for every 20 vulnerabilities you identify, usually 18 or 19 of them (maybe 20 in some cases) will not be exploitable[i] in the product – meaning you wouldn’t find those vulnerabilities if you scanned the product with a vulnerability scanner, and a hacker wouldn’t be able to utilize them to attack the product. You don’t have to take any further action regarding the non-exploitable vulnerabilities.

Now you probably think I’m crazy (and there would be some merit to that diagnosis, I’ll admit). Why is it a “problem” if it turns out that 19 out of every 20 vulnerabilities identified in components of a product aren’t exploitable in the product itself? After all, you’ve just reduced your vulnerability management workload to 5% of what it would have been otherwise.

It’s a problem because, after you find those vulnerabilities in the NVD, you have no way of knowing which of them are among the 19 non-exploitable vulnerabilities and which is the exploitable one. You need to look for all of them on your network, and then you need to call up and interrogate your supplier’s help desk staff about when they’re going to fix all 20 vulnerabilities.

But if you do harass the supplier about all 20 CVEs, the tired help desk person is going to tell you, for 19 of them, “Yes, I understand that CVE applies to component A in our product, but actually – as I’ve already told 245 people today – the CVE isn’t exploitable in the product, so you don’t need to worry about it anymore.”

And then you’ll feel like an idiot, both for wasting the help desk person’s time and, most importantly, wasting your own time. Wouldn’t it be great if you could have learned about the non-exploitable vulnerabilities before you started pursuing them?

This is the problem that VEX was invented to solve. After you receive an SBOM, you’ll receive VEXes as the supplier realizes that certain vulnerabilities aren’t exploitable in their product, even though they are listed in the NVD for a component of the product. You will sleep better at night and the supplier will sleep much better, knowing they don’t have to hire another 50 help desk people to handle all the calls about vulnerabilities that turn out to be non-exploitable (in fact, one of the two huge suppliers that sparked the development of VEX in 2020 estimated that they would get thousands of false positive help desk calls for exactly this reason, if there weren’t some way of notifying users about non-exploitable vulnerabilities).
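To make this concrete, here’s a minimal sketch of what such a “not affected” statement might look like in the CycloneDX VEX format (the product, the BOM serial number and the CVE are all hypothetical; the field names follow the CycloneDX 1.4 vulnerability schema):

```python
import json

# A hypothetical CycloneDX VEX document containing a single negative
# statement: CVE-2022-12345 applies to a component of Product X, but is
# not exploitable in Product X itself.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "vulnerabilities": [
        {
            "id": "CVE-2022-12345",
            "source": {
                "name": "NVD",
                "url": "https://nvd.nist.gov/vuln/detail/CVE-2022-12345"
            },
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "Product X never calls the vulnerable function "
                          "in the affected component."
            },
            # "affects" points at the product, not the component: the
            # statement is about exploitability in Product X itself.
            "affects": [{"ref": "urn:cdx:example-serial/1#product-x@2.4"}]
        }
    ]
}

print(json.dumps(vex, indent=2))
```

A user-side tool that ingests this document can automatically cross CVE-2022-12345 off the list of vulnerabilities it found by looking up Product X’s components in the NVD.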

Of course, VEXes can be used to create positive vulnerability notifications as well – i.e., “normal” ones. And there are uses for positive notifications in VEX, usually in combination with negative notifications. For example, when a supplier has patched a serious vulnerability and wants to let their customers know about the status of the vulnerability in all versions of their product – i.e. whether each version is vulnerable or not vulnerable - they can issue a VEX that provides all this information in one place, in a machine-readable format. This post discusses that use case.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] For a discussion of exploitability, see this post.

Tuesday, July 12, 2022

One month ‘til EO compliance is due. Do you know where your SBOMs are?


Executive Order 14028, Improving the Nation’s Cybersecurity, was issued on May 12, 2021. It includes a provision regarding SBOMs, as is fairly well known. The EO itself didn’t include any due dates, but it delegated the task of administering compliance for the whole government to the Office of Management and Budget (OMB).

OMB produced a memorandum last August 10, which discusses the different phases of compliance and an overall timeline. It includes this sentence (on page 3):

Within one year of the publication of this memorandum, agencies must implement the security measures designated by NIST for all categories of critical software included in the initial phase.

I did some advanced math and calculated that the due date is less than one month from now (I originally had the 12th in mind, which is why I wrote the post today. Oh, well. The content’s what matters).

So given that there are 28 or 29 days until compliance, what do federal agencies have to do regarding SBOMs? There’s a big list of requirements. Are you ready for it? Here goes…

1.      They have to ask their software suppliers for an SBOM.

That’s it. Of course, if the supplier doesn’t provide them at least one SBOM, the agency isn’t on the hook for non-compliance, since they tried. And the supplier isn’t on the hook, either, since the EO only applies to federal agencies.

However, I don’t doubt that most suppliers will be able to provide the agency with at least one SBOM for every product the agency obtains from them. How do I know this? Because software suppliers are using SBOMs very heavily now, as described in this post. It turns out that the suppliers find SBOMs very helpful for managing software component security. Who would have thought? My guess is most suppliers won’t have a problem sharing their SBOMs (and the fact that it’s mandatory under the EO will also mean something to them, even though neither they nor the agency will end up in the slammer if they don’t comply at all).

However, you might ask, “What are those agencies supposed to do with the SBOMs, once they have them?” Ah, there’s the rub. The EO says nothing about that subject, but it did delegate the task of providing all of the guidelines regarding EO subjects to NIST. NIST put out a document (actually, a first draft of the first revision of NIST SP 800-161, the supply chain cybersecurity framework) in December that, among other topics, addressed SBOMs.

Here is my generally-positive-but-not-completely-so review of the document. The big problem, IMHO, is that the SBOM portion of the document (pages 242-246) was written for federal agencies, yet all but two sentences apply to suppliers, not the agencies. Granted, the agencies need to know what to ask the suppliers to do regarding producing and delivering SBOMs, so that part is generally useful.

The problem is that the two sentences (on page 244) are NIST’s only answer to the question I asked above: What should the agencies do with the SBOMs when they receive them? And here are the two sentences:

Develop risk monitoring and scoring components to dynamically monitor the impact of SBOMs’ vulnerability disclosures to the acquiring organization. Align with asset inventories for further risk exposure and criticality calculations.

To be honest, if I were to develop a two-sentence summary of what an agency – or any end user organization – should do with SBOMs, I couldn’t come up with something much better than that. On the other hand, those two sentences are far too general to provide any real guidance to an end user organization. In fact, the only real end-user guidance I know of is this document produced by Fortress Information Security (for whom I do consulting, although I wasn’t one of the authors of the document). Literally every other SBOM document that I know of is aimed at software suppliers.
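For what it’s worth, here’s a rough sketch – purely my own illustration, not anything NIST prescribes – of what those two sentences could mean in practice: join the vulnerabilities disclosed via SBOMs against an asset inventory, weighted by how critical each asset is. All names and numbers are made up:

```python
# Vulnerabilities identified per product via SBOM component lookups
sbom_vulns = {
    "product-x@2.4": ["CVE-2022-11111"],
    "product-y@1.0": ["CVE-2022-22222", "CVE-2022-33333"],
}

# Asset inventory: which systems run which products, and how critical
# each system is to the agency's mission (1 = low, 5 = high)
inventory = [
    {"system": "payroll-server", "product": "product-x@2.4", "criticality": 5},
    {"system": "test-bench", "product": "product-y@1.0", "criticality": 1},
]

# Naive first cut at "risk exposure": flag every (system, CVE) pair,
# carrying along the criticality of the affected system
for asset in inventory:
    for cve in sbom_vulns.get(asset["product"], []):
        print(f'{asset["system"]}: {cve} (criticality {asset["criticality"]})')
```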

But the lack of official guidance is only one problem facing the federal agencies when they start receiving SBOMs in August. The other problem is that there’s only one complete software tool for end users that will ingest both SBOMs and VEX documents and output a list of exploitable component vulnerabilities in the software. That is Dependency-Track, which – believe it or not – was developed ten years ago by Steve Springett (co-leader of the CycloneDX SBOM format project). I wrote about Dependency-Track about two weeks ago, when it became the first complete tool after it started ingesting VEX documents, along with SBOMs (which it has ingested since 2012, before the term “SBOM” was being used).

Dependency-Track is very good and is very heavily used by software suppliers (as the first post I linked above shows), but it’s open source, which is a problem for some organizations. So there really need to be other products and especially services that will utilize both SBOMs and VEXes to manage software component vulnerabilities.[i] At the moment they’re not here, and they won’t be here in August, either.

However, there’s a lot happening in the SBOM world now (one good thing is CISA’s SBOM Listening Sessions, where groups gather to discuss different issues with SBOMs. They’re open to anybody, and don’t require a reservation. Today they kicked off with two sessions, and they were both very well attended. To find out the topics and times, go here and look for the listening sessions), so it doesn’t bother me (or the federal agencies, I assume) that the SBOMs are unusable at the moment.

That will change this fall, and I’m sure that, by the beginning of next year, there will be a lot of options for SBOM/VEX consumption tools, as well as guidance on using both SBOMs and VEXes for vulnerability management. In the meantime, we can all learn a lot more about SBOMs and VEXes, so that we’re ready for them when they become much more widely available than they are now.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] There will be a subscription-based service available soon that will do what Dependency-Track does, as well as other risk analysis based on SBOM data. However, I don’t have details on that service yet. I anticipate that a lot of organizations will be glad to turn the whole job of software risk management based on SBOMs over to a third party, rather than have to do it themselves.

Thursday, July 7, 2022

Two types of exploitability

When software bills of materials (SBOMs) started being regularly used for vulnerability management purposes (I don’t believe there’s a historical marker for the day that this happened, but I’d nominate the day when Steve Springett posted the first version of Dependency-Track in 2012. It was based on the concept of a BOM, though this was before the term SBOM was even coined), this entailed changes in how organizations thought about software supply chain risks – if they were even thinking about them at that time (most probably weren’t).

The big change was that there was now a distinction between a vulnerability being present in a software product and its being exploitable in the product; previously, both words would have been considered to have the same meaning. However, once an organization could learn about vulnerabilities applicable to components in their software products (and the first users of Dependency-Track were developers. In fact, the great majority of users today are probably still developers), they discovered that many vulnerabilities that were applicable to a component, when it was considered as a standalone product, were not applicable after the component had been installed in their product; in fact, this was true for the great majority of component vulnerabilities.

The word that was used to describe the difference between the two cases was exploitable. If a vulnerability is exploitable in a product, this means it is not only physically present in the product, but it can be attacked by a hacker who is up to no good. If a vulnerability is physically present in a component but can’t be successfully attacked in the product itself, this means it isn’t exploitable in the product.

Of course, the VEX format was developed for precisely this purpose: for a supplier to notify their customers that, even though a component in one of their products (which is listed in an SBOM) is noted as subject to a particular vulnerability in the National Vulnerability Database (NVD), the product itself isn’t subject to this vulnerability. Therefore, the customer shouldn’t waste time looking for the vulnerability – and BTW, they shouldn’t tie up the supplier’s help lines asking about the vulnerability.

So, one of the supplier’s jobs is to continually search the NVD (and other vulnerability databases) for vulnerabilities in components they have installed in their products. They need to determine whether or not each component vulnerability is exploitable in the product itself; when it isn’t, that’s usually because of how the component was incorporated into the product.

When the supplier discovers that one of these vulnerabilities isn’t exploitable, they should issue a VEX document stating that fact. The tooling on the user’s end (or at a third-party service provider that performs this service for the user) should maintain a list of exploitable vulnerabilities in each software product and version they utilize, and remove from this list any non-exploitable vulnerabilities. This is important, since probably over 90% of component vulnerabilities aren’t exploitable in the full product. Having an up-to-date list of only exploitable vulnerabilities will save staff members a lot of time wasted in searching for the non-exploitable ones, as well as calling the supplier to ask when they’ll be fixed.
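Here’s a minimal sketch – not a real tool, with hypothetical product and CVE names – of the bookkeeping just described: start from the component vulnerabilities an SBOM lookup turns up, then drop whatever the supplier’s VEX statements say is not affected:

```python
# Component vulnerabilities found in the NVD for (hypothetical) product-x@2.4
component_vulns = {"CVE-2022-11111", "CVE-2022-22222", "CVE-2022-33333"}

# (CVE, state) pairs extracted from the supplier's VEX documents
vex_statements = [
    ("CVE-2022-11111", "not_affected"),
    ("CVE-2022-22222", "not_affected"),
]

not_affected = {cve for cve, state in vex_statements if state == "not_affected"}

# What's left is the short list worth scanning for and asking the supplier
# about; the other 90%+ can be dropped from the tracking list.
exploitable = component_vulns - not_affected
print(sorted(exploitable))  # ['CVE-2022-33333']
```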

However, I believe this determination of exploitability almost always must be made by the supplier. The supplier, or someone that knows how the product was put together, is the only entity that can reliably state whether or not a vulnerability is exploitable in their product. They wrote all the first-party code and installed all of the components (which nowadays make up about 90% of the code in the average software product). They can make judgments like, “There is no way that an attacker could ever reach this vulnerability to exploit it.” I don’t believe that, in most cases, any other entity can reliably make such a statement, even if they can review the source code[i]. I can attest that the CISA (formerly NTIA) VEX committee has always worked under the assumption that VEX documents will almost always be issued by the supplier of the product.

Yet, this will clearly not always be the case. For example, open source projects are staffed by volunteers who consider themselves coders. Yes, they should be concerned about vulnerabilities that turn out to be exploitable in the product they’ve developed, but I think it’s too much to expect them to spend a lot of time figuring out whether vulnerabilities are not exploitable. I’m not expecting too many open source communities to put out VEX documents of their own accord.

What about commercial products? A commercial supplier should in theory want to put out VEXes regularly, since a user who learns from a VEX that a vulnerability isn’t exploitable in a product they use won’t feel they need to call the supplier’s help desk with that question. In fact, the development of the VEX format was sparked by two very large suppliers who were concerned about this problem; one estimated they’d get literally a couple thousand unnecessary calls every month if they just put out SBOMs, without also putting out VEXes.

However, I’m also sure that a lot of commercial suppliers won’t put out VEXes for their products, even if they do put out SBOMs. Perhaps they won’t understand why VEX is important, or maybe they simply won’t want to invest the time required to learn about the format. This means there will definitely be a need for third parties to develop VEXes for both commercial and open source products.

However, I also believe that the VEXes produced by these third parties will be fundamentally different from those produced by the product suppliers themselves. This is because the third parties, no matter how expert, don’t understand exactly how a product (open source or commercial) was developed.

Does this mean we’re all SOL, when it comes to securing open source products and commercial software products whose suppliers don’t feel like producing VEXes? No, because there are two kinds of exploitability. The first (call it Type 1) is the absolute kind, which is what the supplier of the product can tell you about in a VEX. Only the supplier can make a categorical up-or-down statement about whether or not a vulnerability is exploitable in a product.

But it’s still possible to make statements about exploitability that don’t have the categorical quality of the supplier’s statement. A good example is this excellent blog post by Walter Haydock. It’s aimed mostly at software developers rather than end users, so it might justifiably be considered overkill for the latter, especially given the large numbers of vulnerabilities they might identify when they start receiving SBOMs for most of the software that they operate.

The post lists five recommendations for determining exploitability of a vulnerability in a software product, including the new Exploit Prediction Scoring System (EPSS) scores, which are all about exploitability. These are all good recommendations, but it’s important to realize that now we’re talking about the second type of exploitability (call it Type 2). It doesn’t state whether a vulnerability can be exploited at all, but rather the likelihood that it will be. While knowing the likelihood isn’t as good as a categorical black-or-white statement that a vulnerability isn’t exploitable, it’s certainly better than having no idea whether a vulnerability is exploitable or not.

However, the difference between the two types of exploitability can be seen best in what they can be used for. Type 1 exploitability is used to determine whether or not the organization should make any effort at all to find and mitigate a particular vulnerability. If it’s exploitable and it’s a serious vulnerability, the organization should probably move heaven and earth (well, earth, anyway) to get it patched, including hounding the supplier day and night to develop a patch. And if a vulnerability isn’t exploitable, the organization doesn’t need to do anything at all.

But Type 2 exploitability is a question of prioritization of effort. If Vulnerability A is deemed more exploitable than Vulnerability B, then A should take precedence. As long as the organization goes after the vulnerabilities with the highest exploitability scores first, they don’t need to feel too bad if they don’t have time to mitigate the lower-scored vulnerabilities.

Thus, both types of exploitability are important. It’s always better to know whether a vulnerability is exploitable in the Type 1 sense or not, but in cases (like open source software) in which the best you’ll be able to get is an estimate of Type 2 exploitability, then by all means, look at that. And if, for example, an open source product has a serious vulnerability that also appears to be highly exploitable, you should certainly prioritize mitigating that vulnerability over mitigating a vulnerability that is Type 1 exploitable, yet still seems to be less serious (due to CVSS score, etc.).
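As a toy illustration of how the two types work together (all data made up; EPSS scores are just probabilities between 0 and 1): a supplier’s VEX statements first remove what is categorically not exploitable (Type 1), and a likelihood score then orders whatever is left (Type 2):

```python
vulns = [
    {"cve": "CVE-2022-11111", "vex_state": "not_affected", "epss": 0.42},
    {"cve": "CVE-2022-22222", "vex_state": None, "epss": 0.91},  # no VEX issued yet
    {"cve": "CVE-2022-33333", "vex_state": "affected", "epss": 0.07},
]

# Step 1 (Type 1): drop anything the supplier has categorically ruled out
remaining = [v for v in vulns if v["vex_state"] != "not_affected"]

# Step 2 (Type 2): work the rest in descending order of the likelihood
# that the vulnerability will actually be exploited
for v in sorted(remaining, key=lambda v: v["epss"], reverse=True):
    print(v["cve"], v["epss"])
```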

One question you might have (I know I have it) is whether third parties that make statements about Type 2 exploitability should do so using VEX documents. In general, I don’t think they should, since the VEX format as of now is based entirely on Type 1 exploitability. The third party would have to assert that the vulnerability is exploitable or not; they wouldn’t be able to state the probability.

However, I’m also not against having a third party assert that a vulnerability is exploitable or not in a product, especially if the supplier hasn’t issued a VEX for this product at all. But the third party should utilize the “authors” field (in the CycloneDX VEX format) to identify themselves; they should also fill in the “email” field under “authors”, so users can get in touch with them if they have questions. Users will need to decide for themselves whether or not to believe the third party’s statement that the vulnerability in question is either completely exploitable or not exploitable at all in the product.
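Here’s a minimal sketch of how a third party might identify itself via the “authors” field just mentioned (the organization name, email and product are all hypothetical; the fields follow the CycloneDX 1.4 metadata schema):

```python
import json

# Hypothetical third-party VEX: the "metadata.authors" entry tells the
# user who made the exploitability assertion and how to reach them.
third_party_vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "metadata": {
        "authors": [
            {"name": "Example Analysis Labs", "email": "vex@example.com"}
        ]
    },
    "vulnerabilities": [
        {
            "id": "CVE-2022-12345",
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_present"
            },
            "affects": [{"ref": "urn:cdx:example-serial/1#product-x@2.4"}]
        }
    ]
}

print(json.dumps(third_party_vex, indent=2))
```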

P.S. Speaking of exploitability, I want to raise one important point: Lots of vulnerabilities may not be exploitable in the Type 1 sense, when in fact they could be exploited if an attack started with another vulnerability, then moved on to the original one; this is known as a “chained vulnerabilities” attack. Does the fact that a seemingly non-exploitable vulnerability could in fact be exploitable in a chained attack mean we should call it exploitable?

There’s no right or wrong answer to that question, since it depends on how you define “exploitable”. However, I do know that the VEX concept, as it was developed in the last two years, only applies to vulnerabilities that are directly exploitable – i.e. not as part of a chained attack. The thinking is that, if you allow for chained attacks, then just about every vulnerability becomes exploitable in one way or another. It would be hard to prescribe any one mitigation, given the huge number of ways a chained attack could happen. Given that another vulnerability is the source of the chained attack, that vulnerability should be mitigated on its own.

I’ll admit that some security professionals don’t agree with me on this. They think it’s better to treat any vulnerability as exploitable, even if that only occurs through a chained attack. Do you want to know how I feel about this? … I didn’t think so, but I’ll tell you anyway: When SBOMs are widely available and regularly updated, end users will learn about lots of exploitable vulnerabilities, in software products they use every day; they probably wouldn’t have learned about all of these vulnerabilities otherwise. If they can handle that workload and want to learn about the exploitability of chained vulnerabilities as well, then they should do so.

Also, current users with high assurance use cases (e.g. the military) may feel it’s important to learn about chained vulnerability attacks as well. Go for it! But let’s not require that everybody else do the same.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] There are some cases in which a third party could determine that a component vulnerability isn’t exploitable in the product itself, such as when the vulnerable module of a library has not been included with the product binaries.