Thursday, March 30, 2023

When will I be able to verify an SBOM? Probably never.


A topic that seems to come up a lot in the CISA SBOM meetings is “verification” of SBOMs. Of course, that could mean a lot of things, but this usually seems to mean that the software customer doesn’t believe their supplier can be trusted to accurately represent all the components of the software in the SBOM. For example, the supplier might not report a component that’s nine years old and is loaded with vulnerabilities, or they might list a component as version 4.5, which was released three months ago, when it’s in fact version 1.1, which was released in 2011.

Could this happen? Certainly it could. However, you need to keep in mind that, once you start receiving SBOMs on a regular basis for more than just one or two products, it’s inevitable there will be a lot of empty spaces and “NOASSERTION” statements. This is because there are so many problems with naming of components (although help seems to be on the way on this issue, I’m glad to report).

Might some of those empty spaces be the result of deliberate obfuscation by the supplier? That’s certainly possible. But, given that some large suppliers estimate that over 90% of components are either mis-identified or not identified at all in an SBOM that’s produced as part of their software build process (the most common scenario), how will you ever know if the lack of an identifier for a component is due to a deliberate act by the supplier, or just due to the normal wear and tear of the naming problem? Answer: you won’t.

But there’s an even more important reason why verifying an SBOM may never be possible: How could you ever do that, even in principle? Here’s the problem: What you find in an SBOM will vary widely depending on when in the software lifecycle the SBOM is produced. One of the CISA workgroups recently finalized a document on this issue, which is awaiting final approval for publication. The document describes six SBOM Types. They’re all valid for specific use cases, but they’ll always differ from each other, sometimes radically.

In many cases, the SBOM you get will be created during the final Build stage, when the software code (including components) is “set in stone” – i.e., the code contained in the binaries delivered to you, the customer, is exactly the code that went into the final build. To verify what the supplier did with the greatest accuracy, you would need another SBOM created at that same final Build stage. However, a particular version of a product goes through only one final build. This means that, unless you can roll back time and persuade the supplier to let you produce your own SBOM during the final build of the version you now use, you won’t have a truly comparable SBOM to set against the one the supplier provided you.

If you want to produce your own SBOM and not have to time-travel, you could produce an Analyzed SBOM using a “binary analysis” tool. This is a tool that, starting with the binary files distributed to customers, attempts to decompile the supplier’s code[i] and create the SBOM using that code. Of course, this will never be a completely clean process and will usually result in more serious naming problems than occur with just a Build SBOM.

In other words, the only SBOM Type that you, as a customer, will realistically be able to produce will inevitably be substantially different from the one the supplier provided you (unless the supplier themselves used binary analysis to produce their SBOM. In some cases the supplier may have to do that, especially if they use older languages like C or C++. But even then, the Analyzed SBOM produced by the supplier will differ a lot from the one you produce, since the supplier brings inside knowledge that no outsider has).

Of course, an SBOM produced at any stage of the software lifecycle is interesting. In fact, some people who know a lot more about this than I do (a low hurdle to clear, to be sure!) say the best SBOM is one that blends two or more of the different types. For example, the Deployed SBOM is unique among the six SBOM Types, in that it includes not just the software itself but everything that is deployed with it: the installer, a container, runtime dependencies, etc. Since every artifact that’s deployed in the user’s environment can be a source of risk, knowing what’s inside all of these items is almost as important as knowing what’s inside the software itself. On the other hand, since the Deployed SBOM depends on binary analysis, it will never provide as good a description of the software itself as the Build SBOM does. It might be best to combine the two of them, although that in itself requires a lot of skill.

I hope you get the idea: To truly verify an SBOM, you must have something comparable to measure it against. However, it’s not likely that, without cooperating closely with the supplier, you’ll ever be able to produce anything comparable. And if verification requires cooperating closely with the supplier, that’s not exactly verification, is it?

This brings up an idea: Rather than taking an adversarial position toward the supplier and pretending it’s possible for you to conduct an “independent” verification, you could use a binary analysis tool to build your own SBOM, then discuss the differences between the two SBOMs with the supplier. You both might learn something interesting from the exercise.
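To make the “compare and discuss” idea concrete, here’s a minimal sketch in Python. It assumes both SBOMs are CycloneDX JSON files (the file names are hypothetical), and a real comparison would need to normalize component names before diffing, since – as noted above – most mismatches will turn out to be naming artifacts rather than genuine discrepancies.

import json

def components(path):
    # Return the set of (name, version) pairs listed in a CycloneDX SBOM.
    with open(path) as f:
        sbom = json.load(f)
    return {(c.get("name", "UNKNOWN"), c.get("version", "NOASSERTION"))
            for c in sbom.get("components", [])}

supplier = components("supplier_build_sbom.json")     # the supplier's Build SBOM
customer = components("customer_analyzed_sbom.json")  # your own Analyzed SBOM

print("Listed by the supplier, but not found by your analysis:")
for name, version in sorted(supplier - customer):
    print(f"  {name} {version}")

print("Found by your analysis, but not listed by the supplier:")
for name, version in sorted(customer - supplier):
    print(f"  {name} {version}")

Each line of output is a conversation starter with the supplier, not evidence of obfuscation.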

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Doing this may violate the supplier’s license agreement.

Sunday, March 26, 2023

Who will pay the $29B bill for the 2021 Texas power disaster? That’s easy: the one group that bears no responsibility.


An article by Robert Walton in Utility Dive this past weekend brought me back to a topic I wrote around eight posts on almost exactly two years ago: the aftermath of the Valentine’s Day 2021 near-total collapse of the Texas power grid and the huge unresolved question of financial liability (to say nothing of moral responsibility for the resulting deaths; the official count is 246, but there’s much evidence the true number was closer to 700-800).

Regarding the financial disaster, here and here are my two most relevant posts, although this one (which preceded the other two) also provides context. (I also wrote about the tense moment in a control room in Round Rock, Texas in the early morning of February 15, 2021, when the Texas grid came within four minutes and 37 seconds of a “total collapse” – one that might have left at least some parts of that grid without power for months.)

Robert’s article was about the fact that, last Friday, an appeals court ruled that “Public Utility Commission of Texas exceeded its authority in February 2021 by setting electricity prices at $9,000/MWh for four days during Winter Storm Uri” (in case you didn’t know it, winter weather events now have names, although calling this a “storm” is a stretch, since I don’t think there was an inch of snow, rain, or anything else. Of course, it makes it seem less terrible if the reader drops this story in the “hurricanes and other acts of God” mental bucket, rather than the “needless financial and human catastrophe” bucket).

Here is a brief summary of the relevant points regarding the financial catastrophe. There’s more detail in the posts I linked, as well as the news articles linked in them:

1.      Extreme cold weather hit Texas on Sunday, Valentine’s Day 2021. Since many power generation facilities in Texas were never designed to face this type of weather, a lot of them shut down. These facilities were primarily natural gas plants, although wind farms and coal plants were also affected. The problems were compounded because a lot of the gas production and transmission facilities (the wellheads and pipelines) were similarly unprotected and were shut off. Therefore, some gas plants that could have continued to produce power also shut down, because they didn’t have any fuel.

2.      As a result of this, the Texas power grid was under severe strain going into the night of February 14-15, and barely avoided a disastrous collapse that might have led to months of outages. During the morning of the 15th, the Public Utility Commission of Texas (PUCT), the agency that oversees the Texas grid, decided that the current market price of power, $1,200 per megawatt-hour (MWh), clearly wasn’t adequate, since so many power production facilities were still down. Therefore, they decided that the only way to quickly get more power supply was to raise the price substantially above $1,200. Hopefully (and that’s all this decision was based on – hope), this would induce more supply, which would gradually bring the market price down. For perspective, the normal wholesale market price in Texas is around $30/MWh.

3.      As I described in this post, PUCT decided on Monday, February 15 not to take any chances; they not only permitted the price to go up, but they pegged the market price at $9,000 per MWh, meaning literally that no generator could sell for less than that price. Of course, this in itself didn’t increase supply much, if at all, since supply was constrained by physical factors, not economic ones.

4.      The next day (Tuesday), the spot market price did start to decline, due to gradually improving weather and Herculean efforts to get more plants back online. However, the wholesale power price remained at $9,000/MWh, because ERCOT, the grid operator for most of Texas, didn’t lower it until Friday – four days after the increase was imposed.

The ruling last Friday applies only to the 33 hours between the time on Thursday, February 18 when the market price returned to normal and the time on Friday when ERCOT finally removed the $9,000/MWh price peg. The excess charges during that window were about $16 billion. Vistra Energy, a large Texas-based power producer that got caught on the short end of this problem and estimates it lost $1.6 billion, had sued the PUCT. Of course, the ruling will be appealed, so this matter is nowhere near settled.

However, in addition to the $16 billion, there were $13 billion in excess power charges during the three days between when the PUCT pegged the price at $9,000 and the time on Thursday when the market price hit $30. If the PUCT had allowed the settlement price (the price actually paid) to rise without pegging it, it would have declined along with the market price during that whole period. It’s far from clear that the PUCT will ever be legally on the hook for that $13 billion, but they’re certainly morally on the hook for it.
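For a rough sense of how charges of this size pile up, here’s a back-of-envelope calculation in Python. The two prices come from the discussion above; the average served load is purely my own illustrative assumption, not a figure from the ruling or the article.

pegged_price = 9_000      # $/MWh, the settlement price set by the PUCT
normal_price = 30         # $/MWh, the approximate normal wholesale price
assumed_load_mw = 54_000  # average load actually served (MW) - my assumption

hours = 33  # from Thursday's price recovery to Friday's removal of the peg
excess = (pegged_price - normal_price) * assumed_load_mw * hours
print(f"Excess charges over those {hours} hours: ${excess / 1e9:.1f} billion")
# Prints roughly $16 billion, in line with the figure at issue in the lawsuit.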

More importantly, it was ERCOT that decided to keep the PUCT’s price peg in effect for four days, when they could have removed it as the market price started to decline (which showed that the peg was excessive). Thus, ERCOT bears as much blame for the entire $29 billion as the PUCT, and perhaps more (of course, financial liability is a very different question. I have no idea if either the PUCT or ERCOT could be held financially liable for anything, although I assume Vistra’s lawsuit would have been thrown out if the PUCT couldn’t be sued).

The bottom line is that somebody was unjustly deprived of $29 billion during Valentine’s Day week 2021 in Texas. However, as described in my posts, there’s no one organization or group of organizations that can clearly be held responsible. This means that, in the end, it’s almost certain that the people who pay the bill will be the taxpayers and ratepayers of Texas – the very people who clearly bear no responsibility at all for what happened.

Would you like to know who I blame for this? I didn’t think so, but I’ll tell you anyway. Of course, I blame the PUCT and ERCOT, but I also blame the politicians and grid operators who had known for years that the Texas grid was unprepared for a severe cold weather incident, but did very little about it.

Even more, I blame the people in Texas who decided during the 1920s and 1930s – when the US power system, previously just a collection of power “islands”, was linking up into a real “grid” – that they didn’t want to join that trend. They made that decision because becoming part of the emerging national grid would have required Texas utilities and other entities to be regulated by the new Federal Power Commission (now the Federal Energy Regulatory Commission, or FERC) and other federal agencies. I also blame the people who, over the years, have repeatedly decided to leave Texas’ isolation in place.

Had the Valentine’s Day weather event happened anywhere else in the US (or in North America generally, other than the province of Quebec, which is similarly isolated from the overall grid), the power deficit in the affected area would have instantly – without any human intervention – drawn in power from neighboring areas, which in turn might have drawn power from areas farther away, and so on. There might have been localized outages over a wide area, but it’s unlikely there would have been such a huge outage anywhere.

However, two years after the incident, I know of no serious discussion about connecting Texas (or more specifically ERCOT, which doesn’t cover parts of eastern Texas and far western Texas, including El Paso) to the rest of the US grid. Get ready for this to happen again!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Friday, March 24, 2023

Three problems holding back SBOMs


One of my favorite sayings is, “I didn’t have time to write you a short letter, so I wrote you a long letter.” In other words, when you constrain yourself to saying a lot in a short document, you’ll almost always produce something that’s much more worth reading (and not just because it’s short) than if you address the same subject in a long document. As a blogger who is fast approaching 1,000 posts since early 2013 (and who posted earlier on a Honeywell blog), I can vouch that this is true. My shorter posts are almost always more memorable than my longer ones.

When Deborah Radcliff, a noted cybersecurity author and speaker, asked to interview me for a podcast, I suggested that I address the distribution and use of software bills of materials (SBOMs) – specifically, the reasons why, even though an Executive Order mandating that federal agencies request SBOMs from their software suppliers went into effect last summer, SBOMs are still hardly being distributed to, or utilized by, user organizations whose primary business isn’t software development.

We taped the podcast, but afterwards she told me – and I agreed with her – that what I’d said was too “rambling” to make into a focused podcast. She proposed to work with me on a document of only 1,200 words (the limit for one or two organizations she works with) on the same topic. I readily agreed, since – believe it or not – I have learned over the years how to write a short, cogent post.

Here is the post we wrote. I think it states very well what I was trying to say and I’m sure that, if I had twice as much space available to me, I would have written a document that was only half as cogent. I’d love to see your comments.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Friday, March 17, 2023

What would you like to do with/to the NVD?


Almost anybody who has been involved with software vulnerabilities in any way (even hackers!) has a love/hate relationship with the National Vulnerability Database (NVD). On the plus side, it’s by far the largest and best-supported vulnerability database in the world. On the minus side, it has many problems that make it hard to use, and that in many cases make it impossible to find vulnerability information that is almost certainly in the database somewhere.

A little less than a year ago, I convened an informal group of “SBOM industry” leaders to discuss why it is that SBOMs are grossly underperforming, at least when it comes to distribution to and use by organizations whose primary business isn’t software development.[i] The goal of the group was not just to discuss those issues, but to figure out how they can be addressed, and do what we can to set them on the road to being resolved. We call ourselves the SBOM Forum, and we meet weekly on Zoom.

We decided that, while there are a lot of issues that are inhibiting SBOM distribution and use, we would focus on the show-stopper issues; I personally think there are no more than three or four of these. We didn’t have a formal discussion of which issue we would address first, but within two meetings we had found it: the naming problem.

However, even we weren’t stupid enough to try to take on the entire naming problem, which has many aspects and appears to some degree in every software or vulnerability database in the world. We focused right away on the Big Daddy of vulnerability databases, the one we all had experience with. The NVD uses “CPE names” to identify products, and those are the source of a lot of problems; we described them on pages 4-6 of the proposal we published on the OWASP site last September. Our proposal described how to fix (or at least greatly remediate) the problems with CPE, although doing so will require the involvement of a few other federal government and private sector organizations.
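To illustrate what we were up against, here’s a sketch of what a CPE name looks like; the product in it is hypothetical. A CPE 2.3 name is a rigid string of 13 colon-delimited fields, and a lookup fails unless every field matches exactly what an NVD analyst once typed in.

# Fields: cpe:2.3:part:vendor:product:version:update:edition:language:
#         sw_edition:target_sw:target_hw:other
cpe = "cpe:2.3:a:example_corp:example_product:4.5:*:*:*:*:*:*:*"

fields = cpe.split(":")
part, vendor, product, version = fields[2:6]
print(f"part={part}, vendor={vendor}, product={product}, version={version}")

# If the supplier calls itself "Example Corp." but the analyst entered
# "example_corp" (or "examplecorp", or just "example"), a search on the
# supplier's own name finds nothing; there's no authoritative source for
# the vendor string. That, in miniature, is the naming problem.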

That proposal was meant to address the bulk of the naming problems in the NVD, and we’re hoping it can be completely implemented in 2-3 years (which of course is close to light speed when you’re dealing with the federal government). The appropriate agencies in the federal government started considering our proposal, and we were fairly sure it was on the road to implementation in our time frame.

After publishing our proposal, we had discussions on other topics and were settling on VEX as our next one. In my opinion, VEX and the naming problem are the two biggest show-stopper problems preventing SBOMs from being distributed to and used by non-developers.

However, recently we became aware of a reason why implementation of our proposal might be delayed significantly longer than three years. We had a meeting to discuss this problem. While we received some assurances then that our immediate fears might be overblown, we ended up having a more wide-ranging discussion of the NVD, at which other issues came up. At the end of that hour-long meeting, we decided we wanted to focus on the NVD itself next, and not limit ourselves to discussing just the naming problem within the NVD.

The SBOM Forum includes representatives of some very big software and intelligent device suppliers, as well as a number of smaller tool vendors and a few consulting firms (we have only a few end user organizations, and we’d like to have more). Some of these members were surprised to hear what was said about NVD problems that have nothing to do with naming, and they wanted to hear about all the problems the other members of our group had run into.

Even more importantly, we started to have a discussion about what the NVD could be if it were allowed to move out of the narrow box it finds itself in now. For example, given that people all over the world use the NVD, yet the entire physical infrastructure is housed in the US, what might happen if the NVD could place infrastructure (perhaps through content delivery networks) on other continents – while at the same time getting support from private and public sector organizations on those other continents?

Note that I don’t for a minute blame any individuals, or even government agencies, for the NVD’s problems. Any organization that has grown very rapidly, yet has to fulfill the obligations of being a government-controlled entity, will probably find itself in a similar box sooner or later. In fact, there’s a great example of a similar service that was incubated in the NTIA, the same federal agency that “incubated” the Software Component Transparency Initiative, also known as “Allan’s Army”. That service found itself in an overly narrow box much more quickly than the NVD has, and now it’s run by a very effective private sector organization that gets some help from governments.

Has anyone heard of DNS? Let me put that another way: Is there anyone who uses DNS fewer than perhaps 5,000 times a day (almost always without even thinking about it, of course)? Our lives would be very different if, instead of being able to find any web site we want through a single DNS query, we first had to obtain from the operators of the site (perhaps by calling them – do you remember phone calls?) their 32-hex-digit IPv6 address, then enter it by hand in our browser. And woe betide you if you got one of those digits wrong; you’d have to re-enter the address until you got it exactly right.
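To see what DNS saves you from, here’s a small sketch using Python’s standard library; example.com is a domain reserved for exactly this kind of demonstration. One query turns a memorable name into whatever addresses the site publishes today.

import socket

# One lookup replaces hand-typing a 32-hex-digit IPv6 address.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "example.com", 443, type=socket.SOCK_STREAM):
    if family == socket.AF_INET6:
        print("IPv6:", sockaddr[0])  # 128 bits = 32 hex digits written in full
    elif family == socket.AF_INET:
        print("IPv4:", sockaddr[0])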

Without going into a lot of detail, the NTIA saved you from that fate by picking up an idea developed by an academic named Paul Mockapetris and turning it into a real service. In fact, NTIA itself was the first domain name registrar. But, as you can imagine, business grew very rapidly, and since the NTIA (and the federal government in general) doesn’t want to go into business doing something the private sector could probably do better and certainly less expensively, they looked for a private sector organization to take over this role.

The NTIA made a false start when they first chose a network consulting firm to handle domain registrations. After a couple of years of performing well, the firm one day decided it would be a great idea to email organizations that had requested domain names to see if they’d like some of its other services; that email set off a firestorm, and the NTIA looked for a different organization to take over domain registrations. Finally, they turned the business over to the Internet Assigned Numbers Authority (IANA), which remains in charge of coordinating domain names to this day.[ii]

Our group is now in the process of enumerating both problems with the NVD as it exists today and opportunities it could pursue in the future, whether or not it remains part of the US government and whether or not it retains the NVD name. We have some ideas already, but we’re looking for others. If you have anything you would like to contribute to this discussion, either as a comment or as a suggested text edit or addition, please go here. You can contribute using your email address or anonymously. We would prefer the former, but what we want most is to hear what you have to say, no matter how you say it.

Once we have our list of problems and opportunities together, we’ll make that publicly available. We’ll also start discussing how those problems and opportunities can be addressed, both in the short term and the long term. You’ll be welcome to participate in that as well.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] The good news is that SBOMs have had tremendous success in being adopted by the software developer community, in no small part due to the efforts of Dr. Allan Friedman and the NTIA. Developers have come to realize that they need to take much more responsibility for assuring the security of the components they include in their products, and it’s literally impossible to do that without producing SBOMs at every point in the development process. 

However, the bad news is that developers are almost never distributing SBOMs to their customers (at least not regularly; a new SBOM should be distributed with every major or minor version, since the previous one becomes almost completely invalid at that point), meaning their customers aren’t able to use them for their own software risk management purposes. The suppliers generally can’t be blamed for this situation, though; the fact is that the users aren’t asking for SBOMs. 

[ii] Given what a success story this is, wouldn’t you think the NTIA might brag a little about it on their web site? I would too, but I can’t find it anywhere.

Sunday, March 12, 2023

Conversation with Dale Peterson on liability vs. regulation


After I put my most recent post on LinkedIn, noted ICS security guru (and S4 founder) Dale Peterson made a good comment, which turned into a chain of comments and replies between the two of us. Because I think the conversation was valuable, I’m reproducing most of it (along with commentary of my own in italics) here.

1.      I had stated in my first post on this topic that I basically agree with the statement on page 20 of the White House cyber strategy document, which strongly implies that suppliers shouldn’t be able to “fully disclaim liability”. Or at least, I agree that any such disclaimer shouldn’t be buried in a 100-page agreement the user has to sign before they can use the software at all. However, if the supplier feels they can’t even offer the product to the public unless they disclaim or limit liability, and they provide a separate agreement addressing just liability, I think they should be able to do that. If the user would rather give up the product than sign away liability, they’ll be free to do that; in that case, the supplier will need to refund any money the user has already paid.

2.      Soon after the second post (my most recent one) appeared, Dale made a general comment and I replied, “What's amazing is that the author of this section thinks it's even possible to ‘shift liability’. In our legal system, liability is established by a court case. The government doesn't have any power to decide liability beforehand.”

3.      Dale replied to my comment that, “I wouldn't go that far. Congress can pass laws that shift liability. The much in the news Section 230 is a great example of this. (It’s) Less clear that the Executive Branch can shift liability, except perhaps via DoJ bringing cases to court.”

4.      I replied to Dale, “I’m sure you know better than I do, but it seems to me that liability is always a question of the circumstances of each case. Here, we're talking about cyber breaches. Can even Congress pass a law that says the supplier is the one liable in every breach, or at least that the default assumption is that they're liable?” I added, “I'm sure the executive branch can't do that, although I know a lot of presidents have tried.”

5.      Dale replied, “Channeling my Schoolhouse Rock ... Congress passes the bills, Executive signs them, Judiciary judges them. Section 230 is a good example. Congress passed and Executive signed the Communications Decency Act. Section 230 provides immunity to liability for online computer services with respect to third-party content generated by its users. Now there are cases that have made it all the way up to the Supreme Court saying Congress can't free them from this liability.
Would it be wise for Congress to pass something that will be struck down by the courts? Of course not. I think your points in the article were quite good and hadn't really been discussed post Strategy announcement.”

6.      I replied, “Thanks, Dale. However, in this case the WH seems to be assigning liability to the software suppliers, not removing it. That strikes me as something almost like a bill of attainder, which was explicitly prohibited in the Constitution. The suppliers can be held liable now, of course (although it’s important to remove those buried disclaimers of liability in the usage agreements we all sign without reading, or at least declare those unenforceable), but there needs to be a trial to determine that fact. The WH seems to want to skip the trial altogether, and proceed right to the sentence.”

7.      Dale replied, “A lot of what is in the strategy requires legislative action and the Biden administration is aware of this. You see this in the text and in the briefings they have been giving. Even a lot of the regulatory things they want to do will require Congress to give the Executive branch more regulatory power.”

8.      I replied to Dale, “There's a difference between regulations and liability. Absolutely, Congress can give CISA (since that's the agency that's behind this section, I'm sure) regulatory authority. But Jen Easterly has said that CISA isn't a regulator - and I think that's a good position for them to take.
However, it seems like CISA is trying to develop a back door for themselves by saying the suppliers are by default liable for breaches, and then hoping this will scare them into good practices. The problem with this approach is that, instead of reasonable regulatory fines, now the suppliers will be subject to potentially huge damage awards for large breaches.
I'm not very worried that this will happen, of course, since it will have a huge inhibitory effect on software suppliers and Congress will never approve it. So, if CISA wants to use the coercive approach, they should just become regulators and be done with it.
But since CISA doesn't want to be a regulator, they shouldn't then turn around and be an executioner. They should figure out positive incentives for suppliers - but also for users, since they're as often the cause of breaches as the suppliers are.”

9.      The next day, I made another comment, “I do want to add that I don't object to a supplier having to pay a huge damages award, if they're found in a trial to be liable for an especially serious breach. What I do object to is the government's putting their thumb on the scales of justice, so that the liability will be determined without a trial at all. That's the ‘solution’ adopted by the cowboys in my post.”

That’s where the conversation ended. I think Dale and I are in agreement on all points, to wit:

A.     Whatever the WH wants to do in this section, it will require Congressional action. And since I think it would be hard to get Congress to name a post office after George Washington nowadays, I’d say the chance of such action is effectively nil. So this is currently a moot point.

B.     There’s a difference between regulation and assigning liability. Neither of us thinks regulation should be ruled out, but (and this is my opinion, since I haven’t discussed it with Dale) I think the only type of regulation that makes sense, when it comes to cybersecurity, is risk based: i.e., a requirement that the entity should “identify and assess” the risks they face regarding a particular domain like supply chain security, then develop a plan to mitigate the most serious risks – and follow that plan.

       Since 2013, I’ve probably written at least 400-500 posts on the NERC CIP cybersecurity requirements, which used to be oppressively prescriptive but now are entirely risk based (or at least the new requirements are risk based; unfortunately, the prescriptive requirements are still mostly on the books). For example, CIP-013, the supply chain cybersecurity risk management standard, is entirely risk based (perhaps to a fault, since it provides far too little guidance on what it means to comply with it, IMO).

C.      But assigning liability up front, so that the supplier is assumed to be liable (and perhaps for a lot of money), unless they can prove they’re not liable, is pure overreach, and frankly Orwellian to boot.

Last August, I wrote a post that described another bit of Executive Branch overreach: in that case, rolling out some “voluntary” cybersecurity requirements for critical infrastructure while making it perfectly clear that critical infrastructure operators had no choice but to follow them. (Since there are hundreds of ways the federal government touches any private organization every day, the implicit threat was that life would be made difficult for anyone who didn’t do what they were told.) I started by saying,

Bad things happen when government agencies try to take the easy route, rather than the right one, to achieve their goals. And if the agency is trying to get private organizations to do something that they just know is the right thing for them to do, they’re even more tempted than normally to take the easy route. After all, their goals are righteous! How can anybody complain if they’re just doing everything they can to achieve those goals?

This paragraph applies perfectly to what Dale and I discussed on LinkedIn.

My former economics professor Milton Friedman, when the Ford administration had just put in place wage and price controls, said in class, “Now we have ‘voluntary’ wage and price controls – meaning, ‘You’ll voluntarily do this or you’ll get your head cut off.’”

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Sunday, March 5, 2023

Let’s skip the trial and go straight to the sentence!


There’s been a lot of discussion of my most recent post on LinkedIn, as well as of a similar post by Jeff Williams of Contrast Security. Jeff is best known as the originator of the OWASP Top 10, but is now known to me as a player on a men’s over-50 basketball team that has in the past won the national championship and has also beaten the Mexican national team (although they didn’t win this year). Who would’ve thought a basketball star would have a sideline in software security?

But I digress. In the process of replying to the comments on my post (and replying to my replies), I’m now able to articulate my position on this issue more clearly:

1.      The White House is understandably concerned about the increasing volume of software supply chain cyber attacks, many of which are due to an attacker exploiting a vulnerability found in a software product used by the organization attacked.

2.      What remedy does the White House propose for this problem? In the section entitled “Strategic Objective 3.3…” of the White House cybersecurity strategy document released last Thursday, the remarkable statement is found, “We must begin to shift liability onto those entities that fail to take reasonable precautions to secure their software…” Of course, this section makes it clear that “failure to take reasonable precautions” is a fault found only in the supplier of software, not the consumer of the software, not the maker of a security tool that didn’t detect the vulnerability that caused a breach, etc. This section says nothing about apportioning liability, or about anything other than assigning 100% liability to the supplier.

3.      However, suppose that a software developer followed poor development practices and included in their product (during development) either a certain sequence of code that was known at the time to constitute a vulnerability with some non-zero likelihood of being exploited, or a third-party component that contained code that constituted such a vulnerability.

4.      Moreover, suppose that, due to ignorance or deliberate negligence, the developer didn’t patch this vulnerability (or replace the vulnerable code or component) before they shipped the product to end users. Whether or not the developer knew about the vulnerability, they should have identified it in testing before they shipped the product. After all, it was a known vulnerability. I would consider this fact to be clear evidence of negligence on the developer’s part, although that’s not the same thing as liability, of course.

5.      Now, let’s suppose that, some years after the supplier shipped this vulnerable product to customers, one of those customers (“Customer A”) was breached because the vulnerability in question had been exploited by a hacker. The breach caused serious damage to Customer A, perhaps because the attacker used the breach to plant ransomware in the customer’s network.

6.      Sometime after they were breached, Customer A filed a lawsuit against the developer, saying their poor development practices were the cause of the breach; therefore, liability for the consequences of the breach (perhaps a big disruption to their business, caused by the ransomware incident) rests 100% with the developer. They point to this section of the 2023 White House cybersecurity strategy document in support of their assertion that the developer is 100% liable. Sounds like an open-and-shut case, right?

7.      If I were the judge in this case (say it was a bench trial), would I rule that the developer was indeed 100% liable for the damages suffered by Customer A? I would probably do that, as long as no convincing evidence to the contrary were presented by the developer.

8.      However, suppose that the developer defends itself, because they don’t believe they’re liable at all (of course, if they thought they were liable, they would presumably have settled the lawsuit before it ever went to trial). They admit they erred in not doing adequate testing for vulnerabilities before they shipped the version of the product that contained the vulnerability (i.e. the version that Customer A purchased).

9.      However, they point out that, a year after they shipped the product to Customer A, they discovered the vulnerability in the product and immediately produced a patch for it. They notified all their customers that the patch was available, including Customer A.

10.   Customer A never applied the patch that addressed this vulnerability. In fact, they didn’t apply any patches to the product for the next four years. Since the developer produces what are called “cumulative” patches – which include the contents of previous patches – all Customer A would have had to do, in order to be protected against the vulnerability that ultimately brought them low, was to apply any one of the patches released in those four years. But they never applied a single one.

11.   Of course, four years after the developer discovered the vulnerability and issued a patch for it, the customer’s instance of the product was breached, and the rest is history. The developer, since the customer is suing them for a lot of money, has developed evidence showing that many of their other customers, who had applied the patch, were also attacked by the same ransomware group that breached Customer A. However, none of those other customers were breached, because the vulnerability was patched in their instances of the product.

12.   Given this evidence, would I, the judge, now a) dismiss the lawsuit with prejudice (meaning the suit couldn’t be refiled), since in my mind, the developer is clearly not liable at all (perhaps I would also require the plaintiff to pay all the defendant’s legal costs)? Or would I b) find for the plaintiff, because in my mind none of the evidence the developer introduced matters – i.e., the developer is in fact 100% liable (perhaps the strategy document from 2023 would influence my judgment)? Or would I c) dismiss the lawsuit without prejudice, meaning that the plaintiff could still refile the suit if they discovered stronger evidence of the developer’s liability?

13.   Frankly, any of these three outcomes is possible, and there are plenty of other possible outcomes as well. The point is that it would be quite unusual if the case were so open-and-shut that the suit would either be completely successful (option b) or a complete washout (option a).

In fact, if assignment of liability were as clear-cut as the White House strategy document makes it seem, there would never be a need for any trial regarding liability, at least in software breach cases. All Customer A would have to do is file some sort of notice that they use the developer’s software and they were breached. At that point, the developer wouldn’t be able to do anything other than open their checkbook and ask how big a check they need to write.[i]

In other words, it now seems the White House (and perhaps DHS) thinks there’s no need at all for court cases to determine liability for software breaches. As far as they’re concerned, the outcome of these cases is determined from the start (actually, from the date of the strategy document).

This is like the story of two cowboys that captured a cattle rustler and were starting to hang him. One of the cowboys had a sudden attack of conscience and asked, “Shouldn’t we give this man a fair trial?” The other cowboy exclaimed, “You’re right! We’ll give this man a fair trial…then we’ll hang him.”

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] If you pointed out here that the strategy document says the supplier might be given “safe harbor” because they follow secure development practices, I would agree with you that it says that. However, there would still be a legal case, although this one would be over whether or not the developer qualified for safe harbor. Just the fact that they shipped a product with a vulnerability isn’t prima facie evidence that they don’t follow secure development practices. After all, following secure practices doesn’t make the supplier perfect; it just reduces the likelihood that they’ll make a mistake like the one this supplier made. In other words, a lawsuit over liability for the breach would be replaced with a lawsuit over safe harbor. Customer A would be no more likely to win the safe harbor lawsuit than they would the lawsuit described above.

Friday, March 3, 2023

SOMEONE needs to be liable for breaches. We’ve decided it’s you, Mr/Ms Software Supplier!

The White House published the National Cybersecurity Strategy on Thursday and it understandably received a lot of attention. I think it’s an excellent document and very well thought out (I especially liked the statement that the government will “promote the further development of SBOMs”). However, there’s one section I find to be…well, let’s say I find it lacking in sound reasoning. That’s the one on pages 24 and 25 entitled “Strategic Objective 3.3: Shift Liability for Insecure Software Products and Services”. The contents of this section were essentially given a road tryout by Jen Easterly, Director of CISA, in a speech at Carnegie Mellon University on Monday; the speech was reported on in this article in the Washington Post on Wednesday.

To be honest, I don’t even object to all the language in this section. For example, I completely agree that software suppliers should not be able to disclaim all liability for defects in those 100-page “contracts” that nobody reads but everybody has to sign, if they want to use software at all.

However, what I do object to is the implication in that section (as well as in Ms. Easterly’s comments on Monday) that when there’s a cyber breach, the default assumption will always be that it’s the software developer’s fault. This is because many developers “ignore best practices for secure development”, “ship products with…known vulnerabilities”, or “integrate third-party software of unvetted or unknown provenance”.

The best articulation of this attitude is found in this WaPo article published after the strategy was released, which says a senior administration official (who of course needs to remain anonymous, since otherwise they’ll be punished for telling the truth) “told reporters Wednesday that the proposal would be to place liability ‘where it would do the most good,’ primarily ‘the company that is building and selling the software.’”

Let me rephrase this to make the meaning of this remarkable statement clearer: "We’re not concerned with determining where to place liability in the sense of 'Whose fault is this?'; that’s so 20th Century. Instead, what we’re really concerned with is ‘Who will be most susceptible to being pressured to take the fall for the breach, since we’ve already said up front that they’re the culprits?’" I commend this candor, although it would have been even better if the official had identified himself or herself.

Of course, I certainly don’t deny that there are developers who violate best practices. But it surprised me that, given that whoever wrote this section of the strategy was clearly quite concerned about assigning liability for future cyber breaches, there was no mention of other entities that might share liability. First and foremost, how about users that aren’t applying patches, configuring their firewalls properly, or investing a nickel in training their security staff (which may consist of the president’s teenage son, who comes over when he’s done with soccer practice)?

But let’s not stop there. There was also no mention of cloud providers who make their customers be responsible for their own security, even though many of them clearly don’t understand how security works in the cloud (here’s Exhibit A in that regard). Or of cloud providers who gladly allow new, untested companies to sell access to apps on their cloud without paying too much attention to pesky little questions like...you know...do these guys, who were peddling mortgages a year ago, have the slightest idea how to do this securely?

Finally, there was no mention of a national government (I won’t tell you which national government I’m talking about, but it’s not Estonia’s) that invested huge amounts in Project Einstein, the 21st Century version of the Maginot Line, which was designed to secure the US from foreign cyberattacks. And to that government’s credit, it succeeded in protecting the US from Russian cyberattacks about as effectively as the Maginot Line protected France from the German armies in 1940 – that is, not at all.

Undeterred by press releases, the Russians bypassed Project Einstein by setting up all the servers they needed to carry out their tremendously well-executed attack on SolarWinds at US-based cloud providers, all completely within the borders of the good old US of A! At least the French can use the Maginot Line fortifications as a tourist attraction today, which is more than we can say for Project Einstein.

I could go on, but you get the idea: The liability for almost any cyber breach can be traced to thousands of clueless individuals in all walks of life. If you wanted to assign liability properly, you’d have to track down all these individuals and spend a year or two figuring out exactly how much of the bill each of them is responsible for. Then, you’d have to get each of them to pay their fair share.

But it’s so much easier if you just say the software developer is responsible. That way, you can be home in time for dinner with the family.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.