Tuesday, February 27, 2024

What a woman named EULA has to say about the software liability question

For close to a year, there has been a lot of discussion of liability for software suppliers. This came about after the White House put out a document, and talked with reporters, about the need to place liability “where it would do the most good” – meaning primarily “the company that is building and selling the software”. I’ve written five or six posts on the question; the one I like best is here.

However, after I wrote these posts, the White House (and DHS) didn’t drop everything they were working on and immediately adopt my opinion in full; I was amazed that didn’t happen. This just goes to show there’s no accounting for tastes…

The WH (and a lot of other people, unfortunately) seems to believe that software developers are currently getting away with grand larceny: their malfeasance is the root of many (indeed most) software breaches, yet today there is supposedly no way those evil developers can be held liable for those breaches.

I want to discuss the lack of liability first. I honestly don’t know what this means. As long as one party has standing to sue another in court for any sort of damages, there is liability. If anyone knows of a state in which it’s forbidden for a software customer to sue the developer of the software – or knows of some obscure provision of federal law with the same effect – please let me know. As far as I can tell, no such prohibition exists.

On the other hand, there is one thing currently placing a big obstacle in the path of small software users who wish to sue a software developer for a hack caused by alleged insecure development practices: the EULA (End User License Agreement). Of course, we all approve the EULA without reading it, whenever we want to use a new software product or online service.

I don’t think EULAs should be banned altogether, but it should be illegal for them to include a blanket disclaimer of liability for all software flaws – although perhaps a few test cases in court could establish this principle without requiring legislation. (BTW, since bigger companies can negotiate their procurement contract terms with their suppliers directly, I don’t think the government should intervene on their behalf. But small organizations and individuals have almost no bargaining power – and it might violate antitrust law if they banded together to force terms on a developer – so there needs to be some intervention by governments to protect end users from these one-sided “agreements”.)

However, in the various articles decrying the damages caused by software suppliers, I have never seen a passage that points to EULAs as the heart of the software liability “problem”. One statement almost all of them include is that there needs to be some sort of “safe harbor” for software developers who follow “good” software development practices. In other words, if a software developer is sued, their only possible defense would be to point out that they followed the practices laid out in a document like the NIST Secure Software Development Framework (SSDF for short).

Sounds simple, right? Just take a look at the SSDF; then imagine the pressure you’d be under if you were a supplier in a court of law and the only way you could avoid bankruptcy because of a lawsuit from an aggrieved customer was to somehow “prove” that you had followed the SSDF. Here’s just one of the many Practices required by the SSDF:

Implement Roles and Responsibilities (PO.2): Ensure that everyone inside and outside of the organization involved in the SDLC is prepared to perform their SDLC-related roles and responsibilities throughout the SDLC (SDLC stands for “Software Development Life Cycle”).

Think of all the documentation you would need to gather to prove you had followed this Practice. For a large percentage of the employees in your organization, you would have to show a) they understood their SDLC-related role(s) and responsibilities and b) they were “prepared to perform” them throughout the SDLC (for most employees, this probably means during every moment they were employed by the software developer).

What if the supplier couldn’t find some of the required documentation, especially for employees who left the company long ago? Would they be SOL? Even more importantly, the jury (probably not made up of highly educated software engineers) would need to determine where the line falls between the supplier’s complying and not complying with this Practice. Where will they draw that line?

Moreover, these questions will come up for every Practice and Task in the SSDF. The supplier’s lawyers will have to argue each one of those in front of the jury, against the lawyers for the plaintiff. Then the jury will need to deliberate on each of them. Even more importantly, they will have to make a decision, for each Practice and Task, whether the supplier “complied” with it.

Finally, the jury will have to look at the list of SSDF Practices and Tasks that the supplier “complied with” and compare that with the list they didn’t “comply with”, then come up with a single up-or-down decision on whether or not the supplier “followed” the SSDF. If they decide the supplier did follow it, they will be allowed to continue in business (minus a big chunk of change for the legal defense). But if the jury decides the supplier didn’t follow the SSDF, they might just head down the hall to Bankruptcy Court and save a trip back to the courthouse.

Does anyone think this very complex “safe harbor” is practical? If not, what if someone creates a “simple” standard, in place of the SSDF, that just lists maybe five steps the supplier needs to follow to receive safe harbor? And suppose each of these is a “no-brainer” - e.g., “You must lock the door of your office before going home every night”. That would make it a cinch for every supplier who was sued to receive safe harbor. But is that fair to the plaintiffs? No matter how strong a case they might have, there would be no possibility of ever holding the supplier liable for the damages they caused.

Even more importantly, why should the burden be on the supplier to prove they developed the software using safe practices? Even if they didn’t follow a few of the SSDF Practices, shouldn’t there also be a burden on the customer (the plaintiff) to prove they followed safe practices in using the software?

For example, what if the supplier diligently provided a patch for every important vulnerability they identified in their software over the ten years that the customer used the software – yet the customer never applied a single one of those patches? Moreover, what if the customer’s loss was caused by a hacker who exploited one of those important vulnerabilities? Is that irrelevant to the question of whether the supplier is liable for the breach? If the only consideration preventing the supplier from being held liable is whether they followed the SSDF to the letter of the law – as it would be in this case – the answer is Yes. It doesn’t matter what the customer did or didn’t do; the breach is the supplier’s fault.

The only good aspect of the proposal to automatically hold a supplier liable for a customer breach, unless they can prove they’ve followed the SSDF or a similar framework, is there’s no way it will ever be implemented; it is clearly unfair to the suppliers. Liability for any organization, whether they’re a software supplier, a liquor distributor or a truck driving school, can only be determined in a court of law. Furthermore, it needs to be determined by a judge or jury that considers all the evidence, not just whether the supplier has followed every provision in a document like the SSDF, which is mostly incomprehensible to both judge and jury.

Of course, this means that cases dealing with software liability may take a long time to be adjudicated. But there is one way this process could be expedited without trampling on fundamental rights: The judge or jury should not be required to make an up-or-down decision on the supplier’s liability based on their “compliance” with a single document. Instead, there could be questionnaires available for judges and juries, which could be used in cases where liability for software defects is in question. The questions would be about recommended cybersecurity practices for both suppliers and their customers; the judge or jury could decide whether they were needed in any particular case.

Both parties would be required to answer the questions that apply to them. For example, the supplier might be asked how quickly (if ever) they issued patches for serious vulnerabilities identified in their products. And the customer might be asked how quickly they applied each patch to the supplier’s product in their environment (of course, there would need to be at least some evidence to back up both the supplier’s and the customer’s answers).

Who should develop these guidelines? Maybe NIST. Or maybe organizations outside of government, like the Linux Foundation or OWASP. Any party wishing to participate in drawing up the guidelines should be allowed to do so.

You may wonder (I certainly have) why this idea – that software developers should be assumed to be liable unless they can prove they complied with a set of ambiguous safe harbor provisions – was ever taken seriously. I think it is because some people in government get frustrated by the fact that it takes so long for cases dealing with issues like software liability to be decided by the court system. Surely, they think, there must be a more expeditious way of settling these disputes! And if we need to cut one or two corners with the legal system, isn’t that a small price to pay for resolving these cases?

Those people are right that there is a more expeditious way of resolving cases involving software liability – it’s called regulation. If you’re concerned that software suppliers aren’t following secure development practices and are therefore putting their customers at risk, then draft a law that requires that suppliers follow the SSDF religiously and get fined for any violations.

Of course, you’ll need to get that law passed by Congress. Currently, it’s hard to see Congress agreeing to name a post office after George Washington, let alone extending government regulations into a brand new area – and probably creating a new agency to enforce the law, since I know of no federal agency, other than the FDA, that has regulatory jurisdiction over suppliers of products consumed by the private sector, except for safety considerations[i].

Creating regulations is hard, and it’s probably impossible in the current political environment. But that doesn’t excuse throwing away time-tested legal concepts like determination of liability in a court of law and equal treatment of opposing parties in court cases. If it’s a choice between waiting for a slow judicial system and doing an end run around that system by singling out a particular class of defendants as liable before a trial even takes place, I’ll take the former any day.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For context, see this post.


[i] This excludes the nuclear power industry and the military. In both of these “industries” there are regulations that apply to suppliers for cybersecurity and other areas of risk.

Thursday, February 22, 2024

It’s time to give China the pirate state treatment it deserves


I don’t easily get aggravated over what I see in the news, since this would leave me in an almost constant state of aggravation. In fact, I don’t easily get aggravated by news of cyberattacks on US government agencies that are conducted by other governments, since a lot of that has to be assigned to the “We all do it” bucket – meaning all governments spy on each other. The reaction in most of these cases should be to focus on building our defenses as high as possible, which we’re doing constantly.

However, this article from the New York Times this morning did aggravate me, for three reasons. First, the article made clear that the attacks described are being solicited directly by the highest levels of the Chinese government, including Xi Jinping. Second, the attacks aren’t just against the usual government and military targets (since I assume the US government is launching those all the time against similar targets abroad. At least, I hope they are). They’re equally against civilian infrastructure and private businesses.

What I found most aggravating about the article is that China seems to have deliberately given private organizations the green light to launch whatever attacks they want against any targets, public or private, in the US (and in other countries, of course). Moreover, the government encourages these attacks by paying those organizations after the fact, when they produce data or other results that the government considers particularly interesting (or which might help Chinese companies get a leg up on their competition in the US).

It's as if the Chinese government is saying, “Look, the US is still our friend in theory, and we have lots of trade and other relations with them. But we’re fine if you steal whatever you can from them, from both public and private entities. Even if it mainly benefits private Chinese businesses - in fact, especially if it does! They may complain a little, but they’ll be beating a path to our door again real soon.”

There’s a name for a government that acts this way: “pirate state”. That’s a government that encourages its most criminal elements to raid and plunder the citizens of states with which the government is not officially at war. In 2021, I said we need to start treating Russia like the pirate state it is. I’m not sure we did anything to further that goal, but in February 2022, Uncle Vlad made it very easy for us to treat them that way when he invaded Ukraine. Of course, now they’re under lots of sanctions (although evidently not enough), and the US is going to add more soon.

Unfortunately, China is turning more and more into a pirate state themselves. Before they completely go off the deep end and launch massive cyberattacks on the US (perhaps accompanying an invasion of Taiwan), we need to take a much tougher line with them. So far, we’ve mostly greeted news of Chinese attacks as just another reason to strengthen our defenses.

That needs to stop. These attacks are being driven from the top, so we need to go to the top and make it clear we won’t stand for them. One good step would be to take off the table any idea that Xi will be welcome to officially visit the US until he learns better manners. After all, just delaying a visit by the Secretary of State last year seems to have struck a nerve with them – and that delay was over a balloon overflight, for heaven’s sake. Isn’t what’s happening now a lot more serious?

Fortunately, I think China is more likely to listen to reason – when it’s backed by clear evidence of firm intent – than Russia is. Let’s help them along the path to righteousness, before it’s too late and we have to cut off trade with them. That would be as disastrous for the US as it would be for China.


My book "Introduction to SBOM and VEX" is available now on Amazon, in paperback and Kindle editions.

 

Tuesday, February 20, 2024

"Introduction to SBOM and VEX" - Both versions are available!


My book, which I’ve been working on for three years, is now available on Amazon, both in the US and internationally; it is also available with some other international distributors. The Kindle version costs $9.99 while the paperback version costs $25.00 (the content is the same in both versions)[i]. Since the paperback version is printed on demand at the Amazon printing facility closest to you, it is always "in stock" and will be shipped by the next day.

Note to international readers: It seems that at least one Amazon site, amazon.fr, only shows you one version (paperback or Kindle) of the book if you search on the title. However, if you click on the image that comes up, you should be able to order both versions. If anyone is having trouble ordering from any Amazon site, please email me at tom@tomalrich.com.

SBOM (Software Bill of Materials) and VEX (Vulnerability Exploitability eXchange) are machine-readable documents that allow organizations that use software (i.e., just about every organization on the planet) to learn about the most important source of risk in the software they use: the third-party components that make up about 90 percent of the code in most software products in use today.

Why is it important to learn about third-party components? Even though your organization may be satisfied that your software suppliers follow secure development practices in writing their own code, that code only accounts for about 10 percent of the product. To learn about vulnerabilities (and other risks) in the other 90 percent of the product, you need to have current SBOM and VEX documents. 
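To make this concrete, here is a minimal sketch (in Python) of how a software user might combine the two documents. All of the component names, CVE IDs, and document shapes below are invented for illustration; real SBOM and VEX documents use formats like CycloneDX or SPDX, and real vulnerability data comes from feeds like the NVD.

```python
# Illustrative sketch only: names and document shapes are invented,
# not the CycloneDX/SPDX schemas.

# SBOM: the list of third-party components in the product
sbom = {
    "product": "ExampleApp 2.1",
    "components": [
        {"name": "libalpha", "version": "1.4.2"},
        {"name": "libbeta", "version": "0.9.0"},
    ],
}

# A hypothetical vulnerability feed: known CVEs per component version
vuln_feed = {
    ("libalpha", "1.4.2"): ["CVE-2024-0001"],
    ("libbeta", "0.9.0"): ["CVE-2024-0002", "CVE-2024-0003"],
}

# VEX: the supplier's statements about exploitability in *this* product
vex = {
    "CVE-2024-0001": "not_affected",       # vulnerable code not reachable
    "CVE-2024-0002": "affected",           # patch or mitigate now
    "CVE-2024-0003": "under_investigation",
}

def exploitable_cves(sbom, vuln_feed, vex):
    """Return component CVEs that the VEX does not rule out."""
    results = []
    for comp in sbom["components"]:
        for cve in vuln_feed.get((comp["name"], comp["version"]), []):
            if vex.get(cve, "unknown") != "not_affected":
                results.append((comp["name"], cve))
    return results

print(exploitable_cves(sbom, vuln_feed, vex))
# [('libbeta', 'CVE-2024-0002'), ('libbeta', 'CVE-2024-0003')]
```

The point of the VEX step is visible in the output: the SBOM surfaces three component CVEs, but the supplier's "not_affected" statement removes one of them from the list the customer actually needs to act on.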

However, this is where the problem comes in: Software suppliers are already producing SBOMs in large volume to manage vulnerabilities in the products they’re developing, but very few suppliers provide an updated SBOM to their customers with each new version of their product (which is essential – you’ll learn why by reading the book).

Is this because the suppliers are trying to hide problems in the components? That may be true for a small number of them, but most suppliers I’ve talked to say the reason why they’re not providing SBOMs to their customers is that the customers aren’t asking for them. Meanwhile, the customers aren’t asking for SBOMs because, among other reasons, they don’t have low cost, commercially supported tools or services to utilize them.

How can we break this logjam, so that software users will be able to learn about risks in the software they use and work with their suppliers to reduce these risks? That’s the primary concern of this book. While there’s no magic fix for the problems, a workable fix for the most important use case - daily tracking of exploitable component vulnerabilities in software used by the organization - will likely be testable by the second half of 2024. The book closes with a discussion of the OWASP SBOM Forum's idea (perhaps "plan" is too strong a word at this point) to start testing this in a large proof of concept this year.

In writing the book, I kept in mind that, while some readers might already be familiar with every concept discussed in the book, others may have no more than a basic understanding of software security. Rather than focus on one audience or the other, I've identified chapters that the latter folks can safely skip (i.e., without losing the general thread of the book), with the word "Advanced" in the chapter title. You can always return later to review those chapters. See "How to 'shorten' this book" in the Preface.

I hope you’ll read the book and leave your comments on the Amazon page or LinkedIn. I’d also appreciate hearing your comments myself, whether good or bad (believe it or not, I like the bad comments as much as the good ones, as long as they point to something I can change). Just drop me an email at tom@tomalrich.com.

Note: You don’t have to own a Kindle device to read the Kindle version of this book. You can download the free Kindle viewer for iOS, Android, Mac or PC here.


[i] The price will appear in your local currency in most cases. In some currencies, like Indian Rupees, the local price is lower than the US price, in US dollar terms.

Sunday, February 11, 2024

NERC CIP: A big security issue with SaaS


As the NERC community starts to move toward making full use of the cloud “legal” for all systems owned or operated by NERC entities, it is inevitable we will all learn of security issues that only come up with respect to cloud-based systems, and which are most likely not addressed by FedRAMP, ISO 27001/2 and other certifications.

One issue that I have learned about in the last two months, which only comes up with respect to SaaS (software as a service), is called “multi-tenancy”. It comes about when what was previously a software product sold to individual organizations, for installation in their individual environments, is moved to the cloud and offered to many organizations. The problem comes up because:

1.      Many applications have their own database to store user data. Of course, the database will have originally been designed to accommodate multiple users from a single organization. There should always be controls to prevent one user from seeing another user’s data, but they are never completely foolproof. However, since every user is presumably from the same organization, this normally does not create a big problem.

2.      When the application is moved to the cloud and becomes SaaS, often the vendor will assume that the controls that are already in place to prevent one user from seeing another user’s data are adequate to prevent a user from Organization A from seeing data of a user from Organization B, where A and B are both customers of the SaaS product.

3.      Often, this will be a good assumption, especially if there were no problems reported with the existing controls when the product was sold for standalone use.

4.      However, critical infrastructure (CI) is different. CI users are often sensitive to even a small possibility that someone from outside their organization – and especially someone from an organization or country that might be contemplating an attack on their CI in the future – will be able to see their data. When these users, or at least the organizations that employ them, learn that the data of all the SaaS product’s users, no matter where those users reside or work or whom they work for, is stored in a single database, they may well be concerned. And their regulators may be very concerned.
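The risk described in points 1-4 can be sketched in a few lines (Python with SQLite; the table, column, and tenant names are invented for illustration). A query pattern that was safe when one organization owned the whole database becomes a cross-tenant exposure once the same table holds every customer's data:

```python
import sqlite3

# One shared table for all SaaS customers ("tenants") - the multi-tenant design
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (tenant TEXT, username TEXT, data TEXT)")
db.executemany("INSERT INTO records VALUES (?, ?, ?)", [
    ("org_a", "alice", "Org A grid diagram"),
    ("org_b", "bob",   "Org B substation list"),
])

def all_records_for_report():
    # Fine in the standalone product, where one organization owned the whole
    # database. In a shared SaaS database, this now returns every tenant's data.
    return db.execute("SELECT data FROM records").fetchall()

def records_scoped(tenant, username):
    # The multi-tenant-safe version: the tenant ID is a mandatory filter
    # on every query, ideally enforced in one place in the data layer.
    return db.execute(
        "SELECT data FROM records WHERE tenant = ? AND username = ?",
        (tenant, username),
    ).fetchall()

print(all_records_for_report())          # both tenants' rows come back
print(records_scoped("org_a", "alice"))  # [('Org A grid diagram',)]
```

The vendor's assumption in point 2 amounts to hoping every query in the old codebase already behaves like `records_scoped`; the concern in point 4 is that a single query behaving like `all_records_for_report` is enough to expose one tenant's data to another.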

Two months ago, I learned of a previously standalone software product that was moved into the cloud as SaaS, without making any changes to the database. Users from all over the world and all industries are using the common database.

A staff member of the vendor assured me that none of their users had even mentioned this issue, let alone objected to it. However, it is safe to say that no NERC entities with high and/or medium impact BES Cyber Systems are using the SaaS product now (the standalone product is still being offered, although that will end in the future). I stated my opinion that such NERC entities may well have objections when they hear they will have to share this database.

I want to make it clear that “multi-tenant” databases (where “tenant” refers to separate organizations, not to individuals within an organization) are not in any way “forbidden” by NERC CIP. Of course, since the existing CIP standards were all drafted without any consideration of the cloud, this isn’t surprising. And even though a new standards drafting process has been scheduled to start in July, it isn’t at all certain that the new standards will in any way restrict or prohibit multi-tenancy in SaaS applications used by NERC entities.

This is because it isn’t clear whether and how multi-tenancy poses a risk to organizations that use SaaS – and if it does, what exactly that risk is – especially when you consider that eliminating multi-tenancy altogether (i.e., giving each organization using the SaaS product its own database instance) would be very expensive for the SaaS provider, who would need to pass this cost on to their customers. Any “solution” to this problem would have to be weighed against its cost by considering risk: i.e., will the dollar value of risk avoided by the solution be greater than the dollar value of the costs of providing that solution?

Because this isn’t an open-and-shut cybersecurity question (like the question whether a critical infrastructure system should require strong passwords or multi-factor authentication), my guess is there won’t be a specific requirement – in whatever comes out of the new “Cloud CIP” standards drafting process – forbidding multi-tenancy. At most, there may be a requirement for the NERC entity to consider this and other risks when choosing a new SaaS provider, and document how they (and/or the provider) are addressing those risks.

In fact, maybe this will just be an item for auditors to look at in a performance audit, and document an Area of Concern (with recommended mitigation steps and a fixed timeline) when needed. Not everything has to be a requirement with $1MM/day penalties!


I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.

 

Friday, February 9, 2024

SBOM regulations are batting .000


I learned long ago about a debilitating occupational disease that afflicts people involved in the cybersecurity field. Here is how the disease evolves:

1.      Cybersecurity people know that their future depends in great measure on having a large and growing market for whatever they have to offer: consulting services, software tools, writing books, organizing trade shows, etc.

2.      They’re also humble enough to realize that whatever their selling points may be, there are lots of other people and organizations with similar selling points. There are very few of us (certainly including me!) that can count on future success based solely on our innate brilliance and charisma.

3.      Given the above, what’s plan B? Even though you might be smart, responsive, insightful and everything else a customer could possibly want, there’s something else required for you to be successful in the long run.

4.      Can you guess what it is? You’re right! If the market is growing (and the more rapidly, the better), it doesn’t matter that you have a lot of equally smart, responsive, insightful competitors. The need for whatever you have to sell is growing fast enough that there will always be a shortage of smart, responsive and insightful people and organizations. You may not be the cream of the crop in your particular niche, but you don’t need to be that, either. Even though there are some first-tier players who will be the first ones hired, there will always be enough remaining business so that you and the other second-tier players (nobody ever says they’re third tier, of course) can still do very well, thank you.

What can make the market for cybersecurity services and tools grow? I can think of three scenarios:

1.      From previously not being too concerned about cybersecurity, a large segment of managers and directors of private and public organizations suddenly wake up one morning and decide they need to get their security act together and fast. In other words, all the lessons they’ve been hearing about the need for cybersecurity suddenly start making sense to them, without being driven by any external events; they suddenly call back the security people who have been cold calling them for years and ask when they can start to work for them (or ship them their latest and greatest tool). I assign a probability of somewhere between .00001 and -1.0 to this scenario.

2.      There’s a huge and devastating cyberattack, or even better, a series of devastating cyberattacks on different types of organizations. Fortunes are lost, secrets are exposed, lives are ruined, etc. This has a somewhat higher probability than the first scenario, although this is still unlikely.

3.      Tough regulations are imposed, which effectively require every organization to finally open their checkbooks and buy as many cybersecurity services and tools as possible. What’s the penalty? Death would be best, but maybe just big fines, a few years in the slammer… Be creative.

What’s the probability of the third scenario being realized? That’s the question…

I bring this up because a lot of people in the software bill of materials (SBOM) “industry” seem to be trying hard to convince everybody (including me, and perhaps themselves) that regulations are coming Real Soon Now. Those regulations will make it mandatory for software suppliers to provide SBOMs to their customers – and on a regular basis, not just as a one-off.

I used to be one of these people. When it became known in early 2021 that the White House was working on an executive order (EO) regarding cybersecurity, I learned it was likely that SBOMs would be addressed in the new EO. When that order came out, I convinced myself that, even though the only “requirement” for SBOMs was that federal agencies start asking for them from their suppliers, there would be a real requirement when the Office of Management and Budget (OMB) put out their guidance in about a year’s time (as required by the EO).

However, when OMB put out their guidance in September 2022, it left it up to each federal agency to decide whether and how to ask for SBOMs from suppliers. At that point (or really, before then), I decided it was unlikely that SBOMs would become a requirement under the EO.

Nevertheless, there were still the changes to the Federal Acquisition Regulation (FAR) that were required (using very general language) in EO 14028. When these changes were implemented, would those changes effectively make it mandatory for suppliers of software to federal agencies to provide regular SBOMs to those agencies? A lot of people clung to that hope, especially when it was announced in October that the FAR changes being proposed “would, among other things, require contractors to develop a Software Bill of Materials — or SBOM — for all software used when performing contracting tasks.”

Unfortunately, last week those hopes were set back, as described in this article:

SBOMs, or itemized lists of components that make up software products, have been widely viewed as a helpful tool in advancing software security by enabling organizations to identify potential exposures in their technology. But some argue that requiring SBOMs is cumbersome because various regulations have defined their scope differently. Lawmakers notably excluded a federal contractor SBOM measure from a must-pass defense policy bill in 2022. 

Most contractors “do not create their own software and instead use commercial off-the-shelf products for which SBOMs might not be readily available and may need to be generated specifically for the contractor and government transactions,” said a comment filed by Andrew Howell of the Operational Technology Cybersecurity Coalition, a group representing industrial control systems vendors.

The OTCC comments add that a separate SBOM memorandum from the Office of Management and Budget does not match that of the proposed rule, arguing that such a dynamic would give contractors a headache. The OMB memo lists SBOMs as an optional entity that can be provided upon request, while the contractor directive requires SBOMs be listed for all software used in a contracting job, regardless of a cybersecurity incident.

SBOM community members have also placed their hopes in two new sets of regulations, which at one point seemed likely to require SBOMs:

1.      The FDA’s Premarket Guidance for medical devices, which came out in October. Many people hoped this would require that medical device makers (MDMs) provide SBOMs to their customers (mostly hospitals) regularly. However, the FDA merely required that the MDM provide a single SBOM to the FDA with their required “premarket submission”, which is required for them to be allowed to market their device in the US. Moreover, this SBOM won’t be shared with any entity outside of the FDA.

2.      The EU Cyber Resilience Act, which has not received final approval from the EU Parliament but now appears to be in close-to-final draft form, was also expected to require that SBOMs be distributed to software and device customers on a regular basis. However, the draft (now approved by the EU Council) includes only the following statement regarding SBOMs (in Annex I, page 166): “Manufacturers of the products with digital elements shall…identify and document vulnerabilities and components contained in the product, including by drawing up a software bill of materials in a commonly used and machine-readable format covering at the very least the top-level dependencies of the product.” In other words, the CRA merely requires the software or device manufacturer to “draw up” an SBOM and keep it; it says nothing about providing it to customers or anyone else.
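For concreteness, the CRA’s floor (a machine-readable SBOM covering at least the product’s top-level dependencies) could be satisfied by something as small as the following sketch of a CycloneDX-format document. This is purely illustrative; the product name, component names, versions, and purls are invented, and a real manufacturer would likely generate this with a build tool rather than by hand:

```python
import json

# A minimal CycloneDX-style SBOM listing only top-level dependencies,
# roughly the floor described in the CRA's Annex I language. All names,
# versions, and purls below are invented for illustration.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "metadata": {
        "component": {
            "type": "application",
            "name": "example-device-firmware",
            "version": "2.4.0",
        }
    },
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "purl": "pkg:generic/openssl@3.0.13",
        },
        {
            "type": "library",
            "name": "zlib",
            "version": "1.3.1",
            "purl": "pkg:generic/zlib@1.3.1",
        },
    ],
}

# Serialize to the "commonly used and machine-readable format" the CRA asks for.
print(json.dumps(sbom, indent=2))
```

Note that nothing in this sketch says anything about distribution; under the current CRA draft, a document like this could simply sit in the manufacturer’s records.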

In all three cases, we’ve been shown once again that regulators have little appetite for establishing mandatory SBOM requirements. The reason is clear: there is currently no agreement among regulators, manufacturers, or end users on what an SBOM should include or how such a requirement should be implemented.

Here’s an idea: let’s put a moratorium of at least five years on the notion that SBOMs can somehow be regulated into widespread use. I can think of no case in which a technology (other than a safety or health measure like seat belts or lists of ingredients in food) has been brought into use by regulation. Why should SBOMs be any different?

What can be done is to a) identify what barriers are currently preventing SBOMs from being put into widespread use, and b) identify how those barriers can be removed. The OWASP SBOM Forum is currently doing exactly this, especially regarding the naming problem and VEX. If you’d like to hear more about this, drop me an email.  

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.

 

Thursday, February 1, 2024

“Forking” open source components

 

Note: My book “Introduction to SBOM and VEX” should be available on Amazon within 1-2 weeks (I’ll let you know when it’s available, of course). This is a section from the book. It addresses a topic I’ve always meant to address in a blog post – now I get two for the price of one!

Software suppliers often edit open source components before they incorporate them into their product. This is called “forking” the component, since the supplier has now created a component different from the one they started with. They do this for multiple reasons, including fixing a vulnerability in the component and changing its functionality. Of course, under most open source licenses, this is a completely legitimate practice.

However, when they do this, the supplier needs to keep in mind that they should no longer list the original component in their SBOM. Because the code has been altered, there is no longer any assurance that the forked component is subject to the same vulnerabilities as the original component, or that it is free of vulnerabilities that the original component doesn’t have.

The supplier has the option of renaming the forked component and setting it up as a separate project on GitHub; they would then need to report vulnerabilities for it, make patches publicly available, etc. However, it will probably be much easier for the supplier just to treat the forked component as part of their own code, meaning from now on they will report all vulnerabilities under the product’s identifier.
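To make the identity question concrete, here is a small sketch of how a component’s SBOM entry might change when a supplier forks it. All names, versions, and purl values are invented for illustration; the point is simply that the forked code gets its own identifier, so vulnerability reports filed against the upstream identifier no longer attach to it automatically:

```python
# Illustrative only: invented names, versions, and purls.
# The upstream component, identified by its original purl. CVEs reported
# against this purl are presumed to apply to this exact code.
original = {
    "name": "libwidget",
    "version": "1.8.2",
    "purl": "pkg:github/widgetproject/libwidget@1.8.2",
}

# After the supplier patches the code, it is a fork. Reusing the upstream
# purl would wrongly imply the fork has the same vulnerability status, so
# the fork gets an identity the supplier controls - and the supplier now
# takes on the job of reporting vulnerabilities for it.
forked = {
    "name": "libwidget-acme-fork",
    "version": "1.8.2-acme.1",
    "purl": "pkg:github/acme/libwidget-acme-fork@1.8.2-acme.1",
}

# The two identities must differ; otherwise tooling that matches CVEs to
# purls will silently conflate the fork with the upstream component.
assert forked["purl"] != original["purl"]
print("fork identity:", forked["purl"])
```

The alternative the paragraph above describes (folding the fork into the supplier’s own code) amounts to dropping the separate entry entirely and reporting vulnerabilities under the product’s own identifier instead.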
