Tuesday, March 5, 2024

It’s time to start paying our technical debt

 

The Wall Street Journal recently carried a well-written column[i] on a problem that has been discussed a lot in the software and military communities, but not in the broader corporate environment: “technical debt”.  The columnist, Christopher Mims, states the problem very well:

Underneath the shiny and the new, lurking in IT systems where it creates security vulnerabilities and barriers to innovation, is an accumulation of quick fixes and outdated systems never intended for their current use, all of which are badly in need of updating.

This technical debt would require $1.52 trillion to fix, and costs the U.S. $2.41 trillion a year in cybersecurity and operational failures, failed development projects, and maintenance of outdated systems, according to a 2022 report by a software industry-funded nonprofit. That’s more than 2.5 times what the U.S. government pays in annual interest on the national debt.

As might be expected, this problem has many facets. One of the most important is that old software is likely to contain a lot of vulnerabilities, which I hereby designate “technical security debt”. How can technical security debt be “paid down”? The answer at first glance seems quite simple: Scan your software regularly[ii], then patch all the old vulnerabilities. However, there are some big problems standing in the way of doing this.

The most important of those problems, for many if not most organizations, is that they already have a long queue of patches to apply. No matter how much effort they put into reducing the size of the queue, new vulnerabilities are being added to it all the time. In fact, most organizations probably realized long ago that they’re unlikely to reach the end of that queue in any time short of the remaining life of the universe. This means that organizations (public and private) need to have a prioritization methodology to decide which systems and which vulnerabilities will be patched first.

A vulnerability patch prioritization methodology can be based on vulnerability scores like EPSS and CVSS, as well as other measures like CISA's Known Exploited Vulnerabilities (KEV) catalog – and there are much more sophisticated methodologies as well. Since there's no perfect methodology, the best that can be done is to continuously refine the one the organization already has, so that every dollar spent on patching is applied where it will do the most good.
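To make the idea concrete, here is a minimal Python sketch of one way those signals could be combined into a patching order. The weights, the Finding structure and the sample CVE entries are all illustrative assumptions on my part, not a recommended formula; a real methodology would pull current EPSS scores and KEV status from their published feeds and would weigh asset criticality as well.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float     # CVSS base score, 0.0 - 10.0
    epss: float     # EPSS probability of exploitation in the next 30 days, 0.0 - 1.0
    in_kev: bool    # listed in CISA's Known Exploited Vulnerabilities catalog?

def priority(f: Finding) -> float:
    # Blend the signals into one sortable number; higher means patch sooner.
    score = 0.4 * (f.cvss / 10.0) + 0.6 * f.epss   # weights are purely illustrative
    if f.in_kev:
        score += 1.0                               # known-exploited issues jump the queue
    return score

findings = [
    Finding("CVE-2024-0001", cvss=9.8, epss=0.02, in_kev=False),   # made-up sample data
    Finding("CVE-2023-1111", cvss=7.5, epss=0.89, in_kev=True),
    Finding("CVE-2022-2222", cvss=5.3, epss=0.01, in_kev=False),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.cve_id}: priority {priority(f):.2f}")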

Another problem: It's indisputable that you can't apply a patch you don't have. On rare occasions, an organization whose primary activity is not software development (i.e., just about every organization on the planet) will be able to develop a patch for a third-party software product itself. However, in almost all cases, properly mitigating software vulnerabilities requires that the supplier of the software (which might be an open source community) do two things:

1. Develop a patch for the vulnerability and make it available to their customers, and

2. Report the vulnerability to CVE.org or another organization that tracks and publishes information on software vulnerabilities. This should be done as soon as possible after the patch has been made available. The CVE report should include patch information.

Are software and intelligent device suppliers currently doing both of these things? Large software suppliers are doing both, and the largest of those suppliers seem to be doing an excellent job – although, not surprisingly, the record gets spottier as you look at smaller suppliers.

However, intelligent device manufacturers of all sizes are doing a dismal job in both areas. Many device manufacturers, including some of the largest medical device manufacturers, don’t provide patches for their products more than once a year, and some less frequently than that. Moreover, it seems that few device manufacturers report vulnerabilities in their products to CVE.org at all. Until the manufacturers get their act together (at least by releasing quarterly vulnerability patch updates for their products and reporting vulnerabilities once the update has been released), “device security” will remain an oxymoron.

Another problem that inhibits “payment” of technical debt is the “naming problem”. This refers to the fact that most software products are known by different identifiers (purl, CPE name, OSV identifier, etc.) in different contexts. A user may know one identifier for a software product (perhaps a purl identifier, in the case of an open source project), but if they try to look it up in a vulnerability database that utilizes a different identifier (for example, the National Vulnerability Database, which uses CPE), they will likely come up empty-handed.
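To see the mismatch in miniature, consider how the same component – Log4j Core 2.14.1 – is named in the two ecosystems. The purl is real; the CPE string shown is illustrative of the NVD's format rather than copied from a database record.

# Two identifiers that both refer to Log4j Core 2.14.1 (the CPE form shown is illustrative).
log4j_purl = "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
log4j_cpe = "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"

# Nothing in either string lets a program derive one from the other, so a lookup
# keyed on one identifier silently misses records keyed on the other.
purl_keyed_db = {log4j_purl: ["CVE-2021-44228"]}
print(purl_keyed_db.get(log4j_cpe, "no results"))   # prints "no results" even though the record exists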

Many security professionals don’t even realize this is a problem, since almost everything written about vulnerability management today (including most of what has been written about using SBOMs to manage component vulnerabilities) assumes that learning about vulnerabilities in a software product or component just requires a single database lookup. Would it were so easy!

Many people believe (as I did until a couple of years ago) that the only way this problem will be solved is by choosing a single identifier and convincing the whole world – including all vulnerability databases – to utilize only that identifier. However, this way of thinking ignores the fact that different identifiers serve different purposes and can't simply be swapped for one another (it also ignores the fact that trying to enforce such a massive change would be an exercise in futility).

For example, purl is now the leading identifier for open source software in vulnerability databases, but it doesn’t cover all open source types and it can’t currently be used to identify proprietary software products. CPE is by far the leading identifier for proprietary software products, but it has a lot of deficiencies, as described on pages 4-6 of this paper by the OWASP SBOM Forum. CPE is also currently the only identifier for intelligent devices, even though it has a lot of limitations for that purpose as well. The SBOM Forum’s paper describes (on pages 12-13) a much better identifier for devices, which has the huge advantage of already being in widespread use in global trade.

However, despite the limitations and deficiencies of purl and CPE, they are already in widespread use in vulnerability databases; there are few if any major vulnerability databases that are not based on one or the other. What would be the point in choosing one of them (or even worse, choosing a different identifier altogether), and trying to force all vulnerability databases and software suppliers to standardize on it?

The usual answer, which is at least implicit in any statement about the need to have one identifier, is that only by having a single identifier will it ever be possible to have a universal vulnerability database. I agree that having a universal database (which I call the “Global Vulnerability Database”) should be the goal, but it doesn’t have to be achieved by shutting down all the other vulnerability databases worldwide and merging their content into a single “harmonized” structure. Instead, as described in this post, the existing databases could all continue to be maintained separately[iii], but there could be a centralized “switching hub” to route queries among the different databases.
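Here is a toy sketch of that routing idea, with two dictionaries standing in for a purl-keyed back end (OSV-style) and a CPE-keyed back end (NVD-style). The hub logic and the stand-in data are my assumptions, not a design anyone has committed to; the point is only that each existing database keeps its own identifiers, and the hub routes each query by format.

# Toy "switching hub": send each query to whichever back end understands the identifier.
PURL_DB = {"pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1": ["CVE-2021-44228"]}
CPE_DB = {"cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*": ["CVE-2021-44228"]}

def lookup(identifier: str) -> list[str]:
    # Neither back end has to change how it names products; the hub just routes by format.
    if identifier.startswith("pkg:"):    # purl
        return PURL_DB.get(identifier, [])
    if identifier.startswith("cpe:"):    # CPE 2.3
        return CPE_DB.get(identifier, [])
    raise ValueError(f"Unrecognized identifier: {identifier}")

print(lookup("pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"))
print(lookup("cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"))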

When you consider this possibility, the naming problem disappears, at least in its vulnerability management variant. (There are other variants, such as the need to track a single product through mergers and acquisitions, or the need for a single product name in legal documents; those require different solutions.) At that point, the problem changes from not having a single identifier (the naming problem) to not having a single "query point" for all software and device vulnerabilities (the "vulnerability database problem"). I am calling that query point the Global Vulnerability Database.

To summarize what I’ve just said, the naming problem will never be “solved” for vulnerability management purposes, in the sense that there will be a single identifier for every software product and intelligent device in vulnerability databases. However, the “vulnerability database” problem, in the sense that all vulnerability information will be accessible from a single query point, can almost surely be solved – and probably within a few years.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For context and the link to order it, see this post.


[i] If you aren’t a WSJ subscriber, you will run into a paywall at this link, asking you to subscribe (although it may offer you a free trial subscription). I can’t reproduce the article here, but I can email you a PDF if you send me an email.

[ii] Another step that software users should take is to utilize software bills of materials (SBOMs), provided by the suppliers of the software they use, to identify the components of that software – and then to use an open source tool like Dependency-Track to identify vulnerabilities in those components. However, given that suppliers are not regularly distributing SBOMs to their users, it's no surprise that users aren't taking advantage of them. My book, Introduction to SBOM and VEX, discusses this problem and the steps required to solve it.
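As a rough illustration of the first step, the sketch below pulls component purls out of an SBOM so they can be fed to a vulnerability lookup; the file name and the assumption that the SBOM is in CycloneDX JSON format are mine. A tool like Dependency-Track automates this end to end.

import json

# Read a CycloneDX-format SBOM (assumed file name) and collect each component's purl.
with open("sbom.json") as f:
    sbom = json.load(f)

purls = [c["purl"] for c in sbom.get("components", []) if "purl" in c]
print(f"Found {len(purls)} components with purls to check for vulnerabilities")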

[iii] Although they could be continually “mirrored” to a single physical location to minimize delays in responding to queries.
