Wednesday, March 5, 2025

Secure by Design has produced a great design, but not great security

 

Chris Hughes produced a great post this week, which I hope will wake a lot of people up. The post was inspired by an article by former CISA Director Jen Easterly; by some good luck, the article is still available on CISA’s website.

The article makes a point that Ms. Easterly has been making for at least a few years: that software developers aren’t doing enough to build security into their products. As a result, the public (and government) is still paying a huge price for the damage caused by attackers exploiting security bugs in software. Last May, CISA launched their Secure by Design pledge, which over 250 companies have taken. Ms. Easterly states, “Each new signer commits to prioritizing security at every stage of software development, creating a ripple effect that raises the standard across the industry.”

She concludes by saying, “The Secure by Design pledge, the resulting progress pledge signers are making, and the bipartisan support for cybersecurity all signal that there is a groundswell of support mounting behind this movement…I see individuals and organizations across the entire ecosystem—open source communities, universities, insurers, Venture Capital firms, researchers, and regulators—playing their part to prioritize and incentivize security. Best of all, I see our nation embracing this movement to make secure, resilient software, creating a safer digital world for everyone.”

Ms. Easterly continually draws an analogy to the automobile industry: cars used to be built with little concern for safety, but a combination of public pressure and strong federal regulation (along with California’s pioneering regulations) turned that situation around quickly. She believes this precedent shows that Secure by Design can work for the software industry as well.

However, Chris points out that, while there are only about 12-15 major car manufacturers and fewer than 1,000 manufacturers in total worldwide, there are easily hundreds of thousands of software developers, including a lot of one-person shops. So, just getting a relatively small number of huge developers to sign the pledge isn’t going to do much to secure software in general.

Moreover, Chris isn’t optimistic that signing a pledge will make a difference for most commercial software developers. After all, they’re in business to make money. He points to two big developers that constantly tout their focus on security, both of which have reported large breaches in recent years. It isn’t that these companies are hypocrites; rather, it’s that revenue and speed to market will always take precedence over security for commercial products. We shouldn’t expect that to change quickly, if ever.

Even more importantly, there’s the fact that at least 70% (I’ve heard 90%) of the code in any modern software product consists of open source components. Chris says, “The number of open source components in products has tripled in the last four years alone.” Of course, since open source development communities have constant turnover, it’s hard to identify who could even make a Secure by Design pledge, or what it would mean if they made one.

Having made his point that it’s naïve to expect Secure by Design, on its own, to do much to slow the rapid growth in software vulnerabilities, Chris says in effect that we need to accept that there will always be software vulnerabilities; moreover, they are likely to continue increasing at an increasing rate (i.e., the second derivative is positive).

Therefore (and here I’m going beyond what Chris said in his post), the software security community needs to give up the idea that vulnerabilities can be prevented at the source. Instead, we should focus on managing vulnerabilities at scale. Only by doing that can we be sure that whatever resources we have for that purpose are used to produce the greatest possible “bang for the buck” – i.e., mitigate as much vulnerability risk as possible.

What do we need to accomplish that goal? One thing is absolutely required if software vulnerability management is to be possible at all: when a user wants to learn about recently identified vulnerabilities in a software product they use, or in a component of the product they have learned about through an SBOM, they should be able to search the National Vulnerability Database (NVD) or another vulnerability database and immediately see all recent CVEs that affect the product.
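As a concrete illustration of what such a lookup looks like today, the NVD’s CVE API 2.0 accepts a product’s CPE name as a query parameter. The sketch below only builds the query URL (no network call is made), and the vendor and product names in the example CPE are hypothetical:

```python
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_cve_query_url(cpe_name: str) -> str:
    """Build an NVD CVE API 2.0 query URL that asks for all CVEs
    recorded against the product identified by this CPE name."""
    return f"{NVD_API}?{urlencode({'cpeName': cpe_name})}"

# Hypothetical product CPE, just to show the shape of the query.
url = nvd_cve_query_url(
    "cpe:2.3:a:example_vendor:example_product:1.0:*:*:*:*:*:*:*"
)
print(url)
```

The catch, discussed below, is that this query can only succeed if (a) the user already knows the exact CPE name and (b) the CVE records for the product actually contain one.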

Moreover, the user should not need to conduct a separate search just to find the identifier for the product; the identifier should be self-evident to them, based on information they already have. The identifier that makes this possible is purl, which stands for “package URL”.
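To make that concrete: a purl follows the pattern pkg:type/namespace/name@version, so a user who knows a package’s coordinates in its repository can write its identifier down directly. Here is a minimal, illustrative parser; it ignores the qualifiers and subpath fields that the purl specification also defines:

```python
def parse_purl(purl: str) -> dict:
    """Split a package URL into type, name, and version.
    Minimal sketch: namespace segments (e.g. a Maven group ID) are
    dropped, and qualifiers/subpaths are not handled."""
    if not purl.startswith("pkg:"):
        raise ValueError("not a purl")
    rest = purl[len("pkg:"):]
    type_, _, remainder = rest.partition("/")
    name_path, _, version = remainder.partition("@")
    name = name_path.rsplit("/", 1)[-1]  # keep only the final name segment
    return {"type": type_, "name": name, "version": version or None}

print(parse_purl("pkg:npm/lodash@4.17.21"))
# {'type': 'npm', 'name': 'lodash', 'version': '4.17.21'}
```

The point of the exercise: anyone who installed lodash 4.17.21 from npm already has everything needed to construct that purl, with no central dictionary lookup.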

The purl identifier needs to be implemented in the CVE ecosystem, so that CVE Numbering Authorities[i] (CNAs) can utilize a purl to identify an open source software component that is affected by a CVE they report. Of course, currently CPE is the only software identifier supported by the CVE[ii] Program; that doesn’t have to go away. But purl has a lot of advantages over CPE for use in open source vulnerability databases that support CVE, such as OSS Index (which is based entirely on purl, not CPE).
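The practical difference between the two identifiers can be shown side by side. Both strings below are illustrative, not authoritative database entries: a CPE name has to be looked up in the NVD’s CPE dictionary, while a purl is derived mechanically from coordinates the user already has:

```python
# The same open source component, identified two ways (illustrative values).

# CPE: centrally assigned; the user must search the CPE dictionary to
# discover the exact vendor/product strings someone else chose.
cpe = "cpe:2.3:a:pivotal_software:spring_framework:5.3.0:*:*:*:*:*:*:*"

# purl: built directly from the Maven coordinates in the user's own
# build file, so no dictionary lookup is needed.
purl = "pkg:maven/org.springframework/spring-core@5.3.0"

print(cpe)
print(purl)
```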

There will be a workshop on incorporating purl into the CVE Program during the VulnCon conference in early April. The goal of the workshop will be to describe the benefits of allowing purls to be included in CVE Records and to determine whether there is support in the vulnerability management community to move forward with a proof of concept for this. If you will be at VulnCon, you should try to attend this workshop.

However, even after purl is incorporated into the CVE program and software users can look up open source components or products in CVE-based vulnerability databases by searching with a purl, one problem will remain: Purl’s biggest limitation is that it primarily supports open source software found in package managers, leaving CPE as the only identifier for commercial software products.

But CPE has multiple problems, the most important being that more than half of the CVEs identified in 2024 don’t include a CPE identifier for the affected product. This means a search of the NVD for the product using a CPE name won’t find those CVEs at all. Thus, no search of the NVD, or any other vulnerability database that relies on CPE, for a commercial software product will identify most recent CVEs that apply to it.

The OWASP SBOM Forum has developed a high-level concept to extend purl to cover proprietary software; it is described in this document. We want to make it more than just a concept; we want to make it a reality. Please let me know if you would like to discuss this further.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] For an introduction to CVE.org and the CVE Program, which includes the CNAs, see this post.

[ii] If you are confused by discussion of CPE vs. CVE, remember this: they’re both identifiers, but CPE is an identifier for software products used by the NVD (and other databases derived from it), while CVE is an identifier for vulnerabilities themselves. CVE is by far the most widely used vulnerability identifier.
