Sunday, May 4, 2025

“No funding issue” and other fairy stories

On April 23, Jen Easterly, former Director of CISA, put up a post on LinkedIn about the bizarre episode of April 15 and 16. On April 15, a leaked letter to CVE Board members from Yousry Barsoum, VP and Director of MITRE, revealed that the next day “the current contracting pathway for MITRE to develop, operate and modernize CVE and several other related programs…will expire.”

This led to a virtual firestorm in the software security community, since there is currently no replacement for the CVE Program; shutting it down abruptly would inevitably disrupt software security efforts worldwide. However, the next day, CISA announced “Last night, CISA executed the option period on the contract to ensure there will be no lapse in critical CVE services.” Thus, it seems the cavalry arrived in time to save the fort.

Ms. Easterly wrote a week later:

Today, CISA's Deputy Executive Assistant Director for Cybersecurity's Matt Hartman released a statement committing to the sustainment and evolution of the CVE program, including "to fostering (sic) inclusivity, active participation, and meaningful collaboration between the private sector and international governments to deliver the requisite stability and innovation to the CVE Program." The statement also clarified that there was no actual funding issue but rather an "administrative issue" that was resolved prior to a contract lapse.

In stating that “there was no actual funding issue”, she obviously intended to give comfort to her readers. After all, “administrative issues” happen all the time and don’t kill whole programs, while funding issues do kill programs. Therefore, she’s saying the worldwide alarm caused by Mr. Barsoum’s letter was misplaced. Move along…nothing to see here.

Unfortunately, this raises the question of why Mr. Barsoum put out his letter and sent it to all the members (20+) of the CVE.org board, if the issue was so trivial. Why didn’t he just pick up the phone, find out what the “administrative issue” was and get it fixed? It also ignores the fact that a) many federal government programs have recently been cancelled overnight with no advance warning at all, and b) CISA is known to be in the process of letting a number of employees go, which almost always means some programs will need to be sacrificed as well.

In other words, Mr. Hartman’s assertion, and Ms. Easterly’s repetition of it, missed the main lesson of this whole sorry affair. To provide some background, the CVE Program was started from nothing in 1999. From the beginning, it was run by MITRE (in fact the idea for CVE first appeared in a paper written by two MITRE staff members that year), although it wasn’t called the “CVE Program” then. Since MITRE was already a US government contractor, it made sense for the government to engage MITRE to run the program. It can truly be said that the CVE Program might not exist at all today, were it not funded by the US government.

However, things change. Today, both governments and private industry worldwide are concerned about software vulnerabilities and rely on the CVE Program to faithfully identify and catalog those vulnerabilities. Given the worldwide use of CVE data, there is no reason why the US government should remain the sole funder of the program.

Yet that is exactly what Ms. Easterly advocates in the remainder of her post. She says, “Some parts of cybersecurity can and should be commercialized. Some should be supported by nonprofits. But vulnerability enumeration, the foundation of shared situational awareness, should be treated as a public good. This effort should be funded by the government and governed by independent stakeholders who are a balanced representation of the ecosystem, with government and industry members. CISA leading this effort as a public-private partnership assures the program is operated in service of the public interest.”

In other words, she thinks the private sector shouldn’t be funding the CVE Program, since it’s a public good that should only be funded by the public – i.e., the government (and CISA in particular). That would be wonderful if we lived in a world where the government were always willing to fund cybersecurity initiatives and always stood behind its commitments. However, the likelihood that the CVE Program was almost shut down because somebody who has no idea what it is decided it was a good candidate for defunding – and I’m not going too far out on a limb in saying this – is, in my mind, prima facie evidence that its entire funding should not come from the US or any other government.

To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome, but I will treat any donation of $25 or more as an annual subscription fee. Thanks!

But let’s suppose Mr. Hartman was correct in asserting there was no “funding issue”. In my (reasoned) opinion, that makes the case against exclusive government funding even stronger. Mr. Barsoum was clearly concerned that the CVE Program would be shut down, which strongly implies he knew the reason that might happen. If the reason was simply an administrative error – e.g., somebody forgot to check a box on some form – this means we’ll need to start worrying not only about funding cutoffs to the program, but about any administrative error that anybody at CISA, DHS, etc. might make. Does that give you a warm and fuzzy feeling?

I’m sorry, Ms. Easterly, but the CVE Program needs to be moved away from the federal government, although I hope the feds will still provide some of its funding. This doesn’t have to happen tomorrow, but it should at least be done when the contract has to be renewed next March; this is especially important since it’s quite likely the contract won’t be renewed then, anyway. If the software security community gets caught flat-footed again next year, we will have nobody to blame but ourselves. Tragedy repeated is farce.

Fortunately, the cavalry is already onsite and is planning for that eventuality. I’m referring to the CVE Foundation, a group that was already holding informal discussions before April 15, but which had not been formalized before then. When I saw the first announcement of it on April 16 – the announcement only had one name on it – I thought it might be a late April Fool’s Day prank. But the following week, it became clear that they have a great lineup of heavy hitters currently involved in the program, including CVE.org board members, heads of CVE working groups, and representatives of private industry.

Last week, this became even clearer, when I heard Pete Allor of Red Hat, CVE Board member and Co-Chair of the CVE Vulnerability Conference and Events Working Group, describe the success the CVE Foundation has had so far. They’ve lined up large companies and governments who have said they will be ready with funding when it comes time to make the break with Uncle Sam. (I certainly hope my dear Uncle will get over his hurt feelings and realize that a child leaving home because they have outgrown the need for incubation is an occasion for rejoicing, not barely concealed anger. After all, DNS was nurtured by the National Telecommunications and Information Administration – NTIA – decades ago. When it was time to let DNS leave home, it found a truly international home in ICANN. At last report, DNS is still alive and well 😊. Perhaps CVE will find a similar home.)

Fortunately, you don’t have to take my word for what Pete said. Last Thursday, Patrick Garrity of VulnCheck posted a link to an excellent podcast in which Pete went into a lot of detail on why the CVE Foundation was…well, founded, and the success they have had so far in lining up support (although he didn’t name potential financial supporters, of course). Then on Friday, Pete elaborated on what he’d said, under withering questioning by me and others at our regularly scheduled OWASP SBOM Forum meeting.

So, you don’t need to worry about whether the CVE Program will survive more than 11 months longer; the answer is yes. The real question is what changes need to be made to the program, both in the intermediate term and the longer term. Those will be interesting discussions, and I’m already trying to spark them. Stay tuned to this channel!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And while you’re at it, please donate as well!


Friday, May 2, 2025

NERC CIP in the cloud: What are the real risks?

In a recent post, I described a document that was recently emailed to the “Plus list” for the NERC Standards Drafting Team (SDT) that is working on removing the barriers to full use of the cloud by NERC entities subject to CIP compliance. The well-written document, which has no official status, is a “discussion draft” of a new standard: CIP-016.

Like NERC Reliability Standards, the document includes a set of suggested requirements. Each suggested requirement loosely refers to one of the CIP standards. The author of this document assumes that CIP-016 will just apply to systems deployed in the cloud. On-premises systems will continue to be required to comply with standards CIP-002 through CIP-014, but those standards will now be understood not to apply to use of the cloud by those systems.

The suggested requirement that refers to CIP-013 reads, “The Responsible Entity shall perform risk assessments of cloud providers...This includes ensuring that all cloud providers comply with relevant security standards (e.g., SOC 2, FedRAMP).”

In other words, to comply with this suggested requirement, the NERC entity will need to:

1.      Perform a risk assessment of each cloud (service) provider, which presumably includes their Platform CSP (e.g., AWS or Azure); and

2.      “Ensure” that they comply with “security standards” like SOC 2 and FedRAMP. Neither of those is a security standard, so I’ll take the liberty of replacing those two names with “ISO 27001”, which definitely is a security standard.

In fact, these two “sub-requirements” are the same. This is because a risk assessment always needs to ascertain how well the subject addresses a certain grouping of risks. In some cases, that grouping is called a standard; in others, it’s called a framework. Let’s say the NERC entity decides to assess the CSP based on ISO 27001. How are they going to do this?

One way for a NERC entity to assess a CSP based on ISO 27001 is to conduct a full audit; of course, the audit would need to (at least in principle) cover all the CSP’s data centers and systems. Is it likely that AWS or Azure would allow every NERC CIP customer to do this on their own, or that those customers, no matter how large, would have the resources to conduct this audit? Of course not.

The only realistic way for a NERC entity to perform a risk assessment of a CSP, based on ISO 27001 or any other true security standard, is to review the audit report and identify risks revealed in the report. For example, if the report noted a weakness in the CSP’s interactive remote access system, that would be one risk for the entity to make note of.

Since I believe CSPs will usually let customers see their cybersecurity audit reports, this would be a good way for NERC entities to assess their CSPs. However, given that there are only a small number of platform CSPs, why should each customer of “CSP A” have to request the same audit report, review it, and presumably identify a similar set of risks? Instead, why not have NERC itself – or perhaps a third party acting on NERC’s behalf – perform their own assessment of the CSP, then share the results with every NERC entity that utilizes the CSP’s services?

A word from our sponsor: To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please donate here. Any amount is welcome.

Of course, NERC wouldn’t be acting as a gatekeeper, determining whether the CSP is secure enough to merit designation as a “NERC authorized cloud provider” for entities subject to CIP compliance. Instead, it would be performing a service on behalf of many separate NERC entities. More importantly, since the CSP will know that they only need to be assessed once rather than once for every NERC CIP customer they have, they may be more open to having the assessors go beyond just an examination of the audit report.

That is, the CSP may be willing to have NERC ask them questions that are relevant to cloud providers, but are most likely not included in ISO 27001. For example, the Capital One breach in 2019 was due in part to the fact that many customers of one of the major platform CSPs had all made the same mistake when securing their environments in that CSP’s cloud. One of the CSP’s technical staff members, who had been terminated by the CSP, took revenge by breaking into – according to boasts she posted online – over 30 customers who had made the same mistake.

Of course, the fact that so many customers had made the same mistake should be taken as evidence that the CSP needed to beef up their cloud security training for their customers. Thus, one question that the NERC assessors could ask is what security training is provided to all customers at no additional cost, rather than simply being available for a fee. This question is almost certainly not included in an ISO 27001 audit.

Thus, I’m proposing that, in the new “cloud CIP” standard(s) that will be developed, NERC should be tasked with assessing cloud service providers in two ways: by reviewing their ISO 27001 audit report and by asking them questions that are most likely not asked in a normal assessment based on ISO 27001 (the current SDT should start thinking about what these questions should be).

NERC will review the audit report and the CSP’s answers to the cloud-specific questions, to identify risks that apply to this CSP; they will then pass those results to NERC entities that utilize the CSP’s services. NERC will not make any judgment on whether NERC entities can utilize the CSP’s services, or on measures that a NERC entity should take to mitigate the identified risks.

Of course, my suggestions above suffer from one little problem: NERC’s current Rules of Procedure (RoP) would never allow NERC (or even a third party engaged by NERC) to assess a CSP and share the assessment results with NERC entities. As I stated in the post I referred to earlier, I believe that accommodating use of the cloud by all NERC entities that wish to do so will require changes to the RoP – even though doing so may require an additional 1-2 years, beyond what just redrafting the CIP standards would require. This is just one example of that.

If you have comments on this post, please email me at tom@tomalrich.com. And don’t forget to donate!

Wednesday, April 30, 2025

The version range snafu


It’s no exaggeration to say that the CVE Program’s recent near-death experience has set off a flurry of activity in planning for the future of vulnerability identification programs (like CVE) and vulnerability databases (like the NVD, as well as many others). In this recent post, I described three different approaches that different groups are taking toward this goal today. Of course, none of those approaches is better than the others; they’re all necessary.

The approach I prefer to take – partly because I don’t see anyone else taking it now – is to focus on improvements to the CVE Program that can be made by next March, when the MITRE contract to run the program will come up for renewal again. The OWASP Vulnerability Database Working Group, which I lead, is taking this approach. Instead of focusing on what’s best for the long term (which is what the broader OWASP group is doing), we’re focusing on specific improvements to the program that can be realized by next March, all of which are necessary and many of which have been discussed for a long time.

Perhaps the most important of those improvements is version ranges in software identifiers. Software vulnerabilities are seldom found in a single version of a product. Instead, a vulnerability first appears in version A and continues to be present in all versions up to version B. The vulnerability is often first identified in version B; the investigators then realize it has been present in the product since version A, and they identify the entire range A to B as vulnerable.

For this reason, many CVE Records identify a range of versions, rather than just a single version or multiple distinct versions, as vulnerable to the CVE; however, this identification is made in the text of the record, not in the machine-readable CPE identifier that may or may not be included in the record.

This omission isn’t the fault of CPE, since CPE provides the capability to identify a version range as vulnerable, not just a single version. However, this capability is not used very often, for the simple reason that there is little or no tooling available that allows end users to take advantage of a version range included in a CPE name. The same goes for the purl identifier, which is widely used in open source vulnerability databases. Even though purl in theory supports the VERS specification of version ranges, in practice it is seldom used, due to the lack of end user tooling.
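To make the idea concrete, here is a minimal sketch of checking a version against a VERS-style range string. This is my own illustration, not official VERS tooling; the real specification defines a much fuller grammar (versioning schemes, exclusions, etc.), and the simple numeric parsing below is an assumption for the example.

```python
import operator

# Comparison operators a VERS-style constraint might use
OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
       "<": operator.lt, "=": operator.eq}

def parse_version(v):
    # Simplifying assumption: purely numeric dotted versions, e.g. "2.5" -> (2, 5)
    return tuple(int(part) for part in v.split("."))

def in_range(version, vers_string):
    # e.g. vers_string = "vers:pypi/>=2.2|<=3.4" (illustrative, not spec-exact)
    _scheme, _, constraints = vers_string.partition(":")[2].partition("/")
    v = parse_version(version)
    for c in constraints.split("|"):
        for symbol in (">=", "<=", ">", "<", "="):  # check 2-char ops first
            if c.startswith(symbol):
                if not OPS[symbol](v, parse_version(c[len(symbol):])):
                    return False
                break
    return True

print(in_range("2.5", "vers:pypi/>=2.2|<=3.4"))   # True
print(in_range("3.5", "vers:pypi/>=2.2|<=3.4"))   # False
```

Note that even this toy checker only works because it silently assumes one ordering rule (numeric dotted fields), which is exactly the gap discussed below.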

Why is there very little (or even no) end user tooling that can take advantage of version ranges in software identifiers found in vulnerability records? I learned the answer to that question when I asked vulnerability management professionals what advantage having such a capability in their tools would provide to end users (i.e., what the use case is for machine-readable descriptions of version ranges).

When I have asked this question, few if any of these professionals have even been able to describe what that advantage would be. It seems clear to me that, if few people can even articulate why a particular capability is required, tool developers are unlikely to try to include that capability in their products.

However, I can at least articulate how an end user organization could utilize version ranges included in a vulnerability notification like a CVE record: They will use it when a) a vulnerability has been identified in a range of versions of Product ABC, and b) the organization utilizes one or more versions of ABC and wants to know whether the version(s) they use is vulnerable to the CVE described in the notification.

Of course, in many or even most cases, the answer to this question is easily obtained. For example, if the product ABC version range included in the record for CVE-2025-12345 is 2.2 to 3.4 and the organization uses version 2.5, there’s no question that it falls within the range. But how about when the version in question is

1.      Version 2.5a?

2.      Version 3.1.1?

3.      Version 3.41?
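Under a naive numeric-field comparison (a sketch of my own, not any real tool), the three strings above behave quite differently, and only one of the three outcomes is obviously right:

```python
def numeric_tuple(v):
    # Treat dot-separated fields as integers: "3.1.1" -> (3, 1, 1)
    return tuple(int(p) for p in v.split("."))

def check(v, lo="2.2", hi="3.4"):
    try:
        return numeric_tuple(lo) <= numeric_tuple(v) <= numeric_tuple(hi)
    except ValueError:
        return None   # this rule cannot even parse the string

print(check("2.5"))    # True  - the easy case from the text
print(check("2.5a"))   # None  - "5a" is not an integer; a letter rule is needed
print(check("3.1.1"))  # True  - (3, 1, 1) sorts between (2, 2) and (3, 4)
print(check("3.41"))   # False - (3, 41) sorts after (3, 4); is that intended?
```

Whether "3.41" really follows "3.4", or "2.5a" really follows "2.5", depends entirely on the supplier's conventions; no comparison the tool picks on its own can settle it.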

More generally, the question is, “Of all the instances of Product ABC running in our organization, which ones are vulnerable to CVE-2025-12345?” Ideally, an automated tool would a) interpret a version range described in a CPE found in the CVE record, b) compare that interpretation with every instance of ABC found on the organization’s network, and c) quickly determine which instances are vulnerable and which are not.

How can the inherently ambiguous position of the three version strings listed above be resolved? The supplier of the product needs to follow a specific “ordering rule” when they assign version numbers to products; moreover, they need to inform their customers – as well as other organizations that need to know this – what that rule is. The portion of the rule that applies to each of the above strings might be

1.      “A version string that includes a number, but not a letter, precedes a string that includes the same number but includes a letter as well.”

2.      “The value of the first two digits in the version string determines whether that string precedes or follows any other string(s).”

3.      “The precedence of the version string is always determined by the value of the string itself.”

Of course, for an end user tool to properly interpret each version range, it would need access to the supplier’s ordering rule. If these were sufficiently standardized, rather than always being custom created, it might be possible to create a tool that would always properly interpret a version range.[i] However, they are not standardized now.
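What a standardized ordering rule could enable can be sketched as follows (again my own illustration): if each supplier published its rule in a standard, machine-loadable form, a generic tool could apply the same range check to any product. Here the rule is modeled as a key function, and the specific rule shown is a hypothetical one for a single supplier.

```python
import re

def example_rule(v):
    # Hypothetical rule for one supplier: dot-separated numeric fields,
    # with an optional trailing letter that sorts after the bare number
    # (so "2.5" < "2.5a", per this supplier's stated convention).
    key = []
    for field in v.split("."):
        m = re.fullmatch(r"(\d+)([a-z]?)", field)
        key.append((int(m.group(1)), m.group(2)))
    return key

def in_range(version, lo, hi, rule):
    # Generic range check: the supplier's rule supplies the ordering
    return rule(lo) <= rule(version) <= rule(hi)

print(in_range("2.5a", "2.2", "3.4", example_rule))   # True
print(in_range("3.41", "2.2", "3.4", example_rule))   # False
```

The point of the sketch is the separation: the range check is generic, while everything supplier-specific lives in the (standardized, published) rule.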

This means that the developer of an end user tool that can answer the question whether a particular version falls within a range will need to coordinate with the supplier of every product that might be scanned or otherwise addressed by their tool, to make sure they have the most recent version of their ordering rule; and they’ll have to receive every updated version of that rule. Doing this would be a nightmare and is therefore not likely to happen.

This would be much less of a nightmare if the ordering rules were standardized, along with the process by which they’re created and updated by suppliers, as well as utilized by end users and their service providers. However, that will require a lot of work and coordination. It’s not likely to happen very soon.

Ironically, all the progress that has been made in version range specification has been on the supplier side. A lot of work has gone into making sure that CPEs and purls (and other products like SBOM formats) are able to specify version ranges in a manner that is easily understandable by human users. However, that progress is mostly for naught, given that the required tooling on the end user side is probably years away, due to the current lack of standards for creating and utilizing ordering rules.

Unfortunately, I have to say it’s probably wasted effort to spend much more time today on specifying version ranges on the supplier end. The best way to get version ranges moving is probably to get a group together to develop specs for ordering rules.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] If you are a fan of “semantic versioning” – a versioning scheme often used in open source projects - you might think that “ordering rules” are a primitive workaround. After all, if all software suppliers followed semantic versioning, they would all in effect be following the same ordering rule. However, commercial suppliers often decide that semantic versioning is too restrictive, since it only allows a fixed number of versions between two endpoint versions.

Often, a commercial supplier will want to identify patched versions, upgrade versions, or even build numbers as separate versions. Semantic versioning provides three fields - X, Y and Z - in the version string “X.Y.Z”; moreover, the three fields have different meanings (major, minor and patch respectively), so one field can’t “overflow” into its neighbor. While open source projects may not find these three fields to be too limiting, commercial suppliers sometimes want more than three fields.
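The footnote’s point about a shared rule can be sketched in a few lines: under semantic versioning, one comparator works for every supplier. (Simplified; the full SemVer specification also defines pre-release precedence, e.g. 1.0.0-alpha < 1.0.0, which is omitted here.)

```python
def semver_key(v):
    # SemVer precedence (simplified): compare major, minor, patch
    # numerically, left to right
    major, minor, patch = (int(p) for p in v.split("."))
    return (major, minor, patch)

# Numeric field order, not string order: "1.10.0" is later than "1.9.3"
assert semver_key("1.10.0") > semver_key("1.9.3")
```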

Tuesday, April 29, 2025

I need your help!

Since I started this blog in 2013, I’ve never asked for donations to support my work. However, because of a recent financial change, I’m now doing exactly that. I’m not looking for large donations - just a lot of smaller ones! But large or small, all donations are welcome. Please read this and consider donating. 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


Saturday, April 26, 2025

NERC CIP: We’re as far from the cloud as ever

This past Wednesday, the NERC “Risk Management for Third-Party Cloud Services” Standards Drafting Team (SDT) emailed a document to the “Plus List”, which seems to be a starting point for discussions of a new CIP-016 standard to address the problems with use of the cloud by NERC entities.

While I admit I have not been able to attend any of the SDT meetings for months, and while I also appreciate that the team[i] is anxious to create something – something! – that moves them forward, I regret to say I don’t think this document moves them forward at all. Here are the main reasons why I say that.

The primary problem is that the draft standard is written like a NIST framework. That is, it seems to assume that the NERC auditors will audit its requirements in the same way that a federal agency audits itself for compliance with NIST 800-53. For example, control AC-1 in 800-53 reads:

The organization:

a. Develops, documents, and disseminates to [Assignment: organization-defined personnel or roles]:

1. An access control policy that addresses purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance; and

2. Procedures to facilitate the implementation of the access control policy and associated access controls; and

b. Reviews and updates the current: 1. Access control policy [Assignment: organization-defined frequency]; and 2. Access control procedures [Assignment: organization-defined frequency].

This requirement assumes that:

       i.          It is generally clear to both the auditor and auditee what an access control policy should contain. More specifically, it is clear to both parties what “purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance” should be addressed by the policy.

      ii.          Both auditor and auditee generally understand what procedures are required to “facilitate the implementation of the access control policy and associated access controls.”

     iii.          Auditor and auditee generally agree on what constitutes an adequate “review and update” of access control policies and procedures. For example, the auditor isn’t expecting the auditee to rewrite the policy from the ground up, and the auditee isn’t expecting to get away with skimming through the policy and just giving it their stamp of approval.

As far as most federal government agencies are concerned, the above three assumptions may well be valid. However, I strongly doubt they’re valid for NERC entities, who usually take the “Trust in Allah, but tie your camel” approach to dealing with auditors. Specifically, I know that one reason some of the NERC CIP requirements are very prescriptive is that NERC entities are afraid of requirements that give the auditors leeway in determining what a requirement means. Moreover, the auditors often share this fear, since they don’t want to be blamed for misinterpreting CIP requirements. Therefore, they usually want CIP requirements that constrain them enough that there can’t be much dispute over how a requirement should be interpreted.

However, while keeping in mind that this document is just a discussion draft and will never get beyond that stage, it’s worth noting how many auditing controversies it would likely cause if it were to become a standard. Here are three examples:

1. There are many statements that are clearly open to big differences in interpretation. For example, Section 2.2 Scope reads, “CIP-016 applies to any systems, applications, or data stored, processed, or transmitted in cloud environments or hybrid cloud infrastructures. Systems that remain fully on-premise are not subject to this standard.”

Don’t “systems that remain fully on-premise(s)” often use “applications, or data stored, processed, or transmitted in cloud environments or hybrid cloud infrastructures”? If that use isn’t subject to the standard, then what is? Yet, by saying that on-prem systems aren’t subject to complying with CIP-016, it sounds like they’re immune to threats that come through use of the cloud.

Is it really true that only systems that are themselves located in the cloud (which today includes 0 high or medium impact systems) are affected by cloud-based threats? If so, that seems like a great argument for permanently prohibiting BES Cyber Systems, EACMS and PACS from being located in the cloud. Of course, that’s exactly the situation we have today. Why bother with changing the standards at all, since today they effectively prohibit use of the cloud by entities with high and medium impact systems?

2. The draft standard relies heavily on 20-25 new terms, each of which would have to be debated and voted on - then approved by FERC - before the standard could be enforced. I remember the heated debates over the relatively small number of new terms introduced with CIP version 5, especially the words “programmable” in “Cyber Asset” and “routable” in “External Routable Connectivity”. The debates over these two words were probably more heated than the debates over all the v5 requirements put together. Moreover, the debates over those two words literally went on for years; they were never resolved with a new definition.

The lesson of that experience is that it doesn’t help to “clarify” a requirement by introducing new Glossary terms, unless those terms are already widely understood. This is especially the case when a new Glossary term itself introduces new terms. For example, the undefined new term “Cloud Perimeter Control Plane” in the draft CIP-016 includes another undefined new term, “virtual security perimeter”. Both terms will need to be debated and balloted multiple times, should they be included in an actual draft of CIP-016.[ii]

3. One interesting requirement is R12, which is described as “CIP-013 equivalent”. It reads:

The Responsible Entity shall perform risk assessments of cloud providers and ensure that supply chain risks, including third-party vendors and subcontractors, are mitigated. This includes ensuring that all cloud providers comply with relevant security standards (e.g., SOC 2, FedRAMP).

My first reaction is that this is going to require the CSP to have a huge amount of involvement with each Responsible Entity customer. This includes:

       i.          Sharing information on their vendors and subcontractors, so the RE can “ensure” (a dangerous word to include in a mandatory requirement!) that those risks have been “mitigated”. How will the RE do this? Surely not by auditing each of the CSP’s thousands of vendors and subcontractors!

      ii.          Providing the RE with enough information that they can “ensure” the CSP complies with relevant security standards. Of course, the CSP should already have evidence of “compliance” with SOC2 and FedRAMP – although neither of those is a standard subject to compliance (a better example would be ISO 27001).

     iii.          However, the words “all cloud providers” will normally include more than the platform CSP (e.g., AWS or Azure). They also include any entity that provides services in the cloud – for example, SaaS providers, security service providers, etc. Is the Responsible Entity really going to have to ensure that each of these cloud providers “complies” with SOC 2 and FedRAMP, to say nothing of other “relevant security standards?”

Of course, this document is just meant to be the start of a discussion, so it would be unfair to treat it as if it were a draft of a proposed new standard. However, I think there is one overarching lesson to be taken away from this (which I have pointed out multiple times before): Any attempt to address the cloud in one or more NERC CIP standards is inevitably going to require changes to how the standards are audited. These changes will in turn require changes to the NERC Rules of Procedure and especially CMEP (the Compliance Monitoring and Enforcement Program).

Because of this, any draft of a new CIP standard(s) to address use of the cloud needs to include a discussion of what changes to the Rules of Procedure (RoP) and CMEP are required for the new requirements to be auditable. The primary RoP change that will be needed – and it has been needed for years – is a description of how risk-based requirements can be audited[iii]. There is no way that non-risk-based CIP requirements will ever work in the cloud.

Moreover, the process of making the RoP changes needs to get underway as soon as possible after the new standard(s) is drafted. RoP changes rarely happen, but it’s likely these changes will take at least a couple of years by themselves. Since I’m already saying that the CIP changes alone won’t come into effect before 2031 and since it’s possible the RoP changes will not start until the CIP changes have been approved by FERC, this means it might be 2032 or even 2033 before the entire package of both CIP and RoP changes is in place. Wouldn’t that be depressing?

It certainly would be depressing, but I’ll point out that it’s not likely the NERC CIP community will need to wait until 2033, 2032, or even 2031 for new “cloud CIP” standards to be in place. It’s possible they’ll come sooner than that, mainly because NERC could be forced to take a shortcut. There’s at least one “In case of fire, break glass” provision in the RoP, which allows NERC – at the direction of the Board of Trustees – to accelerate the standards development process, in the case where the lack of a standard threatens to damage BES reliability itself.

Needless to say, this provision has never been invoked (at least not regarding the CIP standards). However, the time when it’s needed may be fast approaching. See this post. 

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you are involved with NERC CIP compliance and would like to discuss issues related to “cloud CIP”, please email me at tom@tomalrich.com.


[i] The email made it clear that this is primarily the product of one team member, so it has no official status.

[ii] Of course, there’s no assurance now that the new “cloud CIP” standards will include a CIP-016 that looks anything like this one.

[iii] NERC’s term for this is “objectives-based”. They are basically equivalent.

Friday, April 25, 2025

Maybe it’s not so bad after all


Earlier this week, I wrote a post pointing to the strong likelihood that the MITRE contract to run the CVE Program will not be renewed next March (even though it was renewed last week, despite an initial announcement that it might not be). I called for planning to start now on what could replace it, on the assumption that no group was doing that already. However, it turns out I was wrong.

A friend emailed me yesterday to ask if I knew about the CVE Foundation. The initial news reports about the contract being terminated pointed to this group as one that said they would be able to take over if the termination actually happened. However, since the reports only named one individual (the reports said there were other people involved, but they weren’t ready to share their names), I didn’t know how much credence to put in their assertion.

It turns out that I should have kept following the story. Yesterday, the same friend pointed me to the list of names now found in the FAQ. Some of the most important members of the CVE Program are shown as participants (the three leading private industry representatives on the CVE Board are listed as officers of the corporation); this is clearly a serious organization.

In addition, my friend said the group has good reason to believe that, should the MITRE contract not be renewed next March, they will have the funding needed to run the program (including MITRE’s part in it, of course) on their own.

Of course, this is good news. I’ll also add that there are still a lot of questions that should be answered to make the CVE Program better. However, in asking and answering those questions, we at least won’t have to worry about the CVE Program disappearing beneath our feet. 

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Thursday, April 24, 2025

Meanwhile, back at the NVD

While the big news in the vulnerability management world last week was the near death of the CVE Program, this temporarily overshadowed the ongoing saga of the National Vulnerability Database (NVD). Since February 12, 2024, the NVD has stopped reliably performing one of its most important functions: adding CPE names (machine readable software identifiers) to new CVE (vulnerability) records. For a discussion of why having a CPE name with every CVE Record is so important, see this post.
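For readers unfamiliar with the format: a CPE 2.3 name is a colon-delimited string with 13 fixed fields. Here is a minimal sketch in Python; the vendor and product shown are made up for illustration, and this naive split ignores the spec’s escaping rules for colons inside field values:

```python
# The 13 fields of a CPE 2.3 name, in order (simplified labels)
CPE_FIELDS = [
    "prefix", "cpe_version", "part", "vendor", "product", "version",
    "update", "edition", "language", "sw_edition",
    "target_sw", "target_hw", "other",
]

def parse_cpe(name: str) -> dict:
    """Naively split a CPE 2.3 name into named fields."""
    return dict(zip(CPE_FIELDS, name.split(":")))

# Hypothetical product: version 2.3.1 of "examplelib" from vendor "acme"
cpe = parse_cpe("cpe:2.3:a:acme:examplelib:2.3.1:*:*:*:*:*:*:*")
print(cpe["vendor"], cpe["product"], cpe["version"])  # acme examplelib 2.3.1
```

The point of the exercise: a CVE Record without one of these strings attached cannot be matched automatically against a software inventory, which is why the NVD’s enrichment backlog matters so much.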

At the end of December, in the post I just linked, I estimated that the NVD’s backlog of CVE records without CPE names was around 22,000, or 55% of the approximately 40,000 new CVE Records created in 2024. In my most recent post on the NVD’s problems, written on March 19, I admitted I couldn’t estimate the backlog, although I noted that the “vulnerability historian” Brian Martin thought the NVD had stopped creating new CPE names altogether.

Brian has kept following the NVD (which he says has “returned”). Last week, he put up this post on LinkedIn. It illustrates how the NVD has been doing its best to disguise the huge backlog of “unenriched” CVE Records (i.e., those that have not had CPE names and CVSS scores added to them – both of which are NVD functions). Without going into details, Brian said the backlog of unenriched CVEs (since early 2024) was now 33,699. So, far from making progress getting rid of the backlog in 2025, the NVD has dug the hole deeper.

Of course, the backlog number is more meaningful when expressed as a percentage of new CVE Records published since early 2024. Since the full-year 2024 number of new records was about 40,000 and we recently finished the first quarter of 2025, I estimate there have been 50,000 new CVEs published since early 2024. This means the 33,699-record backlog constitutes 67% of the new CVE Records published since the NVD’s problems started last February 12.

In other words, the backlog as a percentage of new CVE records has grown by 12 percentage points. This obviously discredits the NVD’s preferred excuse for their problems: that the volume of new CVE records has jumped and they’re struggling to keep up with it. That might explain the growth in the backlog itself, but it doesn’t explain a significant increase (in just three months!) in the percentage of CVE records that are unenriched (i.e., are in the backlog).
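The arithmetic behind these percentages is easy to check; here is a short sketch using the estimates quoted above:

```python
# Figures quoted in this post (all approximate except Brian Martin's count)
new_cves_2024 = 40_000            # new CVE Records created in 2024
backlog_dec_2024 = 22_000         # unenriched records as of late December 2024
new_cves_since_feb_2024 = 50_000  # estimated new records through Q1 2025
backlog_apr_2025 = 33_699         # Brian Martin's April 2025 figure

pct_dec = round(100 * backlog_dec_2024 / new_cves_2024)
pct_apr = round(100 * backlog_apr_2025 / new_cves_since_feb_2024)

print(f"December 2024 backlog: {pct_dec}%")              # 55%
print(f"April 2025 backlog: {pct_apr}%")                 # 67%
print(f"Growth: {pct_apr - pct_dec} percentage points")  # 12
```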

So what’s the NVD’s plan for finally eliminating this backlog? The last time they said anything about this was March 19, when they commented on their website:

We are currently processing incoming CVEs at roughly the rate we had sustained prior to the processing slowdown in spring and early summer of 2024. However, CVE submissions increased 32 percent in 2024, and that prior processing rate is no longer sufficient to keep up with incoming submissions. As a result, the backlog is still growing.

We anticipate that the rate of submissions will continue to increase in 2025. The fact that vulnerabilities are increasing means that the NVD is more important than ever in protecting our nation’s infrastructure. However, it also points to increasing challenges ahead.

To address these challenges, we are working to increase efficiency by improving our internal processes, and we are exploring the use of machine learning to automate certain processing tasks.

There’s one phrase in this statement that I strongly agree with: “the NVD is more important than ever in protecting our nation’s infrastructure.” That’s why this whole debacle is so appalling.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, April 23, 2025

March 2026

There’s widespread agreement within the vulnerability management community that the CVE Program’s near-death experience last week is a wake-up call to do something, although there isn’t agreement on what “something” is. I now see three possible courses of action. They are all valid, but they differ according to the time horizon involved:

1. Some people are focused on making sure the CVE Program, as currently constituted, can survive the many attacks that are likely to be aimed at it in the next year.  While there might be some tweaks made, the point is to keep the program running, not to make substantial changes. I support the survival of the current CVE Program, and if survival is the best that can be hoped for (although hopefully with the CVE Record Format amended to permit purl identifiers, along with CPEs), I can’t argue with that outcome.

2. Other people – notably the OWASP Board – are taking exactly the opposite approach: they point out that software vulnerabilities as currently conceived form just one portion of “the demands of a rapidly evolving global threat landscape.” They’re calling for redesigning the CVE Program as a federated model that will address new threats like weak cryptography and AI weaknesses. My idea for the Global Vulnerability Database is very similar to this, especially in its call for a federated approach. I will definitely participate in this effort, since it is what is needed in the long run. However, there’s no doubt this will require a years-long effort.

3. A third group of people, which includes me, is more focused on March 2026. That’s when the MITRE contract will need to be renewed again. We all hope the contract will be automatically renewed, but after the events last week we would be fools to assume that will happen. Instead, we need to assume the contract won’t be renewed. This means that in March 2026, we will need to have an alternative to the CVE Program specified and ready to implement.

I think there are two realistic alternatives to the current CVE Program:

       i.          A program that adheres as closely as possible to the current CVE Program, warts and all.

      ii.          A program that follows the outline of the current program but incorporates changes that have been discussed and planned beforehand. In other words, rather than just implementing a CVE program much like the one we have in place now, we should plan ahead for March 2026, so that we implement improvements that can’t be made while the current program is up and running. Doing that isn’t as satisfying as redesigning the program from the ground up, but it can at least be accomplished by next March.

Therefore, I’m suggesting that the vulnerability management community start discussing “intermediate term” questions like the following:

a.      Should the organization that identifies vulnerabilities (e.g., the CVE Program) be separate from the vulnerability database (e.g., the NVD), as is the case now? I should point out that this separation is unusual in the vulnerability database world, since most other vulnerability databases (besides the ones that are modeled on the NVD) are focused on open source software and curate their own vulnerabilities. On the other hand, none of these other databases comes close to approximating the scale of the CVE Program and the NVD.

b.      The CVE Program is there to serve end user organizations, but with 290,000 CVEs today, the only effective way to do that is through automation of the end user’s VM processes. How can the CVE Program focus on end-to-end automated vulnerability management for user organizations? Machine readable vulnerability notifications, prepared by or on behalf of suppliers today, can specify individual versions or version ranges. However, can today’s end user vulnerability management systems utilize version range notifications in an automated manner? If so, what goal will they accomplish by doing so?  

c.      Since the user organization usually has the choice of whether to apply a patch, patches can’t be represented the same as actual versions. On the other hand, vulnerability records need to be able to represent which previous patches have been applied. How can these contradictory goals both be accomplished in a machine readable fashion?

d.      Can vulnerability identifiers, such as CVEs and GitHub Security Advisories (GHSA) be “harmonized”?

e.      Can software identifiers, including CPE and purl, be harmonized?

f.       If neither vulnerability nor software identifiers can be harmonized, how can vulnerability databases be “federated”? That is, how can vulnerability databases that use different identifiers respond jointly to a single query? This is a fundamental question that needs to be answered before the Global Vulnerability Database can be designed.[i]
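To make question f more concrete, here is a toy sketch of one way a federation could paper over differing identifiers: an alias table that maps each known identifier to a shared internal record, so a query by either a CVE ID or a GHSA ID finds the same entry. The identifiers and the record below are invented for illustration; a real federation would obviously need far more than a static table.

```python
# Map every known identifier (whatever its scheme) to one internal record key
aliases = {
    "CVE-2024-00001": "vuln-1",        # hypothetical CVE identifier
    "GHSA-xxxx-yyyy-zzzz": "vuln-1",   # hypothetical GHSA identifier
}
records = {"vuln-1": {"summary": "example vulnerability"}}

def lookup(identifier):
    """Resolve any known alias to the shared underlying record."""
    return records.get(aliases.get(identifier))

# A query by either identifier finds the same record
print(lookup("CVE-2024-00001") == lookup("GHSA-xxxx-yyyy-zzzz"))  # True
```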

There are also important questions that need to be asked about intelligent devices.

g.      How can devices be identified in vulnerability databases? There are many problems with CPE identifiers for devices, besides the well-documented problem that since last February, CPE names have not been produced by the NVD in anywhere near the volume they should be; this means that most devices identified as vulnerable in a CVE Record created after February 2024 will be invisible to an automated search today.

h.      The SBOM Forum’s 2022 white paper on software identification in the NVD suggested (on pages 12 and 13) that two of the standards from the GS1 family, GTIN and GMN, could be utilized as device identifiers, since they are already widely used in international trade. There are other options as well, but there needs to be discussion of this question, especially since the US Federal Communications Commission (FCC) is now at least proposing to implement a “device cybersecurity labeling program” for IoT devices. It’s hard to discuss cybersecurity for IoT without being able to learn about software and firmware vulnerabilities that apply to a device.

i.       How should vulnerabilities be reported for intelligent devices? Should they be reported using the identifiers for the individual software and firmware products installed in the device or using the identifier for the device itself? The latter option makes it much easier for users to learn about vulnerabilities that affect devices they rely on, since the user doesn’t need to have an up-to-date software bill of materials (SBOM) for each device they operate. This is the option that some big device manufacturers like Cisco and Schneider Electric have chosen to follow. However, it seems the great majority of intelligent device makers, including medical device makers, don’t report vulnerabilities to the CVE Program at all, making their devices invisible to NVD users.[ii]

j.       Speaking of intelligent devices, there’s a fundamental contradiction at the heart of patching them. In most cases, a device user is not able to apply a patch for a single vulnerability or subset of vulnerabilities; instead, they need to wait for the next full device update from the manufacturer.

k.      Because of this, the device manufacturer might delay notification of a vulnerability if the next full update is not imminent, even though they have developed the patch for it. However, delaying the notification will leave users unaware that their device is affected by the vulnerability. This means they are unlikely to apply other mitigations, like removing the device from their network or isolating it on its own segment. This problem almost certainly doesn’t have a simple answer, but it at least needs to be brought into the open, so that users can be aware of it.

My point in this post is that, while the fundamentals of vulnerabilities and vulnerability databases need to be rethought in the long run, there’s also a need to consider intermediate-term questions like the ones described above. As many of these questions as possible need to be answered by March 2026, since it’s quite possible that the vulnerability management community will find itself in a real crisis then (not just a 24-hour one). It would be good to be able to implement some solid changes to the current CVE Program, even though they’re not the ones we would implement if we were given 1-3 more years to do so.

Would you like to participate in these discussions? The OWASP SBOM Forum sponsors a Vulnerability Database Working Group that meets every other Tuesday at 11AM Eastern Time (April 29 is the next meeting); this group discusses intermediate-term questions like these. And the SBOM Forum itself meets on Fridays at 1PM ET (May 2 is the next meeting). That group discusses lots of ideas, including long-term ones. Drop me an email if you would like an invitation.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] I am sure this question can be answered. AI will probably play a large role in that answer.

[ii] CISA maintains a database of vulnerabilities for industrial devices and medical devices. However, since the vulnerability identifiers are not CVEs, a user of the NVD or another CVE-based database will not usually learn of those vulnerabilities, unless the manufacturer has also submitted the vulnerability to the CVE Program.

Sunday, April 20, 2025

Preparing for the Global Vulnerability Database

The near-death experience of the CVE Program (aka “MITRE”) last week was a huge wakeup call for the international vulnerability management ecosystem; this is because what was at stake with CVE.org’s problems was far worse than what’s at stake with the National Vulnerability Database (NVD)’s problems in the past year. The NVD is the biggest user of data from the CVE Program, but the consequences of losing CVE itself would be far larger.

What is most interesting is that just about everyone seems to be drawing the same two conclusions from this episode:

First, the current US-centric CVE/NVD vulnerability management program needs to be replaced with a truly international program, in which no one country or government plays a predominant role; and

Second, there is no point in even talking about creating a single huge database that includes all vulnerabilities of all types and all software products of all types. While that idea has a lot of intellectual appeal, it requires there be a single vulnerability identifier that can encompass all vulnerabilities, as well as a single software identifier that can encompass all types of software. It will be a long time – if ever – before either of those two dreams is realized.

This is why the ideas that are now being discussed for a new vulnerability management program to replace CVE.org and the NVD all focus on the idea of a federation of existing databases, linked by an intelligent querying infrastructure. This would have been hard to put together even ten years ago, but today – especially given the prevalence of AI – it doesn’t sound hard at all.

The idea of federation is especially appealing when you consider the huge cost of trying to unify all the world’s existing vulnerability databases into one uber database and, even worse, the cost of updating all that information in real time. It’s much better to let the staff members of each individual database continue to update their database using the sources and methods they’ve built up over the years; the federated structure will do its best to make it possible to query across all those databases, and receive as unified a response as is possible in this less-than-ideal world in which we live.
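As a rough illustration of the federated idea, the querying infrastructure might fan a single product query out to several independent databases and merge whatever each one returns. The backend functions below are stand-ins, not real APIs; an actual implementation would also translate the query into each database’s own identifier scheme before dispatching it:

```python
# Stand-in backends: each returns results in its own native identifier scheme
def query_nvd(product):
    return [{"id": "CVE-2024-11111", "source": "NVD"}]       # hypothetical hit

def query_osv(product):
    return [{"id": "GHSA-aaaa-bbbb-cccc", "source": "OSV"}]  # hypothetical hit

def federated_query(product, backends):
    """Fan one query out to every backend and merge the responses."""
    results = []
    for backend in backends:
        results.extend(backend(product))
    return results

hits = federated_query("examplelib", [query_nvd, query_osv])
print(len(hits))  # 2
```

The design choice this sketch reflects is the one argued above: each database keeps curating its own data, and only the query layer is shared.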

So, where do we go from here – i.e., what path do we need to follow to reach the common goal of a global federated vulnerability database, as well as a global federated vulnerability identification program? I saw two proposals at the end of last week.

The first proposal was articulated by Steve Springett, Chair of the OWASP CycloneDX project and Vice Chair of the OWASP Global Board of Directors (he emphasized in a conversation on Friday that the proposal was from the OWASP Board, not just himself). In the document, Steve emphasizes that the threats facing the software community today are quite different from what they were more than two decades ago, when the CVE Program and the NVD were in their infancy.

Steve especially points to vulnerabilities found in open source software, which was also in its infancy two decades ago but now is found (as components) in at least 90% of all software produced today. Steve also points to other relatively new threats, including cryptographic weaknesses and cybersecurity issues with AI. Steve concludes by saying:

We are calling on governments, industry leaders, researchers, and community experts to contribute their voices, expertise, and resources. Together, we can build an alternative model that complements existing efforts, gradually replacing outdated approaches with a federated, community-driven, and international standard.

The future of cybersecurity identification depends on global collaboration. Let’s build it together.

If you want to be kept informed about this initiative, send an email to cve@owasp.org. I will certainly participate in this.

A second proposal is from Olle Johansson of Sweden, who is also an OWASP member and a member of the OWASP SBOM Forum (as is Steve). Olle is suggesting a somewhat different approach: first describe the organization that will be needed to build and manage “a global platform for vulnerability reporting”, including the roles of the different players in the vulnerability management ecosystem: national governments, commercial software suppliers, open source projects, and software end users. He points out that only after we have figured all of this out will we be able to start filling in the technical details of the project. Olle is inviting interested parties to edit and comment on his proposal, which is a Google Doc.

A third proposal is from…me. While I’ve been writing for more than a year about the need for what I call the Global Vulnerability Database (GVD) – my most recent post on this subject is here – I’ve always thought of this as a project that’s waiting for its time to come.

Well, it seems its time has come, so here’s my proposal: While I agree that both Steve’s and Olle’s proposals need to be pursued, I also think we need to start talking about technical issues. I don’t mean the minute issues like whether EPSS scores should be included in CVE Records, but the more general issues that there’s been a lot of talk about, but no resolution.

A perfect example of this is version ranges. Everyone agrees this is an important issue, but nobody agrees on what to do about it. It is a fact that a vulnerability almost always affects a range of versions of a product, not just a single version. The biggest problem with version ranges is that, as far as I know, there are no vulnerability management tools on the end user side that can ingest a version range in a CVE Record and then, for example, go to an asset management database and mark every version in the range as vulnerable.

Could this problem be solved if we made version range, not an individual version, the default in a software identifier like CPE or purl? That might make it easier to write the code on the end user side.
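To make the problem concrete, here is a sketch of what “ingesting a version range” could look like on the end user side: given an affected range from a vulnerability record, mark every matching version in a hypothetical asset inventory as vulnerable. This assumes purely numeric dotted versions, an assumption real-world version schemes frequently violate:

```python
def vtuple(v: str):
    """Turn '2.4.1' into (2, 4, 1) for ordering (numeric versions only)."""
    return tuple(int(x) for x in v.split("."))

def in_range(version: str, introduced: str, fixed: str) -> bool:
    """True if introduced <= version < fixed (a common range convention)."""
    return vtuple(introduced) <= vtuple(version) < vtuple(fixed)

# Hypothetical asset inventory: deployed versions of one product
inventory = ["1.9.0", "2.0.0", "2.3.5", "2.5.0"]

# Affected range from a (made-up) vulnerability record: >=2.0.0, fixed in 2.5.0
vulnerable = [v for v in inventory if in_range(v, "2.0.0", "2.5.0")]
print(vulnerable)  # ['2.0.0', '2.3.5']
```

Even this toy version shows why tooling stalls here: the comparison logic only works if every party agrees on how versions are ordered, which no identifier scheme currently guarantees.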

In any case, this is just one example of issues I would like to see addressed now, even though there are larger issues like Steve’s and Olle’s that need to be addressed as well. Fortunately, I already have a venue where we can have these discussions: the weekly meetings of the OWASP Vulnerability Database Working Group, which is a part of the OWASP SBOM Forum.

The VDWG meets biweekly on Tuesdays at 11AM Eastern Time. If you would like to join our next meeting on April 29 (where we’ll start to discuss the version range question), please email me and I’ll send you the series invitation (if you would like to join the SBOM Forum’s meetings, they’re also biweekly, but at 1PM ET on Fridays. Let me know if you would like that invitation as well). 

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post. 

Wednesday, April 16, 2025

Stay of Execution

Today, CISA – which has been the exclusive funder of the MITRE contract to run the CVE Program – announced that it will renew the contract after all. Thus, it seems we can count on the CVE Program being in place for another year.

However, I don’t need to tell you this is no way to run a railroad. Given the NVD’s problems that started last February and seem to be only getting worse as time goes by – and now given the almost-loss of the CVE Program – it is clear that government-run programs no longer make sense, even though they may have been required in the early days of vulnerability management.

As I mentioned in my post yesterday, the OWASP SBOM Forum, a group that I lead that has been discussing vulnerability database and identification issues since the NVD’s semi-collapse in February 2024, will discuss the way forward on this issue at our regular bi-weekly meeting on Friday at 1PM ET. If you would like to join us, please drop me an email at the address below. 

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com

 

Tuesday, April 15, 2025

CVE circles the drain

When I wrote this post barely more than two weeks ago, it seemed like the sounds of a faraway battle that might eventually start spilling into your region. You need to pay attention to those sounds, but you’ll get a lot more warning before the battle starts to impact you directly. In other words, there’s no need to take your family and flee your home now. The army will protect you from the invaders; after all, look at that huge fort they’ve been building for years – it looks like it could withstand a two-year siege!

However, it seems I was wrong. Not only has the battle reached our region, but the fort was overwhelmed before it could even mount a defense – or before the defenders even knew they were in danger. In fact, it’s too late to even think about fleeing. We just have to stand silently as the victorious attackers parade through our streets and stare scornfully at their vanquished foes.

Perhaps I’m letting my metaphors carry me away a little, but this is without doubt a turning point in the vulnerability management timeline. After all, the first 300 or so CVEs were reported in 1999; last year, the total reached around 275,000. Moreover, the rate at which new CVEs are being identified is growing by leaps and bounds every year. As VulnCon showed two weeks ago, the cybersecurity community is increasingly coming to realize that software vulnerabilities are at the root of almost all the serious cybersecurity threats – e.g., ransomware – that we face. Vulnerabilities will never be eliminated, but they can certainly be managed.

Or so we hope.

Are we lost? After all, MITRE researchers came up with the idea for CVE in 1999 and MITRE has run the program to identify and document new CVEs since then – in fact, the CVE Program and the database it ran used to be called MITRE. Today, both the program and the database are called CVE.org. An independent board, consisting of public and private sector representatives, runs the CVE program. Funding for CVE.org now comes entirely from CISA (or at least it did).

It's hard to think that the CVE Program might stop dead in its tracks, yet when a contract is cancelled, that’s usually what happens. But don’t worry, we’ve been given plenty of notice. The contract expires tomorrow, April 16. We have almost 24 hours to continue to enjoy the fact that MITRE still breathes the same air we do!

But what comes on Thursday? I assume no more new CVE Records will be produced, although the existing CVE Records won’t go away. You’ll still be able to learn about many previously identified CVEs (although the serious problem I discussed in this post remains. In fact, the remedy I prescribed, implementing purl as an alternative identifier in the CVE Program, is even more important now).
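For readers who haven’t seen one, a purl (package URL) identifies a package by its ecosystem rather than by vendor/product strings, in the form pkg:type/name@version. Here is a naive parse that ignores the namespaces, qualifiers, and subpaths the purl spec also defines (a real implementation would use the official packageurl library):

```python
# Example purl for a real PyPI package; the parsing below is deliberately naive
purl = "pkg:pypi/requests@2.31.0"

scheme, rest = purl.split(":", 1)      # "pkg", "pypi/requests@2.31.0"
ptype, name_ver = rest.split("/", 1)   # ecosystem type, then name@version
name, version = name_ver.rsplit("@", 1)

print(ptype, name, version)  # pypi requests 2.31.0
```

Because the purl is derived directly from the package’s own ecosystem coordinates, it can be constructed by anyone without waiting for a central authority to assign it – which is the core of the argument for adding it to CVE Records.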

Also keep in mind that there are other vulnerability types besides CVE, such as GitHub Security Advisories (GHSA) and OSV; they shouldn’t be affected by this at all. On the other hand, the 275,000 vulnerabilities in CVE.org dwarf both of these databases, as well as the other open source security advisory databases that are mostly specific to particular ecosystems like Python. There’s no disguising the fact that the software vulnerability management universe is going to become very tightly constricted two days from today.

Fortunately, there has been ample warning that the current US government-centric system, including the National Vulnerability Database (NVD) and CVE.org, isn’t sustainable. After all, 14 months ago the NVD fell seriously behind in their self-assigned responsibility to produce CPE names and add them to CVE Records, which are of course produced by CVE.org. (CVE is part of DHS, while the NVD is part of NIST, which is part of the Department of Commerce; I recommend you reread the beginning of this post, in which I described the two organizations.) Not only has the NVD not made up the ground it lost, but it continues to lose more ground almost every day.

More than a year ago, I started talking about a Global Vulnerability Database; I have refined the idea, and I summarized it in this post 11 days ago. As you can see, the GVD won’t be a single database. Instead, it will be a federation of existing vulnerability databases (probably including the NVD and CVE.org).

I’m going to stop now; perhaps I’ll write one or two more posts on this topic this week. However, I’ve already made the decision that this Friday’s meeting of the OWASP SBOM Forum (held every other week at 1PM EDT) will be devoted entirely to this topic. In fact, we’ll probably keep doing that for a while – and we might form a separate project just to start discussing – and eventually implementing – the Global Vulnerability Database.

If you aren’t currently a member of the SBOM Forum and would like to join us this Friday and perhaps afterwards, please drop me an email.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.