Sunday, May 4, 2025

“No funding issue” and other fairy stories

On April 23, Jen Easterly, former Director of CISA, put up a post on LinkedIn about the bizarre episode of April 15 and 16. On April 15, a leaked letter to CVE Board members from Yousry Barsoum, VP and Director of MITRE, revealed that the next day “the current contracting pathway for MITRE to develop, operate and modernize CVE and several other related programs…will expire.”

This led to a virtual firestorm in the software security community, since the CVE Program currently has no replacement; shutting it down abruptly would inevitably disrupt software security efforts worldwide. However, the next day, CISA announced, “Last night, CISA executed the option period on the contract to ensure there will be no lapse in critical CVE services.” Thus, it seems the cavalry arrived in time to save the fort.

Ms. Easterly wrote a week later:

Today, CISA's Deputy Executive Assistant Director for Cybersecurity's Matt Hartman released a statement committing to the sustainment and evolution of the CVE program, including "to fostering (sic) inclusivity, active participation, and meaningful collaboration between the private sector and international governments to deliver the requisite stability and innovation to the CVE Program." The statement also clarified that there was no actual funding issue but rather an "administrative issue" that was resolved prior to a contract lapse.

In stating that “there was no actual funding issue”, she obviously intended to give comfort to her readers. After all, “administrative issues” happen all the time and don’t kill whole programs, while funding issues do kill programs. Therefore, she’s saying the worldwide alarm caused by Mr. Barsoum’s letter was misplaced. Move along…nothing to see here.

Unfortunately, this raises the question of why Mr. Barsoum put out his letter and sent it to all the members (20+) of the CVE.org board, if the issue was so trivial. Why didn’t he just pick up the phone, find out what the “administrative issue” was and get it fixed? It also ignores two facts: a) many federal government programs have recently been cancelled overnight, with no advance warning at all; and b) CISA is known to be in the process of letting a number of employees go, which almost always means some programs will be sacrificed as well.

In other words, Mr. Hartman’s assertion, and Ms. Easterly’s repetition of it, missed the main lesson of this whole sorry affair. To provide some background, the CVE Program was started from nothing in 1999. From the beginning, it was run by MITRE (in fact the idea for CVE first appeared in a paper written by two MITRE staff members that year), although it wasn’t called the “CVE Program” then. Since MITRE was already a US government contractor, it made sense for the government to engage MITRE to run the program. It can truly be said that the CVE Program might not exist at all today, were it not funded by the US government.

However, things change. Today, both governments and private industry worldwide are concerned about software vulnerabilities and rely on the CVE Program to faithfully identify and catalog those vulnerabilities. Given the worldwide use of CVE data, there is no reason why the US government should remain the sole funder of the program.

Yet that is exactly what Ms. Easterly advocates in the remainder of her post. She says, “Some parts of cybersecurity can and should be commercialized. Some should be supported by nonprofits. But vulnerability enumeration, the foundation of shared situational awareness, should be treated as a public good. This effort should be funded by the government and governed by independent stakeholders who are a balanced representation of the ecosystem, with government and industry members. CISA leading this effort as a public-private partnership assures the program is operated in service of the public interest.”

In other words, she thinks the private sector shouldn’t be funding the CVE Program, since it’s a public good that should only be funded by the public – i.e., the government (and CISA in particular). That would be wonderful if we lived in a world where the government was always willing to fund cybersecurity initiatives and always stood behind its commitments. However, the CVE Program was almost shut down because – and I’m not going too far out on a limb in saying this – somebody who has no idea what it is decided it was a good candidate for defunding. To my mind, that is prima facie evidence that its entire funding should not come from the US or any other government.

To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome, but I will treat any donation of $25 or more as an annual subscription fee. Thanks!

But let’s suppose Mr. Hartman was correct in asserting there was no “funding issue”. In my (reasoned) opinion, that makes the case against exclusive government funding even stronger. Mr. Barsoum was clearly concerned that the CVE Program would be shut down, which strongly implies he knew the reason that might happen. If the reason was simply an administrative error – e.g., somebody forgot to check a box on some form – this means we’ll need to start worrying not only about funding cutoffs to the program, but about any administrative error that anybody at CISA, DHS, etc. might make. Does that give you a warm and fuzzy feeling?

I’m sorry, Ms. Easterly, but the CVE Program needs to be moved away from the federal government, although I hope the feds will still provide some of its funding. This doesn’t have to happen tomorrow, but it should at least be done when the contract has to be renewed next March; this is especially important since it’s quite likely the contract won’t be renewed then, anyway. If the software security community gets caught flat-footed again next year, we will have nobody to blame but ourselves. Tragedy repeated is farce.

Fortunately, the cavalry is already onsite and is planning for that eventuality. I’m referring to the CVE Foundation, a group that was already holding informal discussions before April 15, but which had not been formalized before then. When I saw the first announcement of it on April 16 – the announcement only had one name on it – I thought it might be a late April Fool’s Day prank. But the following week, it became clear that they have a great lineup of heavy hitters currently involved in the program, including CVE.org board members, heads of CVE working groups, and representatives of private industry.

Last week, this became even clearer, when I heard Pete Allor of Red Hat, CVE Board member and Co-Chair of the CVE Vulnerability Conference and Events Working Group, describe the success the CVE Foundation has had so far. They’ve lined up large companies and governments who have said they will be ready with funding when it comes time to make the break with Uncle Sam (although I certainly hope my dear Uncle will get over his hurt feelings and realize that a child leaving home because they have outgrown the need for incubation is an occasion for rejoicing, not barely concealed anger. After all, DNS was nurtured by the National Telecommunications and Information Administration - NTIA - for decades. When it was time to let DNS leave home, it found a truly international home in ICANN. At last report, DNS is still alive and well 😊. Perhaps CVE will find a similar home).

Fortunately, you don’t have to take my word for what Pete said. Last Thursday, Patrick Garrity of VulnCheck posted a link to an excellent podcast in which Pete went into a lot of detail on why the CVE Foundation was…well, founded, and the success they have had so far in lining up support (although he didn’t name potential financial supporters, of course). Then on Friday, Pete elaborated on what he’d said, under withering questioning by me and others at our regularly scheduled OWASP SBOM Forum meeting.

So, you don’t need to worry about whether the CVE Program will survive more than 11 months longer; the answer is yes. The real question is what changes need to be made to the program, both in the intermediate term and the longer term. Those will be interesting discussions, and I’m already trying to spark them. Stay tuned to this channel!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And while you’re at it, please donate as well!


Friday, May 2, 2025

NERC CIP in the cloud: What are the real risks?

In a recent post, I described a document that was emailed to the “Plus list” for the NERC Standards Drafting Team (SDT) that is working on removing the barriers to full use of the cloud by NERC entities subject to CIP compliance. The well-written document, which has no official status, is a “discussion draft” of a new standard: CIP-016.

Like NERC Reliability Standards, the document includes a set of suggested requirements. Each suggested requirement loosely refers to one of the CIP standards. The author of this document assumes that CIP-016 will apply only to systems deployed in the cloud. On-premises systems will continue to be required to comply with standards CIP-002 through CIP-014, but those standards will now be understood not to apply to use of the cloud by those systems.

The suggested requirement that refers to CIP-013 reads, “The Responsible Entity shall perform risk assessments of cloud providers...This includes ensuring that all cloud providers comply with relevant security standards (e.g., SOC 2, FedRAMP).”

In other words, to comply with this suggested requirement, the NERC entity will need to:

1. Perform a risk assessment of each cloud (service) provider, which presumably includes their Platform CSP (e.g., AWS or Azure); and

2. “Ensure” that they comply with “security standards” like SOC 2 and FedRAMP. Neither of those is a security standard, so I’ll take the liberty of replacing those two names with “ISO 27001”, which definitely is a security standard.

In fact, these two “sub-requirements” are the same. This is because a risk assessment always measures the subject against a defined grouping of controls. In some cases, that grouping is called a standard; in others, it’s called a framework. Let’s say the NERC entity decides to assess the CSP based on ISO 27001. How are they going to do this?

One way for a NERC entity to assess a CSP based on ISO 27001 is to conduct a full audit; of course, the audit would need to (at least in principle) cover all the CSP’s data centers and systems. Is it likely that AWS or Azure would allow every NERC CIP customer to do this on their own, or that those customers, no matter how large, would have the resources to conduct this audit? Of course not.

The only realistic way for a NERC entity to perform a risk assessment of a CSP, based on ISO 27001 or any other true security standard, is to review the audit report and identify risks revealed in the report. For example, if the report noted a weakness in the CSP’s interactive remote access system, that would be one risk for the entity to make note of.

Since I believe CSPs will usually let customers see their cybersecurity audit reports, this would be a good way for NERC entities to assess their CSPs. However, given that there are only a small number of platform CSPs, why should each customer of “CSP A” have to request the same audit report, review it, and presumably identify a similar set of risks? Instead, why not have NERC itself – or perhaps a third party acting on NERC’s behalf – perform their own assessment of the CSP, then share the results with every NERC entity that utilizes the CSP’s services?

A word from our sponsor: To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please donate here. Any amount is welcome.

Of course, NERC wouldn’t be acting as a gatekeeper, determining whether the CSP is secure enough to merit designation as a “NERC authorized cloud provider” for entities subject to CIP compliance. Instead, it would be performing a service on behalf of many separate NERC entities. More importantly, since the CSP will know that they only need to be assessed once rather than once for every NERC CIP customer they have, they may be more open to having the assessors go beyond just an examination of the audit report.

That is, the CSP may be willing to have NERC ask them questions that are relevant to cloud providers, but are most likely not included in ISO 27001. For example, the Capital One breach in 2019 was due in part to the fact that many customers of one of the major platform CSPs had all made the same mistake when securing their environments in that CSP’s cloud. A former technical employee of the CSP broke into – according to boasts she posted online – the environments of over 30 customers that had made the same mistake.

Of course, the fact that so many customers had made the same mistake should be taken as evidence that the CSP needed to beef up their cloud security training for their customers. Thus, one question that the NERC assessors could ask is what security training is provided to all customers at no additional cost, rather than simply being available for a fee. This question is almost certainly not included in an ISO 27001 audit.

Thus, I’m proposing that, in the new “cloud CIP” standard(s) that will be developed, NERC should be tasked with assessing cloud service providers in two ways: by reviewing their ISO 27001 audit report and by asking them questions that are most likely not asked in a normal assessment based on ISO 27001 (the current SDT should start thinking about what these questions should be).

NERC will review the audit report and the CSP’s answers to the cloud-specific questions, to identify risks that apply to this CSP; they will then pass those results to NERC entities that utilize the CSP’s services. NERC will not make any judgment on whether NERC entities can utilize the CSP’s services, or on measures that a NERC entity should take to mitigate the identified risks.

Of course, my suggestions above suffer from one little problem: NERC’s current Rules of Procedure (RoP) would never allow NERC (or even a third party engaged by NERC) to assess a CSP and share the assessment results with NERC entities. As I stated in the post I referred to earlier, I believe that accommodating use of the cloud by all NERC entities that wish to do so will require changes to the RoP – even though doing so may require an additional 1-2 years, beyond what just redrafting the CIP standards would require. This is just one example of that.

If you have comments on this post, please email me at tom@tomalrich.com. And don’t forget to donate!

Wednesday, April 30, 2025

The version range snafu


It’s no exaggeration to say that the CVE Program’s recent near-death experience has set off a flurry of activity in planning for the future of vulnerability identification programs (like CVE) and vulnerability databases (like the NVD, as well as many others). In this recent post, I described three different approaches that different groups are taking toward this goal today. Of course, none of those approaches is better than the others; they’re all necessary.

The approach I prefer to take – partly because I don’t see anyone else taking it now – is to focus on improvements to the CVE Program that can be made by next March, when the MITRE contract to run the program will come up for renewal again. The OWASP Vulnerability Database Working Group, which I lead, is taking this approach. Instead of focusing on what’s best for the long term (which is what the broader OWASP group is doing), we’re focusing on specific improvements to the program that can be realized by next March, all of which are necessary and many of which have been discussed for a long time.

Perhaps the most important of those improvements is version ranges in software identifiers. Software vulnerabilities are seldom found in just a single version of a product. Instead, a vulnerability might first appear in, say, version 1.3 and remain present (unknown to the developer) in every version through 2.2.

Someone (often the developer itself, or a security researcher looking for a bug bounty from the developer) identifies the vulnerability in v2.2 and the developer patches it. Version 2.3 no longer includes the vulnerability, since it includes the patch. Subsequently, the developer realizes the vulnerability was first present in v1.3, so the vulnerable version range starts with v1.3 and ends with v2.2.

For this reason, many CVE Records identify a range of versions, rather than just a single version or multiple distinct versions, as vulnerable to the CVE; however, this identification is made in the text of the record, not in the machine-readable CPE identifier that may or may not be included in the record.

This omission isn’t the fault of CPE, since CPE provides the capability to identify a version range as vulnerable, not just a single version. However, this capability is not used very often, for the simple reason that there is little or no tooling available that allows end users to take advantage of a version range included in a CPE name. The same goes for the purl identifier, which is widely used in open source vulnerability databases. Even though purl in theory supports the VERS specification of version ranges, in practice it is seldom used, due to the lack of end user tooling.
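To make this concrete, here is a minimal sketch of what a machine-readable version range might look like under a VERS-style scheme (`vers:<versioning-scheme>/<constraints>`) and how a tool could break it into constraints. This is an illustrative simplification I wrote for this post, not a conforming parser of the actual VERS grammar:

```python
def parse_vers(range_str: str):
    """Split a vers-style string into (scheme, [(operator, version), ...]).

    Illustrative only: the real VERS spec has more grammar than this.
    """
    if not range_str.startswith("vers:"):
        raise ValueError("not a vers range")
    scheme, _, constraint_part = range_str[len("vers:"):].partition("/")
    constraints = []
    for c in constraint_part.split("|"):
        # Try the two-character operators before the one-character ones.
        for op in (">=", "<=", "!=", ">", "<", "="):
            if c.startswith(op):
                constraints.append((op, c[len(op):]))
                break
        else:
            constraints.append(("=", c))  # a bare version means exact match
    return scheme, constraints

print(parse_vers("vers:pypi/>=1.3|<=2.2"))
# ('pypi', [('>=', '1.3'), ('<=', '2.2')])
```

Parsing the range is the easy half of the problem; as discussed below, the hard half is deciding where an arbitrary version string falls relative to the two endpoints.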

Why is there very little (or even no) end user tooling that can take advantage of version ranges in software identifiers found in vulnerability records? I learned the answer to that question when I asked vulnerability management professionals what advantage having such a capability in their tools would provide to end users (i.e., what the use case is for machine-readable descriptions of version ranges).

When I have asked this question, few if any of these professionals have even been able to describe what that advantage would be. It seems clear to me that, if few people can even articulate why a particular capability is required, tool developers are unlikely to try to include that capability in their products.

However, I can at least articulate how an end user organization could utilize version ranges included in a vulnerability notification like a CVE record: They will use it when a) a vulnerability has been identified in a range of versions of Product ABC, and b) the organization utilizes one or more versions of ABC and wants to know whether the version(s) they use is vulnerable to the CVE described in the notification.

Of course, in many or even most cases, the answer to this question is easily obtained. For example, if the product ABC version range included in the record for CVE-2025-12345 is 2.2 to 3.4 and the organization uses version 2.5, there’s no question that it falls within the range. But how about when the version in question is

1. Version 2.5a?

2. Version 3.1.1?

3. Version 3.41?

More generally, the question is, “Of all the instances of Product ABC running in our organization, which ones are vulnerable to CVE-2025-12345?” Ideally, an automated tool would a) interpret a version range described in a CPE found in the CVE record, b) compare that interpretation with every instance of ABC found on the organization’s network, and c) quickly determine which instances are vulnerable and which are not.
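To see why those three strings are troublesome, consider a naive checker that treats each dot-separated field as an integer. This is a sketch of the obvious first attempt, not of any real tool:

```python
def naive_key(version: str):
    """Convert '3.1.1' into the sortable tuple (3, 1, 1)."""
    return tuple(int(field) for field in version.split("."))

def in_range(version: str, low: str, high: str) -> bool:
    """True if version falls within [low, high] under naive field ordering."""
    return naive_key(low) <= naive_key(version) <= naive_key(high)

print(in_range("2.5", "2.2", "3.4"))    # True: the easy case
print(in_range("3.1.1", "2.2", "3.4"))  # True: (3, 1, 1) sorts before (3, 4)
print(in_range("3.41", "2.2", "3.4"))   # False: field 41 > field 4 - but is
                                        # that what the supplier intended?
# in_range("2.5a", "2.2", "3.4")        # raises ValueError: the checker has
                                        # no rule at all for letter suffixes
```

The first answer is trustworthy; the other two depend entirely on whether the supplier's numbering scheme matches the checker's guess, which is exactly the ambiguity the next paragraphs address.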

How can the inherently ambiguous position of the three version strings listed above be resolved? The supplier of the product needs to follow a specific “ordering rule” when they assign version numbers to products; moreover, they need to inform their customers – as well as other organizations that need to know this – what that rule is. The portion of the rule that applies to each of the above strings might be

1. “A version string that includes a number, but not a letter, precedes a string that includes the same number but includes a letter as well.”

2. “The value of the first two digits in the version string determines whether that string precedes or follows any other string(s).”

3. “The precedence of the version string is always determined by the value of the string itself.”

Of course, for an end user tool to properly interpret each version range, it would need access to the supplier’s ordering rule. If these were sufficiently standardized, rather than always being custom created, it might be possible to create a tool that would always properly interpret a version range.[i] However, they are not standardized now.
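If ordering rules were standardized, a tool could treat each supplier's rule as a pluggable comparison key. The sketch below models one hypothetical rule – numeric fields with an optional trailing letter, where the letter sorts after the bare number, as in rule 1 above – as a Python key function. The rule's name and details are mine, invented for illustration; no such standard exists today:

```python
import re

def rule_numeric_then_letter(version: str):
    """Hypothetical ordering rule: '2.5' precedes '2.5a', '2.5a' precedes '2.6'."""
    key = []
    for field in version.split("."):
        m = re.fullmatch(r"(\d+)([a-z]?)", field)
        if not m:
            raise ValueError(f"field not covered by this rule: {field!r}")
        # Each field becomes (number, suffix); '' sorts before any letter.
        key.append((int(m.group(1)), m.group(2)))
    return tuple(key)

def in_range(version: str, low: str, high: str, rule) -> bool:
    """True if version falls within [low, high] under the supplier's rule."""
    return rule(low) <= rule(version) <= rule(high)

print(in_range("2.5a", "2.2", "3.4", rule_numeric_then_letter))  # True
```

The point of the sketch is that the tool's logic stays fixed while the rule varies per product; only a standardized way of declaring and distributing the rule makes that possible at scale.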

This means that the developer of an end user tool that can answer whether a particular version falls within a range will need to coordinate with the supplier of every product their tool might scan or otherwise address, in order to obtain that supplier’s current ordering rule; they will also need to receive every subsequent update to that rule. Doing this would be a nightmare and is therefore not likely to happen.

This would be much less of a nightmare if the ordering rules were standardized, along with the process by which they’re created and updated by suppliers, as well as utilized by end users and their service providers. However, that will require a lot of work and coordination. It’s not likely to happen very soon.

Ironically, all the progress that has been made in version range specification has been on the supplier side. A lot of work has gone into making sure that CPEs and purls (and other products like SBOM formats) are able to specify version ranges in a manner that is easily understandable by human users. However, that progress is mostly for naught, given that the required tooling on the end user side is probably years away, due to the current lack of standards for creating and utilizing ordering rules.

Unfortunately, I have to say it’s probably a waste of effort to spend much time today specifying version ranges on the supplier end. The best way to get version ranges moving is probably to get a group together to develop specs for ordering rules.

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] If you are a fan of “semantic versioning” – a versioning scheme often used in open source projects - you might think that “ordering rules” are a primitive workaround. After all, if all software suppliers followed semantic versioning, they would all in effect be following the same ordering rule. However, commercial suppliers often decide that semantic versioning is too restrictive, since it only allows a fixed number of versions between two endpoint versions.

Often, a commercial supplier will want to identify patched versions, upgrade versions, or even build numbers as separate versions. Semantic versioning provides three fields - X, Y and Z - in the version string “X.Y.Z”; moreover, the three fields have different meanings (major, minor and patch respectively), so one field can’t “overflow” into its neighbor. While open source projects may not find these three fields to be too limiting, commercial suppliers sometimes want more than three fields.
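For what it’s worth, the shared ordering rule that semantic versioning provides for plain X.Y.Z strings is easy to sketch: the three fields are compared numerically, left to right. (The full semver spec also defines precedence for pre-release tags like “-rc.1”; this simplification ignores them.)

```python
def semver_key(version: str):
    """Map a plain 'X.Y.Z' semver string to its (major, minor, patch) tuple."""
    major, minor, patch = (int(field) for field in version.split("."))
    return (major, minor, patch)

print(semver_key("1.10.0") > semver_key("1.9.3"))  # True: 10 > 9 numerically,
                                                   # even though "1.10.0" sorts
                                                   # before "1.9.3" as a string
```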

Tuesday, April 29, 2025

I need your help!

Since I started this blog in 2013, I’ve never asked for donations to support my work. However, because of a recent financial change, I’m now doing exactly that. I’m not looking for large donations - just a lot of smaller ones! But large or small, all donations are welcome. Please read this and consider donating. 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


Saturday, April 26, 2025

NERC CIP: We’re as far from the cloud as ever

This past Wednesday, the NERC “Risk Management for Third-Party Cloud Services” Standards Drafting Team (SDT) emailed a document to the “Plus List”, which seems to be a starting point for discussions of a new CIP-016 standard to address the problems with use of the cloud by NERC entities.

While I admit I have not been able to attend any of the SDT meetings for months, and while I also appreciate that the team[i] is anxious to create something – something! – that moves them forward, I regret to say I don’t think this document moves them forward at all. Here are the main reasons why I say that.

The primary problem is that the draft standard is written like a NIST framework. That is, it seems to assume that the NERC auditors will audit its requirements in the same way that a federal agency audits itself for compliance with NIST 800-53. For example, control AC-1 in 800-53 reads:

The organization:

a. Develops, documents, and disseminates to [Assignment: organization-defined personnel or roles]:

1. An access control policy that addresses purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance; and

2. Procedures to facilitate the implementation of the access control policy and associated access controls; and

b. Reviews and updates the current: 1. Access control policy [Assignment: organization-defined frequency]; and 2. Access control procedures [Assignment: organization-defined frequency].

This requirement assumes that:

i. It is generally clear to both the auditor and auditee what an access control policy should contain. More specifically, it is clear to both parties what “purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance” should be addressed by the policy.

ii. Both auditor and auditee generally understand what procedures are required to “facilitate the implementation of the access control policy and associated access controls.”

iii. Auditor and auditee generally agree on what constitutes an adequate “review and update” of access control policies and procedures. For example, the auditor isn’t expecting the auditee to rewrite the policy from the ground up, and the auditee isn’t expecting to get away with skimming through the policy and just giving it their stamp of approval.

As far as most federal government agencies are concerned, the above three assumptions may well be valid. However, I strongly doubt they’re valid for NERC entities, who usually take the “Trust in Allah, but tie your camel” approach to dealing with auditors. Specifically, I know that one reason some of the NERC CIP requirements are very prescriptive is that NERC entities are afraid of requirements that give the auditors leeway in determining what a requirement means. Moreover, the auditors often share this fear, since they don’t want to be blamed for misinterpreting CIP requirements. Therefore, they usually want CIP requirements that constrain them enough that there can’t be much dispute over how a requirement should be interpreted.

However, while keeping in mind that this document is just a discussion draft and will never get beyond that stage, it’s important to note how many auditing controversies it would likely create if it became a standard. Here are three examples:

1. There are many statements that are clearly open to big differences in interpretation. For example, Section 2.2 Scope reads, “CIP-016 applies to any systems, applications, or data stored, processed, or transmitted in cloud environments or hybrid cloud infrastructures. Systems that remain fully on-premise are not subject to this standard.”

Don’t “systems that remain fully on-premise(s)” often use “applications, or data stored, processed, or transmitted in cloud environments or hybrid cloud infrastructures”? If that use isn’t subject to the standard, then what is? Yet, by saying that on-prem systems aren’t subject to complying with CIP-016, it sounds like they’re immune to threats that come through use of the cloud.

Is it really true that only systems that are themselves located in the cloud (which today includes 0 high or medium impact systems) are affected by cloud-based threats? If so, that seems like a great argument for permanently prohibiting BES Cyber Systems, EACMS and PACS from being located in the cloud. Of course, that’s exactly the situation we have today. Why bother with changing the standards at all, since today they effectively prohibit use of the cloud by entities with high and medium impact systems?

2. The draft standard relies heavily on 20-25 new terms, each of which would have to be debated and voted on - then approved by FERC - before the standard could be enforced. I remember the heated debates over the relatively small number of new terms introduced with CIP version 5, especially the words “programmable” in “Cyber Asset” and “routable” in “External Routable Connectivity”. The debates over these two words were probably more heated than the debates over all the v5 requirements put together. Moreover, the debates over those two words literally went on for years; they were never resolved with a new definition.

The lesson of that experience is that it doesn’t help to “clarify” a requirement by introducing new Glossary terms, unless those terms are already widely understood. This is especially the case when a new Glossary term itself introduces new terms. For example, the undefined new term “Cloud Perimeter Control Plane” in the draft CIP-016 includes another undefined new term, “virtual security perimeter”. Both terms will need to be debated and balloted multiple times, should they be included in an actual draft of CIP-016.[ii]

3. One interesting requirement is R12, which is described as “CIP-013 equivalent”. It reads:

The Responsible Entity shall perform risk assessments of cloud providers and ensure that supply chain risks, including third-party vendors and subcontractors, are mitigated. This includes ensuring that all cloud providers comply with relevant security standards (e.g., SOC 2, FedRAMP).

My first reaction is that this is going to require the CSP to have a huge amount of involvement with each Responsible Entity customer. This includes:

i. Sharing information on their vendors and subcontractors, so the RE can “ensure” (a dangerous word to include in a mandatory requirement!) that those risks have been “mitigated”. How will the RE do this? Surely not by auditing each of the CSP’s thousands of vendors and subcontractors!

ii. Providing the RE with enough information that they can “ensure” the CSP complies with relevant security standards. Of course, the CSP should already have evidence of “compliance” with SOC 2 and FedRAMP – although neither of those is a standard subject to compliance (a better example would be ISO 27001).

iii. However, the words “all cloud providers” will normally include more than the platform CSP (e.g., AWS or Azure). They also include any entity that provides services in the cloud – for example, SaaS providers, security service providers, etc. Is the Responsible Entity really going to have to ensure that each of these cloud providers “complies” with SOC 2 and FedRAMP, to say nothing of other “relevant security standards”?

Of course, this document is just meant to be the start of a discussion, so it would be unfair to treat it as if it were a draft of a proposed new standard. However, I think there is one overarching lesson to be taken away from this (which I have pointed out multiple times before): Any attempt to address the cloud in one or more NERC CIP standards is inevitably going to require changes to how the standards are audited. These changes will in turn require changes to the NERC Rules of Procedure and especially CMEP (the Compliance Monitoring and Enforcement Program).

Because of this, any draft of a new CIP standard(s) to address use of the cloud needs to include a discussion of what changes to the Rules of Procedure (RoP) and CMEP are required for the new requirements to be auditable. The primary RoP change that will be needed – and it has been needed for years – is a description of how risk-based requirements can be audited[iii]. There is no way that non-risk-based CIP requirements will ever work in the cloud.

Moreover, the process of making the RoP changes needs to get underway as soon as possible after the new standard(s) is drafted. RoP changes rarely happen, and these changes will likely take at least a couple of years by themselves. Since I’m already saying the CIP changes alone won’t come into effect before 2031, and since it’s possible the RoP changes won’t start until the CIP changes have been approved by FERC, it might be 2032 or even 2033 before the entire package of both CIP and RoP changes is in place. Wouldn’t that be depressing?

It certainly would be depressing, but I’ll point out that it’s not likely the NERC CIP community will need to wait until 2033, 2032, or even 2031 for new “cloud CIP” standards to be in place. It’s possible they’ll come sooner than that, mainly because NERC could be forced to take a shortcut. There’s at least one “In case of fire, break glass” provision in the RoP, which allows NERC – at the direction of the Board of Trustees – to accelerate the standards development process when the lack of a standard itself threatens BES reliability.

Needless to say, this provision has never been invoked (at least not regarding the CIP standards). However, the time when it’s needed may be fast approaching. See this post. 

Don’t forget to donate! To produce these blog posts, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome!

If you are involved with NERC CIP compliance and would like to discuss issues related to “cloud CIP”, please email me at tom@tomalrich.com.


[i] The email made it clear that this is primarily the product of one team member, so it has no official status.

[ii] Of course, there’s no assurance now that the new “cloud CIP” standards will include a CIP-016 that looks anything like this one.

[iii] NERC’s term for this is “objectives-based”. They are basically equivalent.

Friday, April 25, 2025

Maybe it’s not so bad after all


Earlier this week, I wrote a post pointing to the strong likelihood that the MITRE contract to run the CVE Program will not be renewed next March (even though the contract was renewed last week, after an initial announcement that it might not be), and I called for planning to start now to figure out what can replace the program. I did this on the assumption that no group was already doing that. However, it turns out I was wrong.

A friend emailed me yesterday to ask if I knew about the CVE Foundation. The initial news reports about the contract being terminated pointed to this group as one that said it would be able to take over if the termination actually happened. However, since those reports named only one individual (they said other people were involved, but weren’t ready to share their names), I didn’t know how much credence to put in the assertion.

It turns out that I should have kept following the story. Yesterday, the same friend pointed me to the list of names now found in the FAQ. Some of the most important members of the CVE Program are shown as participants (the three leading private industry representatives on the CVE Board are listed as officers of the corporation); this is clearly a serious organization.

In addition, my friend said the group has good reason to believe that, should the MITRE contract not be renewed next March, the necessary funding will be there for them to run the program (still including MITRE, of course) on their own.

Of course, this is good news. I’ll also add that there are still a lot of questions that should be answered to make the CVE Program better. However, in asking and answering those questions, we at least won’t have to worry about the CVE Program disappearing beneath our feet. 


If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


Thursday, April 24, 2025

Meanwhile, back at the NVD

While the big news in the vulnerability management world last week was the near death of the CVE Program, this temporarily overshadowed the ongoing saga of the National Vulnerability Database (NVD). Since February 12, 2024, the NVD has stopped reliably performing one of its most important functions: adding CPE names (machine readable software identifiers) to new CVE (vulnerability) records. For a discussion of why having a CPE name with every CVE Record is so important, see this post.

At the end of December, in the post I just linked, I estimated that the NVD’s backlog of CVE records without CPE names was around 22,000, or 55% of the approximately 40,000 new CVE Records created in 2024. In my most recent post on the NVD’s problems, written on March 19, I admitted I couldn’t estimate the backlog, although I noted that the “vulnerability historian” Brian Martin thought the NVD had stopped creating new CPE names altogether.

Brian has kept following the NVD (which he says has “returned”). Last week, he put up this post on LinkedIn. It illustrates how the NVD has been doing its best to disguise the huge backlog of “unenriched” CVE Records (i.e., those that have not had CPE names and CVSS scores added to them – both of which are NVD functions). Without going into details, Brian said the backlog of unenriched CVEs (since early 2024) was now 33,699. So, far from making progress getting rid of the backlog in 2025, the NVD has dug the hole deeper.

Of course, the backlog number would be more meaningful if it were expressed as a percentage of new CVE Records published since early 2024. Since about 40,000 new records were published in full-year 2024 and we recently finished the first quarter of 2025, I estimate there have been about 50,000 new CVEs published since early 2024. This means the 33,699-record backlog constitutes 67% of the new CVE Records published since the NVD started having its problems last February 12.

In other words, the backlog as a percentage of new CVE records has grown by about 12 percentage points, from 55% to 67%. This obviously discredits the NVD’s preferred excuse for their problems: that the volume of new CVE records has jumped and they’re struggling to keep up with it. That might explain the growth in the backlog itself, but it doesn’t explain a significant increase (in just three months!) in the percentage of CVE records that are unenriched (i.e., that are in the backlog).
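For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch. Note that the 50,000 denominator is my own estimate of records published since early 2024, and the end-of-2024 figures are approximations from my December post; none of these are official NVD numbers.

```python
# Back-of-the-envelope check of the backlog percentages discussed above.
# All denominators are estimates from this post, not official NVD figures.

backlog_end_2024 = 22_000   # est. unenriched CVE records, end of 2024
total_2024 = 40_000         # approx. new CVE records published in 2024

backlog_now = 33_699        # unenriched records, per Brian Martin's count
total_since_early_2024 = 50_000  # my estimate: 2024 plus Q1 2025

pct_end_2024 = backlog_end_2024 / total_2024 * 100
pct_now = backlog_now / total_since_early_2024 * 100

print(f"End of 2024: {pct_end_2024:.0f}% of new records unenriched")
print(f"Today:       {pct_now:.0f}% of new records unenriched")
print(f"Growth:      {pct_now - pct_end_2024:.0f} percentage points")
```

The point of the calculation is that the *share* of unenriched records grew, not just the raw count, which is why rising CVE volume alone can’t explain it.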

So what’s the NVD’s plan for finally eliminating this backlog? The last time they said anything about this was March 19, when they commented on their website:

We are currently processing incoming CVEs at roughly the rate we had sustained prior to the processing slowdown in spring and early summer of 2024. However, CVE submissions increased 32 percent in 2024, and that prior processing rate is no longer sufficient to keep up with incoming submissions. As a result, the backlog is still growing.

We anticipate that the rate of submissions will continue to increase in 2025. The fact that vulnerabilities are increasing means that the NVD is more important than ever in protecting our nation’s infrastructure. However, it also points to increasing challenges ahead.

To address these challenges, we are working to increase efficiency by improving our internal processes, and we are exploring the use of machine learning to automate certain processing tasks.

There’s one phrase in this statement that I strongly agree with: “the NVD is more important than ever in protecting our nation’s infrastructure.” That’s why this whole debacle is so appalling.


If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.