Monday, February 17, 2025

NERC CIP: What lessons can we learn from the failure (so far) of the new BCSI requirements?


I’ve mentioned many times that last year a new NERC Standards Drafting Team started the long process of developing new or revised CIP standards to make full use of the cloud completely “legal” for systems and information subject to NERC CIP compliance. This is the second SDT that has addressed the cloud problem, in whole or in part. In 2019, a similar team started meeting to draft changes to multiple CIP standards that would enable BES Cyber System Information (BCSI) to be stored in the cloud, as long as it was appropriately protected and “securely handled”.

I wrote about that drafting team in this post in 2020. The team ultimately produced two revised CIP standards, CIP-004-7 and CIP-011-3; these came into effect on January 1, 2024. When I wrote the post in 2020 and when the standards came into effect, I thought they were perfect for what they were intended to do: require logical (as opposed to physical) protection for BCSI stored in the cloud. This change was needed to make it possible for a NERC entity to store BCSI in the cloud without falling out of compliance with the CIP standards.

That drafting team was constituted to solve a problem caused by a requirement part (in the then-current CIP-004-6) that mandated physical protection for servers in which BCSI might be stored, if these servers were outside the NERC entity’s Physical Security Perimeter (PSP). Of course, any requirement to protect information, by applying special physical protection for individual devices that it’s stored on, won’t work in the cloud. In the cloud, information moves constantly from server to server and data center to data center; this is required by the cloud business model.

When that drafting team first met, there wasn’t much disagreement about what they needed to do: remove the requirement for special physical protection of BCSI stored outside of a PSP and replace it with a requirement that allowed BCSI to be logically protected instead. In other words, the revised CIP standards would allow NERC entities to protect BCSI at rest or in transit by encrypting it (other methods of protecting the data are permitted, but it’s likely that encryption will almost always be the option of choice). If someone can access the encrypted data but they don’t have the keys needed to decrypt it, they obviously don’t really have “access” to the data.

It seemed to me that CIP-004-7 and CIP-011-3 were just what the doctor ordered. Therefore, starting in January 2024 I expected to see lots of BCSI moving to the cloud. However, it turns out that the drafting team – and a lot of other people like me – didn’t recognize that merely storing BCSI in the cloud isn’t much of a use case. BCSI is never voluminous, so it can easily be stored within the on-premises ESP and PSP, where it’s very well protected and easily backed up. In itself, cloud storage of BCSI doesn’t solve a problem.

It turns out that the real problem was that the previous requirement for physically protecting BCSI prevented NERC entities from using SaaS that required access to BCSI - especially online configuration and network management applications. After all, SaaS applications never reside in a defined physical location in the cloud, any more than data does. Since BCSI had to be confined to particular physical locations with special protection, that meant cloud-based SaaS could never use BCSI without putting the NERC entity into a position of non-compliance, even if the BCSI was encrypted. This was unfortunate, since there wasn’t much argument that encryption provides a much higher level of data security than just preventing unauthorized physical access.

Thus, I expected there would be a big surge in use of SaaS applications that utilize BCSI when the two revised CIP standards came into effect on New Year’s Day 2024. Yet as far as I know - i.e., relying on statements made by NERC entities and SaaS providers – today (more than one year later) there is literally zero use of SaaS by NERC entities with high or medium impact CIP environments.

Why is this the case? The answer is simple: Ask yourself when you last saw guidance (or at least guidelines) from NERC or any of the NERC Regions on use of BCSI in the cloud…You’re right, no official guidance has yet been published since the two standards came into effect.[i] In fact, I’ve seen close to nothing written by non-official sources, other than my blog posts.

And NERC entities aren’t the only organizations that need guidance on complying with CIP-004-7 and CIP-011-3. The SaaS provider needs to furnish their NERC CIP customers with some simple compliance documentation; there’s been no official guidance on that, either.

However, the group that I feel sorriest for isn’t the NERC entities or the SaaS providers – it’s the drafting team members. How would you feel if you’d dedicated a good part of a couple years of your life to making some changes to the CIP standards, yet it turns out that those changes aren’t being used at all, more than one year after the changes came into effect? This shows that just drafting new or revised CIP standards and getting them approved by NERC and FERC isn’t enough. NERC entities need to clearly understand what they need to do to comply. Moreover, they also need to understand what compliance evidence to require from third parties – in this case, SaaS providers.

This is an especially good lesson for the members of the current CIP/cloud drafting team. They already have a long road ahead of them. If they reach the end of that road and find that the NERC community is rushing to make full use of the cloud, that will be a great feeling. On the other hand, if they come to the end of their road and realize that few NERC entities are even trying to use the cloud in their OT environments – because the new standards are too complicated or because nobody has made an effort to explain them to the community - how do you think they’ll feel? I know how I would feel…

If you are involved with NERC CIP compliance and would like to discuss issues related to “CIP in the cloud”, please email me at tom@tomalrich.com.


[i] NERC endorsed this existing document as “compliance guidance” in late December 2023. However, it wasn’t originally written to be compliance guidance, and its implications for compliance aren’t always clearly stated.

Thursday, February 13, 2025

Better living through purl!

 

The CVE Program is getting ready to consider adopting purl as an alternate software identifier to CPE in CVE Records. If this goes through, software users will be able to use purl to look up open source software products and components that are affected by CVEs. They will be able to do this in several major vulnerability databases, and perhaps later they will be able to do this in the NVD.

However, end users aren’t the only organizations that will benefit from purl being used in the “CVE ecosystem”; in fact, they’re not even the biggest beneficiaries. Here are what I believe are the most important groups who will benefit and why:

1. Software developers that utilize open source components in their products. Unlike CPE, purl currently focuses on just one type of software: open source software distributed by package managers. By some estimates, 90% of the code in most software products today, whether open source or proprietary, consists of open source components. Therefore, it’s important that developers be able to learn about vulnerabilities in those components.

To look up an open source component in a vulnerability database, the developer needs to know the identifier for the product. If the developer wants to use the National Vulnerability Database (NVD), they will first need to search for the CPE name using the CPE Search function in the NVD. However, finding the correct version of the component is challenging. For example, here is the search result for the popular Python product “django-anymail”. You will have to figure out which of the 34 CPE names is the one you need.

On the other hand, if the developer wants to learn the purl for django-anymail, they don’t need to look anything up in an external database. Instead, they just need to know three pieces of information, which they presumably already have:

1.      The purl type, which is based on the package index, PyPI;

2.      The package name, django-anymail; and

3.      The (optional) version string, e.g. 1.11.1.

Using these three pieces of information, the developer can easily create the purl: “pkg:pypi/django-anymail@1.11.1” (“pkg” is the prefix to every purl). Note that no database lookup was required!
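That assembly is simple enough to sketch in a few lines of Python. This is a simplified illustration that skips the purl specification’s normalization and percent-encoding rules; real tooling, such as the packageurl-python library, handles those details:

```python
def make_purl(purl_type, name, version=None):
    """Assemble a basic purl from its parts (no namespace or qualifiers)."""
    purl = f"pkg:{purl_type}/{name}"
    if version:
        purl += f"@{version}"
    return purl

print(make_purl("pypi", "django-anymail", "1.11.1"))
# pkg:pypi/django-anymail@1.11.1
```

The same three inputs always yield the same string, which is what lets a CNA and an end user arrive at an identical identifier independently, with no central registry involved.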

2. CVE Numbering Authorities (CNAs). As described in this post, a CNA is an organization that reports software vulnerabilities to CVE.org in the form of CVE Records. Today, the only software identifier found in CVE Records is CPE. Unfortunately, more than half the CVE Records created last year – about 22,000 out of 39,000 – don’t contain a CPE and thus can’t be found with simple searches.

However, if purl were also supported in CVE Records[i], the CNA could create the purl for an open source product, exactly as the supplier in the earlier example would create it. Meanwhile, a user searching a vulnerability database for the product could create the same purl. Barring a mistake, the user should always be able to locate the same product and thus learn of any vulnerabilities reported for it.

CNAs that report vulnerabilities for open source software will benefit from two other features that are unique to purl:

a.      Every module in an open source library can have a purl, not just the library itself (as is the case with CPE). If a vulnerability is found in only one module of a library, it would be much better for the CNA to report the vulnerability as applicable just to that module. This is because the developer will often include in their product just the module(s) they require, rather than the whole library. If the CNA reports just the module as vulnerable, a developer that didn’t include that module in their product can (perhaps proudly) announce to their customers that the vulnerability doesn’t affect their product and no customer needs to patch it.[ii]

b.      Similarly, if a product is found in multiple package managers but the CVE Record includes the purl for only one of them, this means there’s no need to patch the product in the other package managers. However, because CPE doesn’t have a field for a package manager, most users won’t learn of this. Again, this means there will often be wasted patching effort.

Another advantage that CNAs cite for purl is the fact that a purl doesn’t have to be created by any central authority. Every product available in a package manager (or another repository that has a purl “type”) already has a purl, even if nobody has written it down yet. By contrast, CPEs must be created by a NIST contractor that works for the NVD. They often take days to create (if they are created at all). This delays the CNA in submitting a new CVE Record.

3. End users. Of course, this means the ultimate consumers of vulnerability information, even if they receive it through some sort of service provider. Their primary concern is completeness of the data. That is, they need to know about all the vulnerabilities that affect the software products they use. Of course, there’s hardly any end user today that doesn’t have about a year’s backlog of patches to apply, so it’s not like they are breathlessly anticipating more vulnerabilities.

Any organization with such a backlog owes it to themselves to learn about every newly released vulnerability. They (or perhaps a service provider acting on their behalf) need to feed all those vulnerabilities into some algorithm that prioritizes patches to apply, based on a combination of a) attributes of those vulnerabilities like CVSS or EPSS score and b) attributes of the assets on which the affected software product resides, such as proximity to the internet or criticality for the business.
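A rough sketch of what such a prioritization algorithm might look like follows; the field names, weights, and numbers here are all invented for illustration, and a real program would tune them carefully:

```python
# Hypothetical scoring: combine vulnerability attributes (EPSS exploit
# probability, CVSS severity) with asset attributes (internet exposure,
# business criticality) to rank the patch backlog.
def priority_score(vuln, asset):
    exposure = 1.5 if asset["internet_facing"] else 1.0
    return vuln["epss"] * vuln["cvss"] * exposure * asset["criticality"]

backlog = [
    ({"cve": "CVE-A", "cvss": 9.8, "epss": 0.70},
     {"internet_facing": True, "criticality": 3}),
    ({"cve": "CVE-B", "cvss": 7.5, "epss": 0.02},
     {"internet_facing": False, "criticality": 2}),
]
backlog.sort(key=lambda pair: priority_score(*pair), reverse=True)
print([v["cve"] for v, _ in backlog])  # ['CVE-A', 'CVE-B']
```

The point is only that both halves of the equation matter: a severe vulnerability on an isolated, low-value asset can rank below a moderate one on an internet-facing, business-critical system.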

CVE is of course the most widely followed vulnerability identifier worldwide. However, just learning about a CVE doesn’t do an organization much good unless they also learn which product or products are affected by the vulnerability – and whether the organization utilizes one of those products. Because of this, CVE Records always identify the affected product(s). When the CNA creates a new CVE Record, they do this in textual form.

After the CNA creates a new CVE Record, they submit it to the CVE.org database. From there, the NVD downloads all new CVE Records. Up until last year, NVD staff members (or contractors working for the NVD, which is part of NIST) added one or more CPE names to almost every CVE Record. They do (or did) this because, with over 250,000 CVE Records today, it’s impossible to learn the products affected by each CVE simply by browsing through the text included in the record. There needs to be a machine-readable identifier like CPE or purl to search on.

But here’s the problem (already referred to above): Almost exactly a year ago, the NVD drastically reduced the number of CVE Records into which they inserted CPE names, with the result that as of the end of the year, fewer than half of new CVE Records contained a CPE name. The problem seems to have continued into this year. A CVE Record that doesn’t include a machine-readable software identifier will be invisible to an automated search using a CPE name. In other words, if someone searches using the CPE name for a product they utilize, the search will miss any CVE Record that describes the same product in a text field, if the record doesn’t include the product’s CPE. Moreover, the person searching won’t have a way to learn about the missing CPE names.

Because of this problem, end users can’t count on being able to learn about all CVEs that affect one of their products. If purl were implemented as an alternative identifier in CVE Records, and if the CNA had included a purl in a CVE Record for the product, then a search using that purl would point out that the product was affected by the CVE. Implementing purl is needed for ensuring completeness of the vulnerability data, at least for open source software (and open source components in proprietary software).

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also email me if you would like to participate in the OWASP SBOM Forum or donate to it (through a directed donation to OWASP, a 501(c)(3) nonprofit organization).

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] Currently, the CVE Record Format includes a field for purl. However, it’s not being used at all today, mainly because there’s been no training or encouragement for the CNAs. That will hopefully change soon. 

[ii] In fact, this was the case with the log4j vulnerability CVE-2021-44228. It was reported as affecting the log4j library, but in fact it just affected the log4j-core module. Since a CPE for a library refers to the library itself, not one of the modules, this meant that in 2021-2022, a lot of time was wasted worldwide in patching every module in log4j.

Monday, February 10, 2025

How can NERC audit risk-based requirements?

The Risk Management for Third-Party Cloud Services Standards Drafting Team is off to a running start this year.  I say “start”, because during the 5-6 months that the SDT met last year, they didn’t even start to draft any standards. Instead, they were doing what NERC drafting teams sometimes do: re-think the Standards Authorization Request (SAR) that forms their “charter”. They spent the entire fall discussing what they are going to discuss when they start drafting the new (or revised) cloud standards.

There is nothing wrong with doing this, especially when the SDT has an especially weighty burden – and it’s hard to think of a NERC CIP SDT that’s had a weightier burden than this one has, except perhaps the team (called “CSO706” for Cyber Security Order 706) that drafted CIP versions 2, 3, 4 and 5. In fact, one member of that team, Jay Cribb, is a member of the cloud team. The CSO706 team first met in 2008. Their last “product”, CIP version 5 (essentially, the version that’s still in effect today), came into effect in 2016.

In my opinion, one essential attribute for any requirements they create is that they be risk-based. That’s my term, but NERC refers to them as “performance-based”. While some CIP requirements today are truly risk-based (even though they may not mention the word “risk”), others are not.

In fact, a small number of CIP requirements like CIP-007 R2 patch management and CIP-010 R1 configuration management are highly prescriptive, and require compliance on the physical or virtual device level. Cloud service providers don’t track systems based on the device on which they reside, since doing so would require breaking the cloud model. This means they will never be able to provide the evidence required for a NERC entity customer to prove compliance with these prescriptive requirements.

This is why I think all CIP requirements going forward, but especially requirements having to do with use of the cloud, need to be risk-based, and can’t refer (even implicitly) to devices at all. In fact, since CIP v5 came into effect in 2016, I believe that all subsequent CIP requirements and some entire standards, including CIP-012, CIP-013, CIP-014, CIP-003-2, CIP-010-4 and others, have been risk-based (some more than others, truth be told).

The problem with risk-based NERC CIP requirements today is there has been very little guidance to NERC entities or Regional auditors on how to comply with or audit risk-based CIP requirements. This was most vividly demonstrated in the Notice of Proposed Rulemaking (NOPR) that FERC issued in September regarding CIP-013, which is an entirely risk-based standard. In my post on the NOPR, I quoted the following section found near the end:

…we are concerned that the existing SCRM Reliability Standards lack a detailed and consistent approach for entities to develop adequate SCRM (supply chain risk management) plans related to the (1) identification of, (2) assessment of, and (3) response to supply chain risk.  Specifically, we are concerned that the SCRM Reliability Standards lack clear requirements for when responsible entities should perform risk assessments to identify risks and how those risk assessments should be conducted to properly assess risk.  Further, we are concerned that the Reliability Standards lack any requirement for an entity to respond to supply chain risks once identified and assessed, regardless of severity. 

In other words, FERC issued the NOPR because they do not think NERC did a good job of drafting either CIP-013-1 or CIP-013-2. They are considering ordering NERC to revise CIP-013 so it truly requires NERC entities to develop and implement a supply chain cyber risk management plan.

I agree with FERC’s opinion, but I want to point out that just asking NERC to re-draft CIP-013 will not necessarily fix the problem. This is because today NERC entities don’t know how to comply with a risk-based standard within the NERC auditing framework. It is also because most CIP auditors have limited experience in auditing risk-based requirements.

Rather than repeat this sorry story with the new cloud standards, it’s important that NERC and the Regions figure out how risk-based requirements can be audited.

What I call risk-based requirements are what NERC calls “objective-based” requirements. I used to think the two terms were synonymous, but I now realize they’re complementary. A requirement to achieve an objective inherently requires the entity to identify risks it faces; the entity must formulate a plan to assess those risks and mitigate them. Of course, “mitigate” doesn’t mean “eliminate”; it just means “make better”. Since no entity has unlimited resources available, a plan to mitigate risks will always leave some risk on the table; of course, that is called residual risk.

This will be easier to understand if we make up an example. Suppose a contractor has agreed to build a new building. The customer requires them to develop a plan to identify and mitigate the risks that could prevent them from finishing the building on time: inclement weather, materials shortages, etc.

The contractor lists all the risks, assesses the likelihood that each risk will be realized, and determines the impact (in this case, days of delay) if the risk is realized. For each risk, the contractor multiplies likelihood times impact and determines the expected delay if the risk is realized.
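Sketched as a calculation, the arithmetic looks like this; the risks and numbers are invented, and the point is only the likelihood-times-impact computation:

```python
# Each risk: the probability it is realized, and the delay (in days) if it is.
# Expected delay per risk = likelihood * impact; summing gives the plan's
# total expected delay.
risks = {
    "inclement weather":  {"likelihood": 0.30, "impact_days": 20},
    "materials shortage": {"likelihood": 0.10, "impact_days": 45},
}

for name, r in risks.items():
    print(f"{name}: expected delay {r['likelihood'] * r['impact_days']:.1f} days")

total_expected_delay = sum(r["likelihood"] * r["impact_days"] for r in risks.values())
print(f"total expected delay: {total_expected_delay:.1f} days")  # 10.5 days
```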

In this example, the objective is finishing the building on time, while the risks are the different possible causes of delay. So, “risk-based” and “objective-based” always go hand in hand. If you’re required to achieve any objective, you always must mitigate risks, and if you need to mitigate risks, the only way you can identify them is to know the objective you’re trying to achieve. If there’s no objective, there’s no risk and vice versa.

In the case of CIP-013, the objective is to secure BES Cyber Systems (and also EACMS and PACS) by making sure the suppliers of those systems follow secure policies and practices. Of course, the risks have to do with suppliers not following secure policies and practices – for example, in software development or adequately vetting their employees.

However, what does CIP-013 require today? Only that NERC entities develop a plan to “identify and assess” supply chain cybersecurity risks to BCS, EACMS and PACS. There is no indication of what those risks are. R1.2 lists six specific controls that NERC entities must practice, but those were never intended to be the entirety of supply chain security risks. Rather, they were six items that FERC included at various random places in their 2016 order to develop a supply chain standard; the drafting team just decided to gather them in one place. However, far too many entities limited their CIP-013 programs to just those six controls and ignored the requirement to “identify and assess” risks altogether. This was one of the main reasons why FERC issued their NOPR last year.

Here's how I would rewrite CIP-013 (and I’ve been saying this for years): I wouldn’t require the entity to take specific actions (although it wouldn’t be the end of the world if R1.2.1 – R1.2.6 were allowed to remain in the standard). However, I would require that the plan address specific areas of risk. These can include secure software development practices, vetting employees for security risks, policies to ensure secure remote access to devices inside an ESP (i.e., not just what CIP-005 R2 requires), etc.

For each of those areas, the entity would need to identify and assess supply chain risks. If they say one of those areas doesn’t apply to them, they would need to explain why. For example, an entity’s reason for not looking at risks from remote access might be that the entity only allows its own employees to access devices in their ESPs remotely.

For all the other areas of risk, the entity will need to identify a set of risks that they will address in their plan. For example, in the software security area of risk, individual risks include an insecure development environment, the supplier not reporting vulnerabilities when the software is being used by customers, etc.

How will this process be audited? It will come down to the judgment of the auditors that the entity did a good job of identifying and assessing risks in each area of risk. However, a lot of NERC entities are deathly afraid of having to rely on the judgment of their auditors. This isn’t because the auditors don’t exercise good judgment in general (e.g., they have lots of car accidents), but because NERC won’t ever take a stand on what a requirement means and provide true guidance.

If NERC did that, auditor judgment would become a non-issue, since both the entity and the auditor would rely on NERC’s guidance. A guidance document would list the major areas of supply chain cybersecurity risk, as well as the major risks in each of those areas. For each major risk, the entity would need to a) present a credible plan for mitigating that risk, or b) explain why the risk doesn’t apply in their case.

However, NERC doesn’t issue its own guidance on compliance, because to do so would amount to “auditing itself”. That is, if NERC tells the entity how to comply with a requirement, they would…what? Take all the fun out of compliance by making it a dull paint-by-numbers exercise? If the entity thereby achieves exactly what NERC wants them to achieve by following their guidance, why not encourage that behavior?

I’ve also heard other excuses for NERC’s policy, including that it’s included in GAGAS - although nobody has shown me where it says that. What this discussion usually comes down to is someone saying that the NERC Rules of Procedure (RoP) include an admonition against NERC developing its own guidance. Nobody has shown me that either, although I’ll admit I haven’t asked a lot of people about this.

But let’s stipulate that the RoP does prevent NERC from providing compliance guidance to NERC entities. Why would that be so terrible? After all, since NERC’s guidance will presumably reflect their view of what’s best for the Bulk Electric System, wouldn’t it be better for the BES if all NERC entities followed that guidance than if they didn’t? What is gained by withholding that information?

I think the problem here is that NERC has based their auditing on financial auditing, where it’s very important that auditors not offer guidance that dishonest financial institutions could distort to justify improper practices. However, cybersecurity is inherently a risk management exercise, in which one practice might mitigate one risk but not another; therefore, an auditor needs to exercise judgment regarding whether a particular control is effective in a particular situation. Finance isn’t that sort of exercise.

The moral of this story is that auditing risk-based requirements won’t work without the auditors being able to exercise judgment. Of course, the auditing rules (presumably in the RoP) will need to require that auditors distinguish between an entity that made an honest mistake in managing a risk and an entity that decided to ignore a risk entirely because they didn’t feel like bothering with it.

And this, boys and girls, is why I think the “cloud SDT” needs to be prepared for a very long ride. I think they have to deal with this problem of auditing risk-based requirements, which may require changes to the Rules of Procedure. If they don’t do that, they’ll most likely end up repeating the CIP-013 experience: creating a standard that, even if it’s initially approved by FERC, turns out not to be very effective and ends up requiring a complete rewrite.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


Monday, February 3, 2025

AI and Critical Infrastructure

The time isn’t far off when critical infrastructure (CI) industries, including the electric power industry, will face overwhelming pressure to start using AI to make operational decisions, just as AI is probably already being used to make decisions on the IT side of the house. Even the North American Electric Reliability Corporation (NERC), which drafts and audits compliance with the NERC CIP (Critical Infrastructure Protection) standards, acknowledged that fact in a very well-written document they released last fall.

However, while it’s certain there will be lots of pressure for this in all CI industries, it’s also certain this won’t happen without some sort of regulations being in place, either mandatory (as in the case of NERC CIP) or voluntary, as is likely in CI industries without mandatory cyber regulations in place, like manufacturing. My guess is those industries will develop their own regulations through industry bodies like the ISACs, since the manufacturers themselves are probably as afraid of the harm that aberrant LLMs could cause as everyone else is.

I used to think that AI security regulations for CI would need to be very much in the weeds, with restrictions on how the models can be trained, etc. However, I now realize that trying to do that will be a fool’s errand, since in fact there only need to be four rules:

1.      An AI model can never be allowed to make an operational decision on its own. It can only advise a human, not make the decision for them.

2.      The human can’t face a time limit under which, if they don’t make a decision within X minutes, the model decides for them.

3.      If the human doesn’t make the decision at all, the model can’t raise any objections. We don’t need humans succumbing to “peer pressure” from LLMs!

4.      The human can’t be constrained by policies to accept the model’s recommendation. The decision must be theirs alone, including the decision not to do anything for the time being.

Of course, you might be wondering about time-critical decisions, like the millisecond-level “decisions” that are sometimes required in power substations. Those decisions need to be made like they are today: by devices like electronic relays or programmable logic controllers that operate the old-fashioned way: deterministically.

Perhaps one day AI will be so reliable that it can be trusted to make even those decisions on its own. But that day is probably far in the future and may never come at all. Once AI can be as intelligent as the nematode worm Caenorhabditis elegans – roughly 60 percent of whose genes have recognizable human counterparts – I might be persuaded to change my mind. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also email me if you would like to participate in the OWASP SBOM Forum or donate to it (through a directed donation to OWASP, a 501(c)(3) nonprofit organization).

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Monday, January 27, 2025

How has the NVD’s 2025 start been? So far, abysmal.


As any reader of this blog knows, the National Vulnerability Database (NVD) fell on its face last year. Starting on February 12, the NVD drastically reduced the number of machine-readable “CPE names” it added to CVE Records (which originate in CVE.org, part of DHS). It ended the year having added a CPE name to fewer than 45% of the approximately 39,000 new CVE Records added in 2024. In previous years, the NVD added a CPE to virtually every new CVE Record, usually within a day or two of receiving it.

Why is this important? Because just learning that a newly discovered vulnerability has been found in one or more of the millions of software products used today doesn’t do you a lot of good if you don’t know which product it affects. When the CVE Record is created by a CNA, it includes a textual description of the product.

While that has some value, it won’t be picked up by an automated database search, which won’t display any CVE Record that doesn’t include a CPE name for the product being searched for. In other words, to learn for certain which (if any) vulnerabilities identified in 2024 or so far in 2025 are found in a software product you use, you would need to “manually” read the text in all 42,000-odd CVE Records that were added in 2024 or 2025. This is why vulnerability management requires automation. Since last February, truly automated vulnerability management has not been possible, at least not in the NVD.

But 2024 is (thankfully) behind us; how is the NVD doing in 2025? During the OWASP SBOM Forum’s meeting last Friday, Bruce Lowenthal, Senior Director of Product Security of Oracle, updated the group on this question by putting this information in the chat:

1.      “Only 27% of CVEs published in December 2024 have CPE assignments (in the NVD).”

2.      “Only 8% of CVEs published in January have CPE assignments in NVD.”

The first item isn’t terribly surprising, since we already knew the NVD almost completely stopped assigning CPE names for a couple of weeks in December. However, the second item was quite disappointing. There was a lot of hope (although not from me) that the NVD would soon not only resume assigning a CPE to every new CVE Record, but also start working down the backlog. Instead, after adding over 2,000 CVE Records without a CPE to the backlog in December, the NVD has surpassed that record by adding over 2,700 to the backlog this month. I suppose that’s progress of a sort, but it’s certainly not in the right direction.

In summary, in order to dig itself out of the hole it’s in, the NVD needs to

1.     Stop adding to the CPE backlog by assigning a CPE name to each of the more than 3,000 new CVE Records likely to be created every month this year, and

2.      Start reducing the backlog by assigning additional CPEs at a rate that will bring it to zero by the end of 2025. Since I believe the backlog is around 23,000 as of today and there are 11 months left in the year, this means the NVD will need to assign an additional 2,000 CPEs every month, for a total of about 5,000 every month.[i] If the NVD does this, the backlog will be zero – and no longer growing – at the end of 2025. Of course, at the moment the idea that this could actually happen belongs to the realm of fantasy.
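The catch-up arithmetic above can be sketched in a few lines. This is only a back-of-the-envelope calculation using the estimates in this post (a roughly 23,000-record backlog, 11 months remaining, and about 3,000 new CVE Records per month); the real figures will differ.

```python
# Back-of-the-envelope sketch of the NVD's CPE catch-up math.
# Assumed inputs (estimates from this post, not official figures):
backlog = 23_000        # CVE Records currently lacking a CPE name
months_left = 11        # months remaining in 2025
new_per_month = 3_000   # estimated new CVE Records created each month

# Extra CPE assignments needed per month to clear the backlog by year end
catch_up_per_month = backlog / months_left            # roughly 2,000

# Total monthly assignments: keep up with new records AND work down the backlog
total_per_month = new_per_month + catch_up_per_month  # roughly 5,000

print(f"Catch-up rate needed: {catch_up_per_month:,.0f} CPEs/month")
print(f"Total rate needed:    {total_per_month:,.0f} CPEs/month")
```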

While the CPE identifier has always had a lot of problems, the problem I just described – that more than half of CVE Records created since last February contain no CPE name at all – is far worse than the others. A CVE Record that contains no CPE name is completely invisible to an automated search. This raises the question of whether it’s worthwhile to perform any vulnerability search of the NVD today, since doing so may leave the user with a false sense of security.

Is there a solution to this problem? Yes, there is. What if I told you there is a software identifier that came from literally nowhere less than ten years ago to being today by far the dominant identifier in the open source world? A software identifier that’s used over 20 million times every day, by just one tool, to look up an open source vulnerability in the OSS Index open source vulnerability database?

Moreover, this identifier – called purl – doesn’t need to be “created” by any third party. As long as a purl is included in a CVE Record, a user should always be able to construct a purl that matches the one in the record, using a few pieces of information they already have at hand or can readily look up (for open source software products, these are the package manager name, the name of the product in the package manager, and the version number in the package manager).
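To illustrate how deterministic this is: the purl specification defines the format `pkg:type/namespace/name@version`, so anyone holding those few facts can assemble the same identifier. The helper below is a simplified sketch of that format for illustration – it is not the official packageurl library, and it skips qualifiers and the per-type normalization rules in the full specification.

```python
from urllib.parse import quote

def make_purl(pkg_type, name, version, namespace=None):
    """Assemble a purl of the form pkg:type/namespace/name@version.

    Simplified sketch: ignores qualifiers, subpaths, and per-type
    case-normalization rules in the full purl specification.
    """
    parts = [pkg_type]
    if namespace:
        parts.append(quote(namespace, safe=""))
    parts.append(quote(name, safe=""))
    return "pkg:" + "/".join(parts) + "@" + quote(version, safe="")

# Two users starting from the same three facts always produce the same purl:
print(make_purl("npm", "lodash", "4.17.21"))
# pkg:npm/lodash@4.17.21
print(make_purl("maven", "log4j-core", "2.17.1",
                namespace="org.apache.logging.log4j"))
# pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1
```

Because the identifier is derived from the package manager’s own coordinates rather than assigned by a third party, there is no central naming authority to fall behind – which is exactly the failure mode the NVD is suffering with CPE.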

In other words, purl can be at least a co-equal identifier to CPE in the NVD and other vulnerability databases that are based on CVE. However, there are two important problems that need to be addressed before this can happen:

1.      CVE Records need to include purls, and the CVE Numbering Authorities (CNAs) need to see the value of including them in the records.

2.      Purl today doesn’t identify commercial software; it just identifies open source software. Fortunately, there is a workable proposal for fixing that problem.

I’m pleased to report that work on fixing the first problem will start soon, while work on the second problem will almost certainly start within six months. If you would like to get involved in one or both of these efforts, please let me know. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also email me if you would like to participate in the OWASP SBOM Forum or donate to it (through a directed donation to OWASP, a 501(c)(3) nonprofit organization).

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] Even this estimate is too low, since it doesn’t allow for any growth in the number of new CVE records in 2025 vs. 2024. That number increased by 11,000 in 2024 vs. 2023. It might not increase by that amount this year, but it would be naïve to believe it won’t increase at all this year. 

Wednesday, January 15, 2025

Should you be worried or happy when you find no vulnerabilities in your software?


One of the unfortunate byproducts of vulnerability management compliance requirements is that they tend to reward negative findings. That is, if you search a vulnerability database for products that you use and find that some product searches don’t yield any vulnerabilities at all, you might take this as evidence that the products are very secure and you must be compliant.

However, often that’s far from the truth. The negative finding can mean:

1.      The person who created the identifier for the product in the vulnerability database made a mistake or didn’t follow the specification. Since you searched using the correctly formed identifier, it didn’t match the flawed one in the database, and the search failed.

2.      The product you’re searching for has been acquired by a different supplier, who enters new vulnerabilities under their own name, not that of the previous supplier (from which you acquired the product). That search fails, too.

3.      The supplier name was reported as, say, “Microsoft, Inc.”, but it is recorded in the identifier as “Microsoft Inc.” Again, the search fails.

4.      A product has been entered in the vulnerability database under multiple names, e.g. “OpenSSL” vs. “OpenSSL_Framework”. However, a vulnerability is seldom reported under multiple names, since it’s unlikely that the CVE Numbering Authority (CNA) that reported the vulnerability knew there were multiple names available. Thus, due to pure chance, one name might have no vulnerabilities reported against it, while the other may have a lot, even though they’re in fact the same product. If you happen to enter the name with no vulnerabilities reported against it, your search will again fail.

5.      Even though the identifier in the database is correct, you fat-fingered the identifier in the search bar, so the search failed.

6.      The supplier of the software product has never reported a vulnerability for it. Instead of the negative finding being good news, it means the supplier cares very little for the security of their products or their customers. The great majority of software vulnerabilities are reported by the developer of the software or the manufacturer of the device. However, it’s certain that reported CVEs (about 260,000 have been reported since 1999) are a tiny fraction of all software vulnerabilities. Lack of vulnerability reporting is the biggest obstacle to securing the software supply chain.

In the National Vulnerability Database (NVD), the above problems are compounded by the error message the user receives almost every time a search is unsuccessful: “There are 0 matching records”. Since this message also appears when a product truly doesn’t have any vulnerabilities reported against it, in all the above scenarios you might reasonably assume the product is vulnerability-free. In fact, human nature dictates that most of us will make that assumption. Of course, in most cases it’s probably not true.
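The first five failure modes share one root cause: an exact-string lookup returns the same empty result whether the identifier is wrong in the database, wrong in the query, or the product genuinely has no reported vulnerabilities. The toy sketch below (all product names and CVE numbers are hypothetical, for illustration only) shows why those outcomes are indistinguishable to the user:

```python
# Toy vulnerability "database" keyed on exact identifier strings.
# All identifiers and CVE numbers below are made up for illustration.
vuln_db = {
    "microsoft_inc:widget:1.0": ["CVE-2024-0001"],  # vendor recorded without a comma
    "openssl_framework:3.2": ["CVE-2024-0002"],     # duplicate name for "openssl"
}

def search(identifier):
    # Exact-match lookup: any difference in the string means zero hits,
    # just like the NVD's "There are 0 matching records" message.
    return vuln_db.get(identifier, [])

print(search("microsoft,_inc:widget:1.0"))  # vendor-name mismatch (mode 3)
print(search("openssl:3.2"))                # duplicate-name problem (mode 4)
print(search("microsoft_inc:widget:1.O"))   # fat-fingered query (mode 5)
print(search("microsoft_inc:widget:1.0"))   # exact match: finds the CVE
```

The first three searches all return an empty list – the same result a genuinely vulnerability-free product would produce – which is exactly why a negative finding by itself proves nothing.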

What is the solution to these problems? I don’t know of any solution to all of them, but I can point out that the first four problems will not appear if the identifier used in the vulnerability database is purl (the product identifier used in almost all open source software databases) as opposed to CPE (the identifier used by the NVD and databases that are based on the NVD).

This is why the CVE Program (run by cve.org) is now looking at including purl as another identifier in what I call the “CVE ecosystem”. This includes all vulnerability databases that utilize CVE as the vulnerability identifier (there are many other vulnerability identifiers, mostly for open source software products. However, CVE is by far the major vulnerability identifier worldwide).

Of course, the most widely used vulnerability database in the CVE ecosystem is the NVD. When the CVE Program adopts purl as an alternative identifier, will purl suddenly be usable for searches in the NVD? No. To support purl, a vulnerability database that currently supports only CPE will need to make several changes. Given the problems the NVD has been experiencing over the past 11 months, it isn’t likely they will suddenly be able to devote resources to making those changes.

However, other databases will be able to display CVEs that apply to a software product when the user enters a purl[i]. This means that at least some of the CVE Records published by CVE.org will be accessible to users and developers of open source software products.

It will be at least a couple of years before purl is fully supported in the CVE ecosystem. That might seem like a long time, but six months ago I would have told you purl was unlikely to be supported by CVE in this decade. Things are starting to move in the right direction.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] There are two large vulnerability databases that display CVEs that apply to an open source software product/version when the user enters the purl for that product/version. However, these databases don’t include the CVE Records published by the CVE Program, since those records currently don’t include purls. When that changes, those two databases will be able to accept new CVE Records, thus making those records searchable using purl.

Sunday, January 12, 2025

Here’s how we can chop a couple of years off the wait for the “Cloud CIP” standards

 

I have posted recently on the need to rewrite two NERC CIP requirements: CIP-007 Requirement R2 (patch management) and CIP-010 Requirement R1 (configuration management). The primary reason that both requirements need to be rewritten is that they are by far the most prescriptive CIP requirements. In fact, since CIP version 5 (when both these requirements were substantially revised) came into effect in 2016, I have heard that complying with just these two requirements accounts for a substantial percentage of all NERC compliance costs, not just NERC CIP compliance costs.

However, the second reason why these two requirements need to be rewritten is that they are currently the two biggest barriers to use of the cloud by NERC entities with medium or high impact BES environments. The main reason for this is that the two requirements apply at the level of individual BES Cyber Assets, even though they’re written to apply to BES Cyber Systems (BCS). This means that a cloud service provider (CSP) would have to produce documentation for the NERC entity showing that the CSP had taken every required step in CIP-007 R2 and CIP-010 R1 for every device on which any part of the BCS resided during the audit period.

One of the main reasons why use of the cloud is so inexpensive is that systems (i.e., the software and data in systems) can be moved from server to server and datacenter to datacenter whenever it’s advantageous to do so. It would be hugely expensive if a CSP were required to provide that information, and it’s doubtful that any CSP would even entertain the idea of doing that. None of the other CIP requirements require providing documentation at anywhere near that level of detail.

Fortunately, both the prescriptiveness problem and the cloud documentation problem can be cured with the same medicine: rewriting CIP-007 R2 and CIP-010 R1 to make them “objectives-based” (that is NERC’s term; mine is “risk-based”, but they mean effectively the same thing). When will that happen?

Last summer, a new NERC Standards Drafting Team started working on what will undoubtedly be a huge multi-year project to revise (and/or add to) the existing NERC CIP standards to make them “cloud-friendly”. They haven’t worked out their agenda yet, but I recently estimated that the new and/or revised standards will be fully approved and enforced around 2031. This is based on the experience with CIP version 5, which took almost that long and which in some ways was easier to draft than “cloud CIP” will be.

However, one thing is certain about the SDT’s agenda: it will include rewriting CIP-007 R2 and CIP-010 R1. Given how controversial both requirements are, and the fact that CIP-007 R2 needs to be rewritten as a vulnerability management requirement rather than a patch management requirement, I think just rewriting and balloting those two requirements will take 1 ½ to 2 years. While this work will undoubtedly require some coordination with the “Risk Management for Third-Party Cloud Services” drafting team, that is something NERC drafting teams do all the time.

So here’s my idea: Why not create a new Standards Authorization Request (SAR) that just requires rewriting the two requirements? This would take CIP-007 R2 and CIP-010 R1 completely off the cloud SDT’s plate, meaning they might be able to finish their work in five years, not seven. And it would allow the two revised requirements to be drafted by a fresh team that’s excited about being able to fix the two biggest “problem children” among the NERC CIP requirements, rather than a team that’s midway through a 7-year slog and wondering if perhaps long-distance truck driving would have been a better career choice.

While I would technically be allowed to draft that SAR, I don’t have the time to do it – and more importantly, a SAR has a much better chance of approval if it’s prepared by one or two NERC entities (with perhaps a vendor also participating). However, if a NERC entity wants to take the lead on this, I’d be pleased to help draft it.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

My book "Introduction to SBOM and VEX" is available! For context, see this post.