Thursday, July 31, 2025

Don’t worry about the CVE program – it’s in good hands. But the NVD? Not so much


Note from Tom: I’ve started a new blog on Substack called “Tom Alrich’s blog, too”. From now on, all my new posts will appear there; they will only occasionally appear in this blog. A subscription to the Substack blog costs $30 per year (or $5 per month); anyone who can’t pay that should email me. There is also a Founders subscription plan at $100 for the first year. I hope you’ll consider signing up for that, if you have benefited from my posts in the past.

I will put up all new posts for free in this blog until August 11. However, if you wish to continue to see my posts after August 11 – which I hope you will! – please sign up for a paid subscription in Substack at the link above.

My previous post discussed a new white paper called “Ensuring the Longevity of the CVE Program” by the Center for Cybersecurity Policy and Law. To say that I wasn’t overwhelmed by the insights provided by the authors is an understatement. However, the paper’s biggest problem is that it omits the most serious threat to the future of the CVE Program. That threat lies with a different US government agency, one that has recently been having big problems of a quite different kind. First, I’ll provide some background on those problems and why they affect the CVE Program.

The CVE Program is run by the non-profit MITRE Corporation, which is contracted by the Department of Homeland Security. It is paid for - at least through next March - by the Cybersecurity and Infrastructure Security Agency (CISA), which is also part of DHS.

The other US government agency is NIST, the National Institute of Standards and Technology. NIST is part of the US Department of Commerce.

One of NIST’s many projects is the National Vulnerability Database (NVD), which started in 2005. A vulnerability database links software products with vulnerabilities that have been identified in those products. The NVD is currently by far the most widely used vulnerability database in the world; many private (VulnDB, VulDB, VulnCheck, Vulners, etc.) and public (Japan Vulnerability Notes, EU Vulnerability Database, etc.) vulnerability databases draw heavily from the NVD.

The NVD identifies vulnerabilities using CVE numbers (e.g., CVE-2025-12345); each vulnerability is described in a “CVE record”. Many people (including me, a few years ago) assume that, because the NVD uses CVE numbers to identify vulnerabilities, it must be the source of CVE records. In fact, CVE records originate with the CVE Program in DHS. They are created by CVE Numbering Authorities (CNAs), of which there are currently more than 450. The largest CNAs are software developers, including Microsoft, Oracle, Red Hat, HPE, and Schneider Electric.

When a CNA creates a new CVE record, they submit it to the CVE.org vulnerability database, which is run by the CVE Program (this is sometimes referred to as the “CVE list”, although it’s much more than a simple list). The NVD (and other vulnerability databases that are based on CVE) downloads new CVE records shortly after they appear in CVE.org.

When a CNA creates a new CVE record, they have the option of including various information in the record. Some fields are officially optional and others are mandatory, but, to be honest, there are only a few fields that are really mandatory, in the sense that the record will definitely be rejected if they’re not present (this includes the CVE number and the product name). The CVE Program maintains the CVE Record Format (formerly, the “CVE JSON Record Format”), which is now on version 5.1.1. The full spec for 5.1.1 is here, but this older version is more readable and reasonably up to date.

For our present purposes, the most important fields in the CVE record are:

1. The CVE number that the CNA has assigned to this vulnerability, as well as a description of the vulnerability.

2. The name(s) of the product(s) affected by the vulnerability. While the CNA must list at least one affected product, they can also list many of them, including separate versions of the same product. Of course, every product listed needs to be affected by the vulnerability described in the record.

3. The vendor(s) of the product(s) affected by the vulnerability.

4. The version or versions[i] affected by the vulnerability.

5. The CPE name for each affected product.

The last item needs explanation. CPE stands for Common Platform Enumeration, although the name doesn’t carry much meaning today. What’s important is that CPE is a complicated machine-readable naming scheme for software and hardware products; the CPE name includes fields 2-4 above. If a CVE record doesn’t include a CPE name, it isn’t easily searchable in the NVD, since there is no way to know for certain that the product described in the text of a CVE record is the same product that is the basis for a similar CPE name.

For example, suppose items 2-4 above appear as “Product A”, “XYZ”, and “Version 2.74” respectively in the text of a CVE record. Furthermore, suppose that a user of Product A v2.74 wants to learn about vulnerabilities identified in that product. They find the CPE name for a similar product that includes the same values of fields 2 and 4, but it includes “XYZ, Inc.” instead of “XYZ” for the vendor name.
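The “XYZ” vs. “XYZ, Inc.” mismatch can be made concrete with a short sketch. This is a deliberately simplified version of the CPE 2.3 formatted-string binding (the real rules for normalizing values are more involved), and the vendor, product, and version values are the hypothetical ones from the example above:

```python
# Hedged sketch: how a small vendor-name difference defeats exact CPE matching.
# The normalization below is a simplification of the real CPE 2.3 binding rules.

def make_cpe(vendor: str, product: str, version: str) -> str:
    """Build a CPE 2.3 formatted string (application part; remaining fields wildcarded)."""
    def norm(s: str) -> str:
        # CPE values are lowercased, with separators collapsed to underscores
        return s.lower().replace(", ", "_").replace(" ", "_").replace(",", "_").replace(".", "")
    return f"cpe:2.3:a:{norm(vendor)}:{norm(product)}:{version}:*:*:*:*:*:*:*"

cpe_in_nvd   = make_cpe("XYZ, Inc.", "Product A", "2.74")
cpe_searched = make_cpe("XYZ",       "Product A", "2.74")

print(cpe_in_nvd)    # cpe:2.3:a:xyz_inc:product_a:2.74:*:*:*:*:*:*:*
print(cpe_searched)  # cpe:2.3:a:xyz:product_a:2.74:*:*:*:*:*:*:*
print(cpe_in_nvd == cpe_searched)  # False: an exact-match lookup finds nothing
```

Both names are valid under the specification; they simply aren’t the same string, so an exact-match search silently misses the record.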

Are these in fact the same product? That depends on the application. If the vulnerable product were a throwaway product used in the insurance industry, the match might be considered perfect. On the other hand, if the vulnerable product was an electronic relay that could, if compromised, open a circuit breaker and black out a large section of Manhattan, this might not be considered a match at all.

In other words, due to the arbitrary nature of the fields included in CPE names, such as “vendor” and “product name” (both of which can vary substantially, even when the same product is being described), there will always be uncertainty in creating a CPE name. This means that two people could follow the CPE specification exactly, yet create different valid CPE names for a single software product. The NVD has reserved the right for their staff members to create CPE names for vulnerable products described in the text of new CVE records and add them to the records (a process called “enrichment”); however, there is simply no way for a user to predict which values the staff member chose for the fields of the CPE name they created.

This arbitrariness, along with other serious problems[ii], makes it close to impossible to fully automate the process of looking up software vulnerabilities in the NVD. In other words, someone searching the NVD for vulnerabilities that affect a particular product must guess the values for the fields used by the NVD staff member when they created the CPE name for that product. There is no way to be 100% certain that a product in the real world corresponds to a product described in a CVE record, unless they have identical CPE names.
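This is why exact identifiers matter so much for automation. The NVD’s public CVE API (version 2.0) can filter results by CPE name; the sketch below only constructs such a query URL rather than sending it. The endpoint path is NVD’s documented one, but the CPE name is the hypothetical product from the example above:

```python
# Hedged sketch: building an NVD CVE API 2.0 query that filters by CPE name.
# The query only matches records that were enriched with this exact CPE name.
from urllib.parse import urlencode

NVD_CVE_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_query_url(cpe_name: str) -> str:
    """Return the API URL that asks for CVEs affecting this exact CPE name."""
    return NVD_CVE_API + "?" + urlencode({"cpeName": cpe_name})

print(nvd_query_url("cpe:2.3:a:xyz:product_a:2.74:*:*:*:*:*:*:*"))
```

If the searcher’s guess at the vendor or product field differs from what the NVD staff member entered, this query returns nothing, even when a matching record exists.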

But that isn’t the worst problem with CPE. The biggest is that, since February 2024, the NVD has drastically neglected its responsibility to create CPE names and add them to new CVE records. As a result, more than 50% of CVE records created since that date don’t include a CPE name for the affected product(s) listed in the record.[iii]

The problem with this is straightforward: a CVE record that doesn’t include a CPE name for the vulnerable product isn’t visible to an automated search, since CPE is currently the only machine-readable software identifier supported by the CVE Program and the NVD.[iv] Without a CPE name, the user will have to search through the text of over 300,000 CVE records, and even then there is no such thing as a certain identification (remember “XYZ” vs. “XYZ, Inc.”?).

This is compounded by the fact that the NVD displays the same message, “There are 0 matching records”, both when a product truly has no reported vulnerabilities and when the product has many reported vulnerabilities whose records lack CPE names. Of course, human nature dictates that most people seeing that message will assume the former interpretation is correct, when it might well be the latter.

You may wonder why I’m pointing this out as a serious problem for the CVE Program, when it is mostly the NVD’s fault (and the NVD is in a different department of the federal government). The problem is that, given the over 300,000 CVE records today - and the fact that new records are being added at an increasing rate (last year, 40,000 were added, vs. 28,800 in 2023) - it is impossible to perform truly automated vulnerability management. I define that as a single process that goes through an organization’s software inventory, looks up all those products in the NVD or another vulnerability database, and identifies all open vulnerabilities for those products (the next action would be remediation, or at least bugging the supplier to patch the vulnerabilities; that part can’t be fully automated).
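That “single process” can be sketched in a few lines. The vulnerability database here is a toy in-memory dict keyed by CPE name, standing in for a real NVD query, so the loop is self-contained; the inventory entries are hypothetical:

```python
# Hedged sketch of automated vulnerability management: walk a software
# inventory, look up each product in a vulnerability database, and collect
# the open vulnerabilities for each one.

inventory = [
    "cpe:2.3:a:xyz:product_a:2.74:*:*:*:*:*:*:*",
    "cpe:2.3:a:acme:widget:1.0:*:*:*:*:*:*:*",
]

vuln_db = {
    "cpe:2.3:a:xyz:product_a:2.74:*:*:*:*:*:*:*": ["CVE-2025-12345"],
    # A CVE record that was never enriched with a CPE name isn't keyed here
    # at all -- which is exactly why it's invisible to this kind of loop.
}

def find_open_vulns(inventory, vuln_db):
    """Return {cpe_name: [CVE IDs]} for every inventory item with known CVEs."""
    return {cpe: vuln_db[cpe] for cpe in inventory if cpe in vuln_db}

print(find_open_vulns(inventory, vuln_db))
# {'cpe:2.3:a:xyz:product_a:2.74:*:*:*:*:*:*:*': ['CVE-2025-12345']}
```

The loop itself is trivial; the hard part, as discussed above, is that the lookup only works when the record contains a machine-readable identifier that matches the one in the inventory.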

A vulnerability record without a machine-readable software identifier isn’t complete; it’s like giving somebody a car without a steering wheel. Until the CVE Program can ensure that every CVE record has a reliable identifier for any affected product described in the text of the record, it will receive a grade of “Incomplete” from me.

If you would like to comment on what you have read here, I would love to hear from you. Please comment below or email me at tom@tomalrich.com.


[i] While it would certainly be better to specify a version range in a CVE record than just enumerate affected versions, in fact version ranges are a very difficult problem, as I discussed in this post. It is fairly easy to specify a version range in a CVE record, but, unless the end user has a way of utilizing that range as part of an automated vulnerability management process in their environment, it’s useless to include it in the record in the first place.

[ii] Some of CPE’s problems are described in detail on pages 4-6 of this 2022 white paper on the software identification problem. It was written by the OWASP SBOM Forum, a group that I lead.

[iii] The NVD has somewhat improved their record for enrichment, but it seems a lot of their recent effort isn’t being well directed.

[iv] That will change when the CVE Program starts supporting the purl identifier, although the NVD might not support purl right away (other vulnerability databases probably will support it).
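For readers unfamiliar with purl (“package URL”): it identifies a package by the ecosystem that distributes it, e.g. pkg:npm/lodash@4.17.21, which sidesteps the vendor-name guessing problem described above. This is a minimal sketch covering only the simple type/name@version form; the full specification also allows a namespace, qualifiers, and a subpath:

```python
# Hedged sketch: parsing the simplest form of a purl identifier.
# Real-world code should use a full implementation of the purl spec.

def parse_simple_purl(purl: str) -> dict:
    """Split pkg:type/name@version into its parts (no namespace/qualifiers)."""
    scheme, _, rest = purl.partition(":")
    if scheme != "pkg":
        raise ValueError("purls start with the 'pkg' scheme")
    pkg_type, _, name_version = rest.partition("/")
    name, _, version = name_version.partition("@")
    return {"type": pkg_type, "name": name, "version": version or None}

print(parse_simple_purl("pkg:npm/lodash@4.17.21"))
# {'type': 'npm', 'name': 'lodash', 'version': '4.17.21'}
```

Because the package manager’s own coordinates are the identifier, two people naming the same npm or PyPI package will produce the same purl, which is not true of CPE.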

Wednesday, July 30, 2025

I’m moving to Substack!

I’ve just started a new blog on Substack called “Tom Alrich’s blog, too”. From now on, all my new posts will appear there; they will only occasionally appear in this blog (which is on the Blogspot platform). I decided that I can’t continue to produce new posts without either charging for access or including advertising, and I really don’t want to have advertising.

I will put up posts for free on both platforms until August 11; after that, new posts will appear only on Substack, apart from an occasional post here. However, this blog (on Blogspot) will continue, since I don’t want to remove the 1200+ posts that I put up between January 2013 and today (although I intend to copy them all into Substack as well). As you may know, I link to previous posts very often; all those links would need to be changed if my previous posts were removed from Blogspot.

A subscription to the Substack blog costs $30 per year (or $5 per month); anyone who can’t pay that should email me. These are the minimum amounts I can charge on Substack; note that the entire subscription fee is passed on to me. There is also a Founders subscription plan at $100 for the first year. I hope you’ll consider signing up for that if you have appreciated my posts so far. Note that after August 11, people who have chosen the free signup option on Substack won’t be able to read my new posts, unless they upgrade to a paid subscription.

As you may know, Substack has become the premier blogging platform (not just for textual blogging like I do, but video and audio posts as well). It provides me good information on how my posts are being received, as well as other capabilities like a group chat for all subscribers (paid and free). I hope that will become a lively forum (there have been some lively discussions around my posts in LinkedIn, but not enough for my taste). The important feature of my Substack chat is that anybody will be able to post a question to the whole group; they won’t have to wait for a post that somehow touches on that question.

To make a long story short, if you wish to continue receiving my posts after August 11 – which I hope you will! – please sign up for a paid subscription in Substack at the link above.

Note: If you normally read my posts by clicking on the link I post in LinkedIn, you will still be able to do that. However, after August 11 you will only be able to read the occasional post that I put up on Blogspot, rather than all of my posts, which I will put up on Substack. Please sign up for a paid subscription in Substack.

If you would like to comment on what you have read here, I would love to hear from you. Please comment below or email me at tom@tomalrich.com. 

 

Tuesday, July 29, 2025

This is probably the worst idea I’ve heard regarding the future of CVE.


Note from Tom: I’ve just started a new blog on Substack called – get ready for this – “Tom Alrich’s blog, too” (I’m afraid I won’t win any award for creative blog naming). From now on, all my new posts will appear there; they will only occasionally appear in this blog. A subscription to the Substack blog costs $30 per year, although anyone who can’t pay that should email me. There is also a Founders subscription plan at $100 for the first year.

This blog will continue, mainly as the free repository for the 1200+ posts that I put up between January 2013 and two weeks from today; I’m giving two weeks for everybody reading this to subscribe to the Substack before that becomes the only source for my new posts. My most recent 50 (or so) posts are now on Substack as well, but there was a technical issue preventing my importing all 1200. I expect that will be resolved. However, I will leave all the 1200 posts in this blog, both because they were free originally and I don’t want to change that, and because I link to previous posts so frequently that it would be a nightmare to change the thousands of existing Blogspot links to Substack links.

Within the next two weeks, please go to the Substack link above and become a paid subscriber to the new blog, so you'll continue to be able to see my new posts (whether by email or by going to the Substack site itself). 

 

I’ve written a lot about the travails of the CVE Program, which for about 24 hours in April looked like it might disappear from the face of the earth. However, I’m pleased to report that the future of the Program looks bright – although there’s a significant cloud on the horizon that needs to be addressed (more on that in one of my next posts).

The reason why I’m optimistic about the future of the CVE Program is the CVE Foundation, which was in the process of being formed before the crisis in April, but is now on the way to becoming a solid nonprofit organization (it’s now led by my friend Pete Allor, former Director of Product Security for Red Hat and still an active member of the CVE Board. Pete has been active in the CVE Program since the CVE concept was introduced in 1999). I will elaborate on this point in another near-term post.

The CVE Foundation recently announced the publication of a white paper called “Ensuring the Longevity of the CVE Program” by the Center for Cybersecurity Policy and Law. While the paper has some good background information, I didn’t find it particularly new or inspiring – although I must admit I believe that no large marine mammals were harmed in writing the paper.

However, there was one quasi-suggestion in the paper that I think is quite dangerous. If it were followed (which I’m sure it won’t be), it could cause serious long-term damage to the CVE Program. On page 8, in a section suggesting possible funding sources, there’s a bullet point that reads, “Private sector - Vendors worldwide use CVE as the standardized form of vulnerability management, so the private sector should be considered as a funding source. Questions around vendor funding to gain leverage in modifying the CVE priority agenda would need to be addressed.” (emphasis mine)

The last sentence seems to say two things:

1. Vendors will likely offer funding to the CVE Program on the condition that they “gain leverage” in helping the Program decide what should be its priorities in improving the program or the CVE Schema (which is at the heart of the CVE Program). I totally agree that vendors will request this. They wouldn’t be doing their jobs if they didn’t request it.

2. The CVE Program should consider how they will respond to those requests. I have a suggestion for how the program should respond: It should say no. I can’t think of a better way for the program to damage its reputation than for it to even consider accommodating these requests.

The CVE Program is always considering new projects. I doubt there’s any serious user of CVE data that couldn’t rattle off ten things the program should do[i]. When somebody’s pet project is put off for a year or two (or even longer than that), they have reason to be unhappy. However, if it becomes known that priority in the project queue is a commodity that’s now available to the highest bidder, that will make a lot of people very unhappy. And justifiably so.

If you would like to comment on what you have read here, I would love to hear from you. Please comment below or email me at tom@tomalrich.com.


[i] For almost a year, I’ve been pushing one project: adding purl as a second possible software identifier in CVE records, besides just CPE. Fortunately, it looks like that may happen - or at least start to happen - by the end of the year.

Sunday, July 27, 2025

Do we still need to worry about a big cyberattack on the power grid?


Note from Tom 7/27: Kevin Perry, retired Chief CIP Auditor of the SPP Regional Entity and co-leader of the NERC Standards Drafting Team that drafted CIP versions 2 and 3, clarified my amateurish electrical engineering musings at three different points in this post. I have created a footnote for each of his observations.

Last Friday, the typically free-flowing meeting of a group I lead, the OWASP SBOM Forum, got onto the subject of grid cyberattacks. One of the members of the group put a link to a 2020 Wired article about the Aurora Generator Test at Idaho National Laboratory in 2007 in the chat.

I remember the uproar created by the Aurora test and how it changed the popular perception of the power grid. Previously, the grid had mostly been perceived as something that’s sturdy and stable, but not very interesting. After the Aurora test, it was increasingly considered to be something that’s quite interesting, but at the same time highly vulnerable to cyberattacks. People started thinking that grid attacks were such a big threat that it was almost inevitable that one would cause a huge outage that puts us back in the Stone Age.

The Aurora test was dramatic – after all, who doesn’t love seeing a large machine explode? However, the full story is more complicated than that. The Aurora test fell far short of demonstrating that the grid is highly vulnerable to cyberattacks; in fact, the grid is far less vulnerable than almost any other part of our critical infrastructure.

Here’s some background: As you may know, the power grid is based on alternating current (AC). That means the voltage at any point in the grid (say, a certain point on a power line) swings between its minimum and maximum values a certain number of times per second: approximately 60 in the US and 50 in Europe. This rate, measured in Hertz (cycles per second), is referred to as the frequency of the grid.

Generators that run on fossil fuels (coal, natural gas, oil, etc.) produce AC power.[i] However, a generator can’t be connected to the grid if its frequency doesn’t closely match the grid’s, since the generator can be damaged if that happens. Very small deviations are usually acceptable, but even a frequency of 59 or 61 Hertz might be unacceptable.[ii]

This is why most generators are protected by a device called a protective relay. This is installed between the generator and the line that is connected to the grid. The relay senses the frequency of the generator and compares it to the grid’s frequency (which also varies, but normally by very little). If the difference exceeds some predetermined value, the relay commands a circuit breaker to open (disconnect) the line until the difference comes back within the tolerable range. The relay and breaker are normally installed in a switching yard outside of the generating facility.
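The core of that protection logic is a threshold check, which can be sketched as a toy model. As one of Kevin Perry’s footnotes to this post explains, real synchronism checks actually compare phase angle rather than raw frequency, and real relays are far more sophisticated, so treat this (and its numbers) purely as an illustration of threshold-based protection:

```python
# Hedged toy model of the relay behavior described above: open the breaker
# when the generator's frequency drifts too far from the grid's.
# The 0.5 Hz tolerance is illustrative, not a real relay setting.

def breaker_should_open(gen_hz: float, grid_hz: float, tolerance_hz: float = 0.5) -> bool:
    """True when the frequency difference exceeds the tolerable range."""
    return abs(gen_hz - grid_hz) > tolerance_hz

print(breaker_should_open(60.02, 60.00))  # False: within tolerance, stay connected
print(breaker_should_open(61.00, 60.00))  # True: open the breaker
```

The Aurora attack described next works by subverting exactly this kind of check, forcing reconnection when the relay should be blocking it.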

The Aurora attack starts by causing the relay to open the line; when that happens, the generator speeds up (like an engine revving when the clutch is pushed in on a moving car) and gets out of sync with the grid. Normally, the relay would prevent re-connection until the generator and the grid were synchronized again. However, the Aurora attack forces reconnection anyway. This results in a huge amount of torque being applied to the generator shaft, which causes physical damage; this cycle is repeated until the generator stops working due to the damage. It’s almost the equivalent of throwing a car driving on a highway into reverse without first coming to a stop.

The attack can be executed either by purely cyber means (in which case the attacker could be located remotely) or by a combination of physical and cyber means (in which case someone needs to be onsite to perform certain required physical actions, even if there is also a remote cyber attacker).

Of course, the INL test used entirely cyber means. However, as this excellent article by Schweitzer Engineering Labs (SEL)[iii] describes, a number of conditions need to be met before an attack using purely cyber means can succeed. For example, several protection measures in the relay were missing during the test, even though they would normally be expected to be in place (one, called “synchronism check on the tie breaker”, was in place on the relay previously but was disabled before the test).

In addition, as described on page 3 of the article, the test attack would only have succeeded in the real world if several obvious security breaches had occurred. For example, the data in a communications channel had to be left unencrypted (an unlikely occurrence today, although probably more likely in 2007) and the channel had to be breached by the attackers. Also, the attackers needed to know either one or two passwords controlling access to the protective relay settings. Finally, in real life the relay would have notified the SCADA operator – who can “see” all relays - of the change in access privileges to its settings, presumably leading to discovery of the attack.[iv]

In other words, the likelihood that a purely cyber attack based on Aurora would have succeeded in a real-world situation is small, especially today, 18 years after the test. This is partly because of the publicity that resulted from the test: cybersecurity practices are much stronger in the power industry than they were then. In fact, the NERC CIP cybersecurity standards only came into effect starting in 2009, and the voluntary NERC standard that was in effect at the time of the test, Urgent Action 1200, didn’t apply to generation at all. Thus, the fact that the test was run and widely publicized, even though it was flawed, undoubtedly resulted in increased grid security.

It’s possible that a physical Aurora attack (which would have to be conducted by someone positioned at the “tie breaker” in the switching yard) might have a better chance of succeeding, but that obviously requires the hackers to get into the switching yard. Switching yards and generating plants are usually under heavy security (although probably not if the generator is a small one like the 2.25 megawatt diesel generator used in the test at INL. Of course, a successful attack on a 2.25MW generator is unlikely to cause much disturbance in the power grid). Unless the attackers have managed to bribe an employee of the company that operates the generator being attacked to let one of them accompany the employee into the yard, it’s very unlikely they could ever be in a position to carry out the physical attack.[v]

Thus, I can safely say that nobody needs to stay up late at night worrying that the next morning their lights won’t work due to an Aurora attack on the generation facility that powers their neighborhood. In fact, any attack on a single generator - even a generator in a huge plant like the Grand Coulee Dam, the largest power source in North America - is unlikely to lead to anything more than a local outage of a couple of hours; it certainly won’t cause a cascading outage like the 2003 Northeast blackout. This is because there's all sorts of redundancy built into the grid, so that no single generation failure - or even two or three simultaneous failures - can have a serious impact, or even any impact at all.

So what’s an out-of-work grid attacker to do? If he wants to have a big impact, he needs to physically attack multiple strategic high voltage Transmission substations simultaneously – at least 7-8 of them, although more would be better. However, the attacker would have to know exactly which ones to attack (I’ve heard there are between 7 and 15 substations that would need to be attacked to have a serious impact on the grid. However, don’t expect me to publish a list of them, if I ever run across one).

Moreover, the attackers would need to simultaneously conduct a Metcalf-style physical attack on each substation, yet at the same time avoid the mistakes that the Metcalf attackers (who have never been identified, let alone caught) made. In fact, since the only good definition of “substation” that I know of (there is no NERC Glossary definition) is “a bunch of expensive equipment surrounded by a fence”[vi], it should be clear that there’s no way to launch a purely cyber attack on a substation: there’s no central piece of equipment like the generator in a generating plant, and the devices in the substation aren’t usually on a single network.

Most importantly, the attacker would first need to go back in time at least 5-6 years, before the CIP-014 standard for physical security of substations - which was developed in response to the Metcalf attack - came into effect. This is because CIP-014 is probably one of the most effective NERC standards ever developed.

In fact, after there were initial concerns that only a couple hundred substations would be declared in scope for CIP-014, well over 1,000 were declared (NERC says there are about 25,000 Bulk Electric System substations all told. BES substations are all either low or medium impact. Only a subset of the latter are in scope for CIP-014). The power industry was clearly worried that the Metcalf attack was just a test run for The Big One, so they invested a lot of money in CIP-014 compliance (including measures like ballistic barriers around substations).

In short, I think it’s close to impossible for a cyber or physical attack based on Aurora, or frankly any other cyber vulnerability, to succeed in causing an outage of any size, let alone a cascading outage. There has never been an outage of any size caused by a cyberattack in North America.

If you want to worry about a grid attack that would have a huge impact, I suggest you read about EMP attacks, in which a nuclear weapon is detonated at high altitude above the US. Such an attack could conceivably fry most of the large transformers on which the grid depends – and which take a year or so to replace. However, a massive solar storm (or the explosion of a large meteor, like the one that did in the dinosaurs) could produce similar devastation.

In fact, the real cause for worry is any prolonged (say, more than two weeks) and widespread outage (say, over several states), no matter what its cause (like a hurricane more massive than Sandy). This could result in hundreds or thousands of people dying and civil order breaking down. The fact that there is such a minuscule likelihood that this could be caused by a cyber or physical attack doesn’t mean it’s a waste of time and money to harden the grid against such attacks. After all, risk = likelihood X impact. A small likelihood times an unimaginably large impact[vii] still yields a high risk.

My blog is more popular than ever, but I need more than popularity to keep it going. I’ve often been told that I should either accept advertising or put up a paywall and charge a subscription fee, or both. However, I really don’t want to do either of these things. It would be great if everyone who appreciates my posts could donate a $20 (or more) “subscription fee” once a year. Will you do that today?

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Renewable power generating devices like wind turbines and solar panels produce direct current (DC). The DC current is changed to AC before being sent out over the power grid. The device that effects this transformation is an inverter. The Aurora vulnerability does not apply to such devices.

Kevin Perry pointed out that wind turbines and solar panels operate differently from an electrical point of view: “Solar panels produce DC power, which has to go through an inverter to be delivered to the grid.  However, the wind turbine, which is a rotating machine, is an induction motor that generates AC power.  But, the speed of the turbine, which varies with wind speed, affects frequency, so the generated power goes through a rectifier to convert it to DC and then an inverter to convert it back to AC at a stable frequency.  It is essentially the same way an uninterruptible power supply works.”

[ii] Kevin corrected what I said in this paragraph by saying, “The generator is not sync’d based on frequency.  It is sync’d based on the phase angle of the generator versus the grid.  If I recall, and I don’t profess to be an engineer, the phase angle, also called phase shift, is the time lag between voltage and current, whereas frequency is the number of cycles per second, or how many times the current changes direction per second.  The protection relay is normally set to prevent connection to the grid unless the phase angle is within a couple degrees of zero.  The relay was deliberately misconfigured by the hack to change the connection phase angle to 120 degrees, which causes the worst torque.”

[iii] SEL is the largest manufacturer of electronic relays worldwide; the relay used in the test may have been theirs. While I have always found SEL staff members to be honest and above board, the fact that the authors were SEL employees should be kept in mind when reading the article.

[iv] After pointing out that good security practices on the protective relay would have prevented the test from succeeding, the article continues with a discussion of recommended controls for the generator that also were not in place during the test.

[v] Regarding this paragraph, Kevin stated, “As you noted, an intruder can manually cause the damage by operating the breaker panel T handle in the substation control house, not at the breaker itself.  But, it doesn’t have to be the generator switchyard.  Any station along the generator lead line offers a connect/disconnect point.”

[vi] This is my definition, in case you hadn’t guessed.

[vii] Ted Koppel put out a good book about this problem, Lights Out, in 2016. It's supposedly about what would happen if a cyberattack caused a prolonged, widespread outage, but it's really about what would happen no matter what the cause of the outage; it’s frightening, but well researched and an easy read.

Monday, July 21, 2025

NERC CIP in the cloud: Is multi-tenancy a problem? If so, what should we do about it?

In this recent post, I noted that the “Project 2023-09 Risk Management for Third-Party Cloud Services” standards drafting team (SDT) has started drafting requirements for what I’m calling the “Cloud CIP” standards; this is the set of CIP standards, both new and revised, that will enable NERC entities with high and medium impact CIP environments to make full use of cloud computing services for their systems (BCS, EACMS, and PACS) that are in scope for CIP compliance.[i]

In the post, I described two types of requirements that are under discussion (or will be at some point). The first type covers controls that are probably already addressed by ISO 27001 and FedRAMP. It can safely be assumed that every major cloud service provider holds an ISO 27001 certification and a FedRAMP authorization (the latter applies directly only to federal agencies, but it can still be taken as a general assessment of the CSP’s security controls). These are controls that apply to both on-premises and cloud-based systems, such as patch management, configuration management and vulnerability management. (The SDT is tentatively gathering new requirements in a draft standard called CIP-016, but this doesn’t mean there won’t be other new standards.)

I recommended in the previous post that the SDT go through the requirements of ISO 27001 and identify any that are worth including as new CIP requirements. This might include requirements that match some or all of the current CIP requirements, but they might go beyond those as well. Of course, like all CIP requirements, these new requirements will apply to NERC entities, not directly to the CSP(s). However, the CSPs will perform all of the activities required for compliance.

To evaluate their CSP’s compliance with new CIP requirements that are based on ISO 27001, the NERC entity will need to request the CSP’s ISO 27001 audit report, as well as whatever compliance documentation the CSP provides for FedRAMP (as far as I know, the CSP should always be able to provide these items). If the entity discovers a negative finding for an ISO 27001 or FedRAMP requirement that corresponds to a new CIP requirement, it should ask how the CSP is addressing, or has already addressed, that finding, and track the CSP’s progress in mitigating it.
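The review step just described boils down to a cross-reference: map each ISO 27001 or FedRAMP control to the corresponding new CIP requirement, then check the CSP’s negative findings against that map. Here is a minimal sketch; every control ID and requirement name in it is invented for illustration, since the actual new CIP requirements don’t exist yet:

```python
# Hypothetical cross-reference from audit-framework controls to new CIP
# requirements. All identifiers below are invented for illustration.
CONTROL_TO_CIP = {
    "ISO27001-A.8.8": "CIP-016 R1 (vulnerability management)",
    "ISO27001-A.8.9": "CIP-016 R2 (configuration management)",
    "FedRAMP-RA-5":   "CIP-016 R1 (vulnerability management)",
}

def findings_to_track(negative_findings):
    """Return the audit findings that map onto a CIP requirement and so
    need follow-up with the CSP, per the process described in the post."""
    return {
        control: CONTROL_TO_CIP[control]
        for control in negative_findings
        if control in CONTROL_TO_CIP
    }
```

A finding against a control with no CIP counterpart simply drops out of the tracking list; only findings that touch a mapped requirement need the follow-up conversation with the CSP.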

However, the NERC entity should not do their own investigation of the CSP’s compliance status or even ask the CSP to fill out a questionnaire. Instead, they should content themselves with reviewing what the auditing organization (often called a Third Party Assessment Organization or 3PAO) included in the audit report. If the entity asks to do their own assessment, the CSP will almost certainly refuse to allow that – and rightfully so, in my opinion. The 3PAO probably brought in a small army of auditors and charged a lot of money for the audit; the last thing the CSP wants is to have 100 NERC CIP customers each demanding to do their own audit with slightly different interpretations of the requirements.

The second type of requirement described in the previous post addresses risks that are mostly or entirely found in the cloud. I provided three examples of these controls in that post, but I want to focus now on the first of them: what I call the “multi-tenancy problem”. I have written about this problem in two posts, the more recent of which appeared a little more than a year ago.

This problem only comes up when different organizations use a single software product (and, more specifically, the database associated with that product). While such a product is not necessarily deployed in the cloud, that is almost always how the problem is encountered today; software delivered this way is commonly referred to as SaaS, or software-as-a-service. In the post I just linked, I explained the problem this way (I received the author’s permission to make some minor changes to the wording):

(The problem is due to) the fact that software that was originally designed for on-premises use by a single organization is now available for use in the cloud by organizations of all types; these can be located all over the world. Because of the huge economies of scale that can be realized through moving to the cloud, many software developers are moving their on-premises systems there. In fact, in an increasing number of cases, the software is now, or soon will be, only available in the cloud.

When software originally written for on-premises use is made available in the cloud, a big question that often arises has to do with the customer database. It's safe to say that most SaaS applications store some customer data. When most software applications were used exclusively on premises, the database was almost always built on the assumption that it would be used either by a single organization or by a related group of organizations (e.g. the international subsidiaries of one company). It was assumed there would almost never be a case where a single database installed on the premises of one organization was used by organizations all over the world and in many different industries.

However, this is exactly what can happen, and is happening now, with many SaaS applications, since they’re often based on the on-premises version of the software. Even though a SaaS application is probably doing a good job of protecting each organization’s data from access by other organizations, the fact that there might be many different types of organizations, from potentially many different countries, utilizing the same database is enough to give some organizations the willies.

This is especially true for critical infrastructure (CI) organizations like electric utilities and independent power producers. When using shared services like SaaS, those CI organizations are always concerned that, if another organization using the same database doesn’t have good security, they can become the vector for attacks on organizations that do.

In the post, I went on to ask (again, with paraphrasing), “Is this really a problem? After all, databases and the applications that use them have a huge array of security controls at their disposal. In fact, users of an application usually have no direct access to the database, even though their data are stored there.”

Of course, I’m sure there are plenty of people reading this post who could argue quite convincingly that any multi-tenant database needs to be considered insecure until proven otherwise. On the other hand, there are many others (including me) who are willing to mostly concede the security point, while asserting that if we prohibit multi-tenant SaaS databases, we will effectively prohibit most SaaS, period.

The fact is that most SaaS offerings would be prohibitively expensive if the provider had to deploy a separate instance of the software for every customer. For example, if a standalone software product currently has 10,000 customers, think how expensive it would be to deploy and maintain 10,000 separate SaaS instances of that product. However, it’s not true that the only alternative to giving every customer their own instance is to house all 10,000 customers’ data in a single database instance. There are ways to group customers by country, region, industry, security controls, etc. This would lower the number of customers per instance, while also decreasing the likelihood of one customer accessing another customer’s data.
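The grouping idea in the last paragraph can be sketched as simple shard assignment: tenants that share a grouping key land in the same database instance, so no instance mixes unrelated tenant populations. The keys (country and industry) and the shard size below are my own illustrative assumptions, not any real provider’s scheme:

```python
from collections import defaultdict

def assign_shards(tenants, max_per_shard=500):
    """Group tenants by (country, industry), then split each group into
    shards of at most max_per_shard tenants. The grouping keys and the
    shard size are illustrative assumptions only."""
    groups = defaultdict(list)
    for t in tenants:
        groups[(t["country"], t["industry"])].append(t["name"])
    shards = {}
    for (country, industry), names in groups.items():
        for i in range(0, len(names), max_per_shard):
            shard_id = f"{country}-{industry}-{i // max_per_shard}"
            shards[shard_id] = names[i:i + max_per_shard]
    return shards
```

Under this scheme, two US electric utilities would share an instance with each other, but never with, say, a foreign bank; the provider still runs far fewer instances than one per customer.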

Of course, I don’t think it should be a heavy lift for the Cloud CIP standards to require that organizations that must comply with one or more NERC CIP standards not share a SaaS database with entities, whether public or private, that are based in “bad actor” countries like North Korea, China, Iran, Myanmar, etc. But should we go beyond that requirement? Counting that restriction as the first idea, here are two more:

2.      Require that organizations that must comply with one or more NERC CIP standards not share a SaaS database with organizations, public or private, that are not subject to mandatory cybersecurity regulations (and not just data privacy regulations).

3.      Require that NERC entities in the US, Canada and Mexico with high and/or medium impact CIP assets only share a SaaS database with other NERC entities with high and/or medium impact CIP assets.[ii]

Who can solve this problem, and how will they solve it? This needs to be addressed in the same way that similar cybersecurity problems without clear solutions have been addressed by previous NERC CIP SDTs: through back-and-forth discussion at the SDT meetings until a compromise is reached that (almost) everyone can live with. The result might be adoption of one of the three requirements just mentioned, but it might also be simply a decision not to address multi-tenancy in the CIP requirements at all.

However, the SDT needs to have a conversation about multi-tenancy, as well as other risks that apply only in cloud environments. These are the questions that need to be answered before many NERC entities will feel comfortable using the cloud for OT purposes.

My blog is more popular than ever, but I need more than popularity to keep it going. I’ve often been told that I should either accept advertising or put up a paywall and charge a subscription fee, or both. However, I really don’t want to do either of these things. It would be great if everyone who appreciates my posts could donate a $20-$25 (or more) “subscription fee” once a year. Will you do that today?

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I also noted in that post that I think it’s a mistake for the SDT to start drafting requirements without having at least tentative definitions for the systems that the standards will apply to.

[ii] This idea came from Kevin Perry, retired Chief CIP Auditor of the SPP Regional Entity and co-leader of the drafting team that developed CIP versions 2 and 3. My post on multi-tenancy from 2024 included a quote from Kevin pointing out that many electric utilities already utilize the OASIS SaaS application for Transmission system reservations, where they share a single database with other OASIS users. This might be considered a hybrid of the second and third options.

Friday, July 18, 2025

The NVD can fix their problem if they want to

Bruce Lowenthal, Senior Director of Product Security for Oracle, has been following the ups and downs (mostly the latter) of the National Vulnerability Database (NVD) since February 2024. On the 12th of that month, the NVD, without warning, almost completely stopped creating CPE (“Common Platform Enumeration”) identifiers for the vulnerable products identified in new CVE records. It’s no exaggeration to say that creating CPE names is one of the most important things the NVD does.

CVE records are vulnerability reports prepared by CVE Numbering Authorities (CNAs). Oracle is one of the largest CNAs in terms of number of CVEs reported, so Bruce’s interest in the NVD and the CVE Program isn’t just academic. (I briefly explained how CVE.org and the NVD work, as well as why this problem is so serious, in this post last December; a second post added to the first, but isn’t essential reading.)
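For readers unfamiliar with CPE, a CPE 2.3 name is a colon-delimited string that identifies a product; once a CVE record has been “enriched” with CPE names, a search on the product will find it. Here is a minimal sketch of building and (naively) matching such names. The vendor and product values are made up, and real CPE matching (NIST IR 7696) has more rules than this:

```python
def make_cpe23(part, vendor, product, version="*"):
    """Build a CPE 2.3 formatted-string name. 'part' is 'a' (application),
    'o' (OS), or 'h' (hardware); unused fields are wildcards."""
    return f"cpe:2.3:{part}:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

def cpe_matches(search_cpe, record_cpe):
    """Naive field-by-field match: '*' in either name matches anything.
    This is a sketch, not the full NIST matching algorithm."""
    a, b = search_cpe.split(":"), record_cpe.split(":")
    return len(a) == len(b) and all(
        x == y or x == "*" or y == "*" for x, y in zip(a, b)
    )
```

Searching on a version-wildcard name like `make_cpe23("a", "examplecorp", "widget")` will find a record listing any specific version of that product; this is why a CVE record with no CPE name at all is effectively invisible to such searches.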

Despite various NVD promises to have the problem fixed last year, it only got worse. In fact, at the end of December it seemed the NVD might be about to give up creating new CPE names altogether. By March, that outcome seemed, if anything, even more likely.

However, a little more than two weeks ago I asked Bruce for an NVD update, and he painted a different picture. The good news: in the last few months, the NVD has picked up its pace of adding CPE names to CVE records that don’t currently have them. The bad news: it is wasting most of that effort by creating CPEs for CVE records that are more than three months old.

The big problem with this practice is that most suppliers patch new vulnerabilities within two or three months. This means the CVE record is usually out of date when NIST adds the CPE name to it; the CVE can be discovered by a search using the product’s CPE name, but the product is no longer affected by the CVE – as long as the user has applied the patch the supplier provided.

Yesterday, Bruce emailed me an update: the good news is better and the bad news is worse. That is, the NVD seems to be “enriching” (i.e., adding a CPE name to) more CVE records than at any time since February 2024; but they’re still concentrating most of their effort on vulnerabilities that are likely to be patched already, vs. ones that aren’t. Why are they doing this?

He sent me the table shown below, which lists, for every month since March of 2024, the percentage of CVE records that have at least one CPE name assigned to them (no matter when the CPE was assigned). Note that:

1.      Bruce says that in the past three weeks, NIST has assigned a CPE name to at least one CVE record published in each of the months in the table. So, despite his advice to stop updating older CVE records altogether and just focus on the most recent records, the NVD seems to want to treat all records equally, no matter when they were created.

2.      For the most recent four months (including this month, July), an average of only 36% of the new CVE records published in that month have been assigned a CPE name. On the other hand, the average for the four months starting in June of 2024 is 78%. Obviously, the NVD could have made Bruce (and a lot of his peers) happier by concentrating on recently-identified vulnerabilities, not “oldies but (not-so-)goodies”.

3.      Bruce conducted a good thought experiment. He asked, “What if, starting today, the NVD focused all of its efforts on adding CPE names to CVE records that have been created in the past six weeks?” (Remember, before February 2024, the NVD was normally adding CPE names to CVE records that had been created within the past week). He says that, by the end of August and with no increase in resources (which isn’t likely to occur anyway), the monthly percentages of new CVE records with CPE names in the table below for June, July and August would be 95% during each of those three months. Of course, were this to be done, searches of the NVD would be much more likely to identify recently created CVEs than they are today.
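Bruce’s thought experiment can be illustrated with a toy model: a fixed monthly enrichment capacity applied either newest-records-first or oldest-first. The record counts and capacity below are invented round numbers, not NVD data:

```python
# Toy model of the prioritization trade-off: a fixed monthly enrichment
# capacity applied either newest-records-first or oldest-first.
# Record counts and capacity are invented round numbers, not NVD data.

def simulate(months_new_records, capacity, newest_first=True):
    """months_new_records[i] = CVE records published in month i.
    Each month, 'capacity' records get CPE names, drawn from the
    backlog newest-first or oldest-first. Returns per-month totals
    of enriched records."""
    backlog = []                                   # [month_index, unenriched]
    enriched = [0] * len(months_new_records)
    for month, new in enumerate(months_new_records):
        backlog.append([month, new])
        order = sorted(backlog, key=lambda e: e[0], reverse=newest_first)
        budget = capacity
        for entry in order:
            take = min(budget, entry[1])
            enriched[entry[0]] += take
            entry[1] -= take
            budget -= take
            if budget == 0:
                break
        backlog = [e for e in backlog if e[1] > 0]
    return enriched
```

With 3,000 new records a month and capacity to enrich 2,000, newest-first enriches 2,000 of each month’s records in the month they appear, while oldest-first leaves the most recent months completely untouched; both policies enrich the same total number of records, but only one of them makes NVD searches useful for current vulnerabilities.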

Bruce concluded his email by saying, “This data is really interesting. It suggests that NVD can provide an acceptable (level of) service with their current resources by just changing their priorities!”  However, he added, “But the current approach probably means they will never catch up unless they get more resources or become more efficient.”

Unfortunately, in today’s Washington, the likelihood of getting more resources is small. And what’s the likelihood that the NVD will become more efficient? Given their performance over the past year and a half, I certainly wouldn’t bet the farm on it.

CPE assignment by month, starting March 2024

Month starting    Total CVEs    With CPE    Percent
2025-07-01             2,245         650        29%
2025-06-01             3,358       1,464        44%
2025-05-01             3,759       1,811        48%
2025-04-01             4,062       1,461        36%
2025-03-01             3,952       1,815        46%
2025-02-01             2,960       1,397        47%
2025-01-01             4,150       1,732        42%
2024-12-01             3,025       1,482        49%
2024-11-01             3,631       2,206        61%
2024-10-01             3,378       2,375        70%
2024-09-01             2,420       2,039        84%
2024-08-01             2,708       2,247        83%
2024-07-01             2,894       2,091        72%
2024-06-01             2,752       2,004        73%
2024-05-01             3,350       1,900        57%
2024-04-01             3,239       1,953        60%
2024-03-02             2,549       1,796        70%

