Monday, July 21, 2025

NERC CIP in the cloud: Is multi-tenancy a problem? If so, what should we do about it?

In this recent post, I noted that the “Project 2023-09 Risk Management for Third-Party Cloud Services” standards drafting team (SDT) has started drafting requirements for what I’m calling the “Cloud CIP” standards; this is the set of CIP standards, both new and revised, that will enable NERC entities with high and medium impact CIP environments to make full use of cloud computing services for their systems (BCS, EACMS, and PACS) that are in scope for CIP compliance.[i]

In the post, I described two types of requirements that are under discussion (or will be at some point). The first type is requirements for controls that are probably included in ISO 27001 and FedRAMP today. It can be assumed that all major cloud service providers have an ISO 27001 certification and a FedRAMP authorization (which applies directly only to federal agencies, but can still be taken as a general assessment of the CSP’s security controls). This includes controls that apply to both on-premises and cloud-based systems, such as patch management, configuration management, vulnerability management, etc. (the SDT is tentatively gathering new requirements in a draft standard called CIP-016, but this doesn’t mean there won’t be any other new standards).

I recommended in the previous post that the SDT go through the requirements of ISO 27001 and identify any that are worth including as new CIP requirements. These might include requirements that match some or all of the current CIP requirements, but they might also go beyond them. Of course, like all CIP requirements, these new requirements will apply to NERC entities, not directly to the CSP(s). However, the CSPs will perform all of the activities required for compliance.

To evaluate their CSP’s compliance with new CIP requirements that are based on ISO 27001 requirements, the NERC entity will need to request the CSP’s audit report for ISO 27001, as well as whatever compliance documentation the CSP provides for FedRAMP (the CSP should always be able to provide these items, as far as I know). If the entity discovers a negative finding for any ISO 27001 or FedRAMP requirement that corresponds to a new CIP requirement, the entity should inquire how the CSP is addressing, or has already addressed, that finding and track the CSP’s progress in mitigating it.

However, the NERC entity should not do their own investigation of the CSP’s compliance status or even ask the CSP to fill out a questionnaire. Instead, they should content themselves with reviewing what the auditing organization (often called a Third Party Assessment Organization or 3PAO) included in the audit report. If the entity asks to do their own assessment, the CSP will almost certainly refuse to allow that – and rightfully so, in my opinion. The 3PAO probably brought in a small army of auditors and charged a lot of money for the audit; the last thing the CSP wants is to have 100 NERC CIP customers each demanding to do their own audit with slightly different interpretations of the requirements.

The second type of cloud requirements described in the previous post is requirements for controls that address risks that are mostly or entirely found in the cloud. I provided three examples of these controls in the post, but I want to focus now on the first of these: what I call the “multi-tenancy problem”. I have written about this problem in two posts, the more recent of which was a little more than a year ago.

This is a problem that only comes up when you have different organizations using a single software product (and more specifically, the database associated with that product); while that product might not necessarily be deployed in the cloud, this is almost always how the problem is encountered today. Of course, software deployed in the cloud is now commonly referred to as SaaS or software-as-a-service. In the post I just linked, I explained the problem this way (I received the author’s permission to make some minor changes to the wording):

(The problem is due to) the fact that software that was originally designed for on-premises use by a single organization is now available for use in the cloud by organizations of all types; these can be located all over the world. Because of the huge economies of scale that can be realized through moving to the cloud, many software developers are moving their on-premises systems there. In fact, in an increasing number of cases, the software is now, or soon will be, only available in the cloud.

When software originally written for on-premises use is made available in the cloud, a big question that often arises has to do with the customer database. It's safe to say that most SaaS applications store some customer data. When most software applications were used exclusively on premises, the database was almost always built on the assumption that it would be used either by a single organization or by a related group of organizations (e.g. the international subsidiaries of one company). It was assumed there would almost never be a case where a single database installed on the premises of one organization was used by organizations all over the world and in many different industries.

However, this is exactly what can happen, and is happening now, with many SaaS applications, since they’re often based on the on-premises version of the software. Even though a SaaS application is probably doing a good job of protecting each organization’s data from access by other organizations, the fact that there might be many different types of organizations, from potentially many different countries, utilizing the same database is enough to give some organizations the willies.

This is especially true for critical infrastructure (CI) organizations like electric utilities and independent power producers. When using shared services like SaaS, those CI organizations are always concerned that an organization with poor security that uses the same database could become the vector for attacks on organizations that do have good security.

In the post, I went on to ask (again, with paraphrasing), “Is this really a problem? After all, databases and the applications that use them have a huge array of security controls at their disposal. In fact, users of an application usually have no direct access to the database, even though their data are stored there.”

Of course, I’m sure there are plenty of people reading this post who could argue quite convincingly that any multi-tenant database needs to be considered insecure unless proven otherwise. On the other hand, there are many other people (including me) who are willing to mostly concede their point about security, while at the same time asserting that if we prohibit multi-tenant SaaS databases, we will effectively prohibit most SaaS, period.

The fact is that most SaaS would be prohibitively expensive if the provider had to deploy a separate instance of the software for every customer. For example, if a standalone software product currently has 10,000 customers, think of how expensive it would be to deploy and maintain 10,000 separate SaaS instances of that product. However, it’s not true that the only alternative to giving every customer their own instance is to house all 10,000 customers’ data in a single instance of the database. There are ways to group customers by country, region, industry, security controls, etc. This would lower the number of customers per instance, while at the same time decreasing the likelihood of one customer accessing another customer’s data.
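To make the grouping idea a bit more concrete, here is a minimal Python sketch of how a SaaS provider might assign tenants to shared database instances based on attributes like country, industry and regulatory status. Everything in it (the attribute names, the capacity limit, the instance naming) is hypothetical; it illustrates the concept, not any actual provider’s architecture.

```python
# Illustrative sketch only: one way a SaaS provider could assign tenants to
# shared database instances by grouping attributes (country, industry,
# regulatory status), so that unrelated or higher-risk organizations never
# share an instance. All names and thresholds here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Tenant:
    name: str
    country: str
    industry: str
    regulated: bool   # subject to mandatory cybersecurity regulation?

MAX_TENANTS_PER_INSTANCE = 200   # hypothetical capacity limit per instance

def assign_instances(tenants):
    """Group tenants by (country, industry, regulated) and split each group
    into database instances no larger than MAX_TENANTS_PER_INSTANCE."""
    groups = defaultdict(list)
    for t in tenants:
        groups[(t.country, t.industry, t.regulated)].append(t)

    assignments = {}   # tenant name -> database instance id
    for key, members in groups.items():
        for i, tenant in enumerate(members):
            shard = i // MAX_TENANTS_PER_INSTANCE
            label = "reg" if key[2] else "unreg"
            assignments[tenant.name] = f"{key[0]}-{key[1]}-{label}-{shard}"
    return assignments

print(assign_instances([Tenant("Utility A", "US", "electric", True),
                        Tenant("Retailer B", "US", "retail", False)]))
```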

Of course, I don’t think it should be a heavy lift for the Cloud CIP standards to require that organizations that must comply with one or more NERC CIP standards not share a SaaS database with entities, whether public or private, that are based in “bad actor” countries like North Korea, China, Iran, Myanmar, etc. But should we go beyond that requirement? Here are two other ideas (counting the “bad actor country” requirement as the first):

2.      Require that organizations that must comply with one or more NERC CIP standards not share a SaaS database with organizations, public or private, that are not subject to mandatory cybersecurity regulations (and not just data privacy regulations).

3.      Require that NERC entities in the US, Canada and Mexico with high and/or medium impact CIP assets only share a SaaS database with other NERC entities with high and/or medium impact CIP assets.[ii]

Who can solve this problem, and how will they solve it? This needs to be addressed in the same way that similar cybersecurity problems without clear solutions have been addressed by previous NERC CIP SDTs: through back-and-forth discussion at the SDT meetings until a compromise is reached that (almost) everyone can live with. The result might be adoption of one of the three requirements just mentioned, but it might also be simply a decision not to address multi-tenancy in the CIP requirements at all.

However, the SDT needs to have a conversation about multi-tenancy, as well as other risks that apply only in cloud environments. These are the questions that need to be answered before many NERC entities will feel comfortable using the cloud for OT purposes.

My blog is more popular than ever, but I need more than popularity to keep it going. I’ve often been told that I should either accept advertising or put up a paywall and charge a subscription fee, or both. However, I really don’t want to do either of these things. It would be great if everyone who appreciates my posts could donate a $20-$25 (or more) “subscription fee” once a year. Will you do that today?

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I also noted in that post that I think it’s a mistake for the SDT to start drafting requirements without having at least tentative definitions for the systems that the standards will apply to.

[ii] This idea came from Kevin Perry, retired Chief CIP Auditor of the SPP Regional Entity and co-leader of the drafting team that developed CIP versions 2 and 3. My post on multi-tenancy from 2024 included a quote from Kevin pointing out that many electric utilities already utilize the OASIS SaaS application for Transmission system reservations, where they share a single database with other OASIS users. This might be considered a hybrid of the second and third options.

Friday, July 18, 2025

The NVD can fix their problem if they want to

Bruce Lowenthal, Senior Director of Product Security for Oracle, has been following the ups and downs (mostly the latter) of the National Vulnerability Database (NVD) since February 2024. On the 12th of that month, the NVD, without warning, almost completely stopped creating CPE (“Common Platform Enumeration”) identifiers for the vulnerable products identified in new CVE records. It’s no exaggeration to say that creating CPE names is one of the most important things the NVD does.

CVE records are vulnerability reports prepared by CVE Numbering Authorities (CNAs). Oracle is one of the largest CNAs in terms of the number of CVEs reported, so Bruce’s interest in the NVD and the CVE program isn’t just academic. (I briefly explained how CVE.org and the NVD work, as well as why this problem is so serious, in this post last December. A second post added to the first, but it isn’t essential reading.)

Despite various NVD promises to have the problem fixed last year, the problem only got worse, not better. In fact, at the end of December it seemed like the NVD might be about to literally give up creating new CPE names. By March, that outcome seemed, if anything, to be more likely.

However, a little more than two weeks ago, I asked Bruce for an NVD update, and he painted a different picture: In the last few months, the NVD has picked up its pace of adding CPE names to CVE records that don’t now have them; that’s the good news. However, the bad news is that they’re wasting most of their efforts by creating CPEs for CVE records that are more than three months old.

The big problem with this practice is that most suppliers patch new vulnerabilities within two or three months. This means the CVE record is usually out of date when NIST adds the CPE name to it; the CVE can be discovered by a search using the product’s CPE name, but the product is no longer affected by the CVE – as long as the user has applied the patch the supplier provided.

Yesterday, Bruce emailed me an update: the good news is better and the bad news is worse. That is, the NVD seems to be “enriching” (i.e., adding a CPE name to) more CVE records than at any time since February 2024; but they’re still concentrating most of their effort on vulnerabilities that are likely to be patched already, vs. ones that aren’t. Why are they doing this?

He sent me the table shown below, which lists, for every month since March of 2024, the percentage of CVE records that have at least one CPE name assigned to them (no matter when the CPE was assigned). Note that:

1.      Bruce says that in the past three weeks, NIST has assigned a CPE name to at least one CVE record published in each of the months in the table. So, despite his advice to stop updating older CVE records altogether and just focus on the most recent records, the NVD seems to want to treat all records equally, no matter when they were created.

2.      For the most recent four months (including this month, July), an average of only about 39% of the new CVE records published in each month have been assigned a CPE name. On the other hand, the average for the four months starting in June of 2024 is 78%. Obviously, the NVD could have made Bruce (and a lot of his peers) happier by concentrating on recently-identified vulnerabilities, not “oldies but (not-so-)goodies”.

3.      Bruce conducted a good thought experiment. He asked, “What if, starting today, the NVD focused all of its efforts on adding CPE names to CVE records that have been created in the past six weeks?” (Remember, before February 2024, the NVD was normally adding CPE names to CVE records that had been created within the past week). He says that, by the end of August and with no increase in resources (which isn’t likely to occur anyway), the monthly percentages of new CVE records with CPE names in the table below for June, July and August would be 95% during each of those three months. Of course, were this to be done, searches of the NVD would be much more likely to identify recently created CVEs than they are today.

Bruce concluded his email by saying, “This data is really interesting. It suggests that NVD can provide an acceptable (level of) service with their current resources by just changing their priorities!”  However, he added, “But the current approach probably means they will never catch up unless they get more resources or become more efficient.”

Unfortunately, in today’s Washington, the likelihood of getting more resources is small. And what’s the likelihood that the NVD will become more efficient? Given their performance over the past year and a half, I certainly wouldn’t bet the farm on it.

 

CPE assignment by month, March 2024 through July 2025

Month starting   Total CVEs   With CPE   Percent
2025-07-01            2,245        650       29%
2025-06-01            3,358      1,464       44%
2025-05-01            3,759      1,811       48%
2025-04-01            4,062      1,461       36%
2025-03-01            3,952      1,815       46%
2025-02-01            2,960      1,397       47%
2025-01-01            4,150      1,732       42%
2024-12-01            3,025      1,482       49%
2024-11-01            3,631      2,206       61%
2024-10-01            3,378      2,375       70%
2024-09-01            2,420      2,039       84%
2024-08-01            2,708      2,247       83%
2024-07-01            2,894      2,091       72%
2024-06-01            2,752      2,004       73%
2024-05-01            3,350      1,900       57%
2024-04-01            3,239      1,953       60%
2024-03-02            2,549      1,796       70%
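For anyone who wants to reproduce this arithmetic, here is a minimal Python sketch that recomputes the monthly coverage percentages and the two four-month averages discussed above. The counts are simply transcribed from the table; the grouping of months is mine.

```python
# Minimal sketch: recompute per-month CPE coverage and two four-month averages
# from the table above. The counts are transcribed from the table; nothing
# else is assumed.
data = {  # month -> (total CVEs, CVEs with at least one CPE)
    "2025-07": (2245, 650),  "2025-06": (3358, 1464), "2025-05": (3759, 1811),
    "2025-04": (4062, 1461), "2025-03": (3952, 1815), "2025-02": (2960, 1397),
    "2025-01": (4150, 1732), "2024-12": (3025, 1482), "2024-11": (3631, 2206),
    "2024-10": (3378, 2375), "2024-09": (2420, 2039), "2024-08": (2708, 2247),
    "2024-07": (2894, 2091), "2024-06": (2752, 2004), "2024-05": (3350, 1900),
    "2024-04": (3239, 1953), "2024-03": (2549, 1796),
}

def pct(total, with_cpe):
    return 100 * with_cpe / total

def avg_pct(months):
    """Simple (unweighted) average of the monthly coverage percentages."""
    return sum(pct(*data[m]) for m in months) / len(months)

recent = ["2025-04", "2025-05", "2025-06", "2025-07"]
mid_2024 = ["2024-06", "2024-07", "2024-08", "2024-09"]
print(f"Most recent four months: {avg_pct(recent):.0f}% of new records have a CPE")
print(f"June-September 2024:     {avg_pct(mid_2024):.0f}%")
```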


My blog is more popular than ever, but I need more than popularity to keep it going. I’ve often been told that I should either accept advertising or put up a paywall and charge a subscription fee, or both. However, I really don’t want to do either of these things. It would be great if everyone who appreciates my posts could donate a $20-$25 (or more) “subscription fee” once a year. Will you do that today?

 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, July 16, 2025

NERC CIP: When you move your BCS to the cloud, don’t expect a lot of love from the CSP


The NERC “Project 2023-09 Risk Management for Third-Party Cloud Services” Standards Drafting Team has been meeting for more than a year. They spent the first six months revising their Standards Authorization Request (SAR), which is essentially the “charter” under which the SDT operates. They are now discussing new requirements applicable to cloud use by NERC entities.

I think it’s a little early for them to be doing this, since there are still some important issues they need to decide before they can make real progress on requirements. One of the most important of those issues is the question of what evidence a cloud service provider (CSP) will be able to provide to a NERC entity to show that the CSP is in compliance with a CIP requirement or requirement part. This isn’t a trivial question. In fact, this is the question that has prevented almost all cloud use for OT purposes by NERC entities since cloud computing ceased to be considered an experimental technology 10-15 years ago.

The biggest problem with the current CIP requirements, as far as cloud computing is concerned, is that there are multiple requirements – e.g., CIP-005 R1 Electronic Security Perimeter, CIP-007 R2 Patch Management, and CIP-010 R1 Configuration Management – that apply to individual devices (primarily BES Cyber Assets). If a NERC entity deploys high or medium impact BES Cyber Systems in the cloud under today’s requirements, the platform CSP will have to provide the entity with evidence of compliance with the wording of all applicable requirements and requirement parts, including these.

Since systems deployed in the cloud continually flit from device to device and data center to data center, no platform CSP will be able to provide compliance evidence for any of the above requirements. Doing so would require evidence that every device on which any part of the system was deployed, even for a brief period, was compliant with every applicable requirement and requirement part during the entire audit period (usually three years, although the Cloud CIP requirements might have a shorter period). Moreover, even if a CSP could somehow provide that evidence, they would without a doubt refuse to do so, since compiling it would require a huge effort. Thus, no “Cloud CIP” requirement should apply to individual devices.

There are other CIP requirements, such as most of the requirements in standards CIP-004, CIP-006, CIP-008, CIP-009, CIP-012, and CIP-013, that don’t require compliance on the individual device level; in theory, the platform CSP should be able to provide compliance evidence for these. For example, CIP-004-7 Requirement R4 Part 4.2 requires the NERC entity to “Verify at least once each calendar quarter that individuals with active electronic access or unescorted physical access have authorization records.”

Of course, CSPs likely have information available that shows which staff members have been working on which devices at which times within the last 90 days. However, it’s not likely that they can continually track all staff members who have worked on a given NERC entity’s systems during any given calendar quarter – and even if they can, they will almost certainly refuse to do so, especially since the evidence would have to be tailored to each CIP customer individually. Thus, even though this second group of CIP requirements doesn’t apply at the individual device level, CSPs are very unlikely to be willing to provide the required evidence – even though it might in principle be possible to do so.

In other words, while the electric power industry is used to having some service provider organizations, like SaaS providers, bend over backwards to provide them with non-standard services to build their customer foothold in the industry, it would be a big mistake to assume that the large platform CSPs will follow that pattern. Those CSPs have built their whole business model on the idea that they perform certain standard services for huge numbers of customers very efficiently and cost-effectively. They’re not going to jeopardize that model, just to have the privilege of bragging that they serve a critical – but actually quite small – industry.

It is important to keep in mind that, even though the power industry is already a big user of cloud services, those are mostly services for the IT side of the industry. Even though the CSPs would love to have more business from the OT side of the industry as well, they know quite well that breaking their low-cost business model is the wrong way to get that business. In contrast with SaaS providers, which have a different business model and are much more focused on serving particular industries like power, nobody should expect platform CSPs to go out of their way to help NERC entities comply with NERC CIP requirements that require a customized response for each entity.

This presents a problem: On one hand, there need to be cybersecurity requirements for CSPs (although they need to be enforced through their NERC entity customers, since neither NERC nor FERC has the authority to directly regulate suppliers to the power industry). Neither the public nor the power industry will consider it safe if NERC entities are allowed to deploy workloads in the cloud without any regulation whatsoever, when the same workloads would be subject to compliance with the CIP standards if deployed onsite. For example, there will need to be requirements for configuration management, patch management, ports and services management, etc.

On the other hand, I’ve just pointed out that CSPs aren’t going to agree to provide compliance evidence that needs to be tailored to individual NERC entities; yet the great majority of CIP requirements and requirement parts mandate exactly that.

However, the power industry isn’t the first industry to face this problem with respect to use of the cloud: this is why comprehensive certifications like ISO 27001 exist. The certification audit is undoubtedly arduous, but it only needs to be done once a year. The CSP’s customers have (usually implicitly) agreed to accept the audit findings and not to demand a separate assessment against the customer’s preferred set of requirements – which would obviously be much more expensive for the CSP.

Two sets of controls – first set

I suggest that the Standards Drafting Team identify two sets of controls they want a CSP to be audited on. One set is controls that are likely to be addressed in ISO 27001 (patch management, vulnerability management, configuration management, etc.); these are usually controls that apply to both on-premises and cloud environments. The second set is controls that only apply in cloud environments.

To identify the first set of controls, I suggest that the drafting team go through the certification and identify the controls it thinks are most important for NERC entities; these will become the “requirements” that the CSP must comply with. The evidence required for each control will be any findings identified in the audit report for any of the controls in the drafting team’s list. Presumably, the CSP has already addressed any findings from the audit; however, if they haven’t, a registered entity customer (or NERC itself) is justified in harassing the CSP about open audit findings.

Both NERC entities and NERC auditors need to keep in mind that the purpose of reviewing a CSP’s audit report isn’t to decide whether to use the CSP at all. Unless the CSP has some particularly serious finding in the certification audit report, they will almost certainly continue to be the NERC entity’s cloud provider. However, individual audit findings should always be taken as action items by the NERC entity. The entity (or perhaps NERC itself) needs to follow up with the CSP to find out when they will fix any findings; they also need to keep following up until they are fixed. If the entity doesn’t bother to follow up with the CSP about a finding, IMO they should be assessed an NPV (notice of potential violation) at their next CIP audit.

The second set of controls

The second set consists of controls that only apply in cloud environments, and thus aren’t likely to be addressed in certifications like ISO 27001. While there are now several certifications available for cloud environments[i], I recommend that the drafting team take an eclectic approach and identify controls whose non-observance has led to serious breaches, controls taught in cloud security classes like SANS, etc. The point is to identify controls that might be especially important for NERC entities.

One such control is partial segregation of SaaS customer accounts to address the multi-tenancy problem; I discussed that problem in this 2024 post, but I plan to discuss it again soon. Other controls address risks like inadequate cloud customer training on required security procedures (which probably led to the Capital One breach in 2019) and a CSP’s improper vetting of third-party cloud access providers (which may have led to the SolarWinds breach).

I suggest that the SDT compile a list of controls they think should be required of CSPs. In fact, it would probably be a good idea to hold a one-day video conference on cloud risks that could be addressed in the Cloud CIP standards. It would be open to everyone, but aimed especially at NERC entities, vendors and ERO staff members.

How will this second set of controls be audited? I think NERC (or some third party they designate) will need to audit them. My reasons for saying that are in this post.

My blog is more popular than ever, but I need more than popularity to keep it going. I’ve often been told that I should either accept advertising or put up a paywall and charge a subscription fee, or both. However, I really don’t want to do either of these things. It would be great if everyone who appreciates my posts could donate a $20-$25 (or more) “subscription fee” once a year. Will you do that today?

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] FedRAMP is often cited as a certification, but it is in fact an authorization for federal agencies to use a particular cloud service or cloud provider.

Monday, July 14, 2025

Will version ranges ever work?

 

I’ve come to realize that version ranges are perhaps the biggest problem in vulnerability management. Here’s why I say that:

1.      Vulnerabilities are usually found in a range of versions of a software product, not just in one version or a small number of separate versions. This is because a coding error that leads to a vulnerability often won’t be found until many versions later. In other words, if the vulnerability was introduced in version 1.2 and was discovered and fixed in version 1.8, all versions (including patches) from 1.2 up to, but not including, 1.8 are likely to be vulnerable.

2.      When a software developer (either a commercial developer or an open source project) notifies their users of a new vulnerability that applies to a range of versions, they will normally describe the range in regular English (or whatever the vernacular language is), e.g. “Versions 1.2 through 1.7 are vulnerable.”

3.      However, when the developer creates a CVE record to report the vulnerability (if they are a CVE Numbering Authority or CNA) or works with a separate CNA to report it, they seldom include a version range in the CPE name, even though they may have described the range in the text of the CVE record. And if a NIST employee or contractor adds a CPE name to the record, they will seldom include a version range in the CPE, even though the CNA described the range in the text of the CVE record.

In other words, while many CVE records include a textual description of a range of affected versions for one or more products, the range is seldom reflected in the machine-readable CPE data included in the record (if a CPE is included at all; today, fewer than half of new CVE records have one). If an end user organization’s vulnerability management tool reads such a record, it won’t normally flag the whole range as vulnerable, unless the record contains a separate CPE name for each affected version.
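As an aside, when the NVD does record a range in machine-readable form, my understanding of the NVD API 2.0 format is that the range lives not in the CPE name itself but in the match criteria that accompany the name in the record. Here is a simplified sketch, with a made-up vendor and product, expressed as a Python dictionary:

```python
# Simplified sketch of how an NVD CVE record can express an affected version
# range in machine-readable form (based on my reading of the NVD API 2.0 JSON
# schema; the vendor, product and range are made up). Note that the range is
# carried in the match criteria, not in the CPE name itself.
cpe_match = {
    "vulnerable": True,
    "criteria": "cpe:2.3:a:examplevendor:exampleproduct:*:*:*:*:*:*:*:*",
    "versionStartIncluding": "1.2",
    "versionEndExcluding": "1.8",
}
```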

Why aren’t CPE version ranges being included in new CVE records? Even though the CPE spec is complex, I doubt that is the main reason. My guess is the reason has much more to do with the fact that few end users have tools that can make intelligent use of a machine-readable version range, so the users aren’t asking for them.

The question now becomes, “Given that probably most software vulnerabilities occur in a range of versions, why aren’t there many end user tools that can make intelligent use of machine-readable version ranges?” To answer that question, we need to look at end user use cases. There are two main types of these.

The first use case

The first use case answers the question, “Given a version range, does version X fall inside or outside that range?” Depending on the versioning scheme used by the supplier of the software (i.e., the rules followed to create a label for a version, such as 2.2.3), this will be an easy or difficult question to answer. There are two main types of versioning schemes, one mostly followed by open source software communities and the other mostly followed by commercial developers.

The first type of versioning scheme is one that is completely numerical and follows simple arithmetical rules. For example, there is a versioning scheme in which the version is represented as X.Y, where X refers to the major version and Y refers to the minor version. Thus, version 2.3 means major version 2 and minor version 3. If a minor version change were introduced next, the version number would be 2.4. If a major version were introduced next, the new number would be 3.0, since the minor version number reverts to zero with a new major version.

One popular all-numeric scheme is semantic versioning. This follows the model X.Y.Z, where X is the major version, Y the minor version, and Z the patch version. The rules for semantic versioning are not much more complex than those for the major/minor scheme just described.
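To make those “simple arithmetical rules” concrete, here is a minimal Python sketch (my own illustration, not any standard library) that answers the first use case’s question for a purely numeric scheme like X.Y or X.Y.Z:

```python
# Minimal sketch for the first use case with an all-numeric scheme (X.Y or
# X.Y.Z): split the version into integer fields and compare them as tuples.
# It assumes plain numeric labels only (no pre-release tags or letters).
def numeric_key(version: str) -> tuple[int, ...]:
    return tuple(int(field) for field in version.split("."))

def in_range(version: str, first: str, last: str) -> bool:
    """True if version falls within the inclusive range [first, last]."""
    return numeric_key(first) <= numeric_key(version) <= numeric_key(last)

print(in_range("1.5", "1.2", "1.7"))   # True
print(in_range("1.8", "1.2", "1.7"))   # False
```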

The purl community has developed a “mostly universal” means of specifying a version range, called VERS (or “vers”), which works with almost any all-numerical versioning scheme. Since an identifier for the versioning scheme is part of the range specification, a user tool that supports VERS will be able to ingest a version string from one of the supported versioning schemes and respond whether that version is within a range that was specified using VERS. Thus, it is accurate to state that the first use case’s question can be answered for almost any all-numerical version range.
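And here is an equally minimal sketch of what checking a version against a VERS-style range string might look like. The real grammar is defined in the purl community’s vers specification; this sketch assumes a simplified form with only >=, <=, >, < and exact-match constraints, and only plain numeric versions:

```python
# Simplified sketch of checking a version against a vers-style range string.
# The real grammar is defined by the purl/vers specification; this handles
# only a reduced form like "vers:generic/>=1.2|<1.8" with numeric versions.
import operator

OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt, "<": operator.lt}

def numeric_key(version):
    return tuple(int(f) for f in version.split("."))

def satisfies(version: str, vers_range: str) -> bool:
    constraints = vers_range.split("/", 1)[1].split("|")   # drop "vers:<scheme>"
    for c in constraints:
        for symbol, op in OPS.items():
            if c.startswith(symbol):
                if not op(numeric_key(version), numeric_key(c[len(symbol):])):
                    return False
                break
        else:
            # A bare version with no operator is treated as an exact match.
            if numeric_key(version) != numeric_key(c):
                return False
    return True

print(satisfies("1.5", "vers:generic/>=1.2|<1.8"))   # True
```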

However, commercial developers often utilize complex versioning schemes that are not all-numerical and/or do not follow the simple rules that the all-numerical schemes follow. Two ways in which complex schemes differ from all-numerical schemes are:

a.      They might include letters as fields, not just numbers; and

b.      The order in which fields are incremented isn’t self-evident. If, for example, the fields are always incremented moving from left to right, the incremented version will be very different than if the fields are incremented from right to left, or even according to some other plan.

For example, the latest version of Cisco IOS is “15.9(3)M11”. Suppose someone tells you that IOS versions 15.5(1)M8 through 15.9(3)M11 are affected by a serious new vulnerability; you want to know whether the version you’re using, 15.5(5)M9, falls within that range. You won’t be able to answer the question until you have been given three pieces of information:

1.      Is ‘M’ a field or an unchanging part of the specification?

2.      If M is a field, is it related in any way to one of the numerical fields? For example, is “M11” a unit that could be replaced with another letter/number combination like “N14”? Or does the number vary but not the letter?

3.      In what order are the fields incremented? If the fields are incremented moving from left to right (so that 15.9 is incremented first), the incremented version string will be very different than it would be if the fields were incremented moving in the opposite direction (so that 11 was incremented first). It’s also possible that the integer within the parentheses is incremented first, which of course will yield a very different incremented version string.

In this post in April, I introduced a term that a large software developer had introduced to me: “ordering rule”. This is a rule (rather, a set of rules) that describes how the versions in a complex versioning scheme like the one behind IOS are ordered, beyond the simple rule that an integer n is followed by n+1 (which is the basic rule behind the all-numeric schemes). Since commercial software suppliers, especially large ones, often don’t follow simple ordering rules, this means that a tool vendor – on either the developer or consumer side – will need to have an ordering rule for each commercial supplier whose products they support.

Not only that, but there will need to be a standard notation for documenting an ordering rule, plus a “rule interpreter” that can be incorporated into a tool and will interpret each ordering rule for the tool. Thus, a tool for vulnerability or asset management (and other tools as well) would be able to ingest and utilize version ranges based on versioning schemes for many commercial suppliers, if it had previously ingested an ordering rule for each of those suppliers.
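To illustrate what I mean by an ordering rule, here is a sketch that encodes one possible rule for IOS-style version strings as a sort key. To be clear, the parsing assumptions (that the string has the form major.minor(release)M<number> and that fields are compared left to right) are mine, purely for illustration; a real ordering rule for IOS would have to come from Cisco.

```python
# Illustrative sketch of an "ordering rule" for a complex versioning scheme.
# It assumes (purely for illustration) that strings like "15.5(1)M8" have the
# form <major>.<minor>(<release>)M<rebuild>, with fields compared left to
# right. A real ordering rule for Cisco IOS would have to come from Cisco.
import re

PATTERN = re.compile(r"^(\d+)\.(\d+)\((\d+)\)M(\d+)$")

def ios_like_key(version: str) -> tuple[int, ...]:
    m = PATTERN.match(version)
    if not m:
        raise ValueError(f"version {version!r} does not match the assumed scheme")
    return tuple(int(g) for g in m.groups())

def in_range(version: str, first: str, last: str) -> bool:
    return ios_like_key(first) <= ios_like_key(version) <= ios_like_key(last)

# Is 15.5(5)M9 within the hypothetical affected range 15.5(1)M8 - 15.9(3)M11?
print(in_range("15.5(5)M9", "15.5(1)M8", "15.9(3)M11"))   # True under these assumptions
```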

Of course, the standard notation for documenting ordering rules, and the code for the rule interpreter, will need to be developed by some organization, preferably a software security nonprofit like OWASP or OASIS Open. An individual software developer might also take this on if they agree to make all the code developed available to the general public.

I don’t think anything like ordering rules or rule interpreters exists today, but if you know differently, please email me.

The second use case

The second use case answers the question, “Given the versioning scheme used by the supplier of product XYZ, as well as the supplier’s ordering rule, what are all the versions that fall within a given range?” One example of why this question might need to be answered is the case in which a serious new vulnerability has been discovered that applies to a range of versions of a commercial product; the supplier needs to know every version that falls within the range so they can patch each one. This includes major versions, minor versions, patched versions, new builds, etc. My guess is that a lot of commercial suppliers would find it very difficult to answer this question for at least some of their products.

If I were on a product security team and I had to answer this question, my best hope would be to find that somebody has been keeping meticulous records of new versions all along. That is, they have maintained a single list that includes every version within the range, in an order that strictly follows the ordering rule. However, in a lot of organizations this is too much to hope for. Not only will there not be a comprehensive list of versions available, but it is quite possible that nobody will even be able to state how many versions fall within the range.

However, that’s not the biggest problem. The biggest problem is that I not only have to identify the versions that fall within the range, but I must also be able to demonstrate that my list is complete – i.e., that there are no other versions that fall within the range.

The best way I can think of to create a provably complete list of versions within a range is to take the following steps. I’m breaking this problem into two sub-cases: 2a) The product follows the Semver versioning scheme, for which the rules are well defined in the Semver specification; and 2b) The product doesn’t follow an all-numeric versioning scheme, but the product’s supplier has documented an ordering rule for the product that makes it clear, for every version string, what the next version string will be[i].

In both sub-cases, the goal is to start with the first version in the range and then apply the ordering rule to predict what the next version could be in each possible scenario. Here is an example using sub-case 2a with the simple “X.Y” versioning scheme described earlier (the semver case works the same way, with one more field):

1.      Start with the first version in the range. If that is v2.3, the next version will be either 2.4 (if it is a minor version) or 3.0 (if it is a major version). In some versioning schemes like semver, there will be three or more possible next versions.

2.      Determine whether v2.4 or v3.0 was released (if they were both released, somebody made a mistake, unless the product was deliberately “forked”. And if neither was released but subsequent versions were released, somebody made a different mistake).

3.      Start over at the first step, this time using whichever subsequent version(s) was released. Continue doing this, while maintaining a list of each released version that has been “discovered”.

4.      When you reach the last version in the range, stop.

Of course, sub-case 2b will be more complex than the above, since the ordering rule will be complex. However, the process will not differ in principle from the one just described. In both sub-cases, following the process will generate a list of versions that are vulnerable because they fall within the vulnerable version range.
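Here is a minimal sketch of that walk for the simple “X.Y” scheme. The set of released versions stands in for whatever release records the supplier actually keeps, and the candidate-generation step (next minor version or next major version) is the ordering rule for this scheme:

```python
# Sketch of the enumeration process for the simple X.Y scheme: start at the
# first version in the range, generate the possible "next" versions (next
# minor or next major), follow whichever successors were actually released,
# and stop at the last version in the range. 'released' stands in for the
# supplier's actual release records.
def candidates(version):
    major, minor = version
    return [(major, minor + 1), (major + 1, 0)]   # next minor, next major

def versions_in_range(first, last, released):
    found, frontier = set(), [first]
    while frontier:
        current = frontier.pop()
        if current in found:
            continue
        found.add(current)
        if current == last:
            continue                          # stop walking at the end of the range
        for nxt in candidates(current):
            if nxt in released and nxt <= last:   # was this successor released, and in range?
                frontier.append(nxt)
    return sorted(found)

released = {(2, 3), (2, 4), (2, 5), (3, 0), (3, 1)}
print(versions_in_range((2, 3), (3, 1), released))
# [(2, 3), (2, 4), (2, 5), (3, 0), (3, 1)]
```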

Lessons learned

There are probably other end use cases for version ranges besides the two I’ve described. It would be nice if there were a single algorithm that could address all possible use cases. However, the algorithms for the first use case and for the two sub-cases of the second are so different that there is probably no single algorithm.

What this means is that developers of end user tools for vulnerability management, asset management, software testing, etc. will need to take most of the responsibility for developing the algorithms to make use of the version ranges provided by software developers, vulnerability researchers, etc. – since these algorithms will need to be closely tailored to whatever their tool does.

However, the software developers aren’t off the hook. They are responsible for:

i.       Describing the range of affected versions in the text of the CVE record;

ii.      If possible, including the range in the machine-readable CPE data for the affected product in the CVE record;

iii.     If one or more of their products doesn’t follow an all-numeric versioning scheme like semantic versioning, preparing an ordering rule for the product; and

iv.      When purl becomes an alternative identifier for affected open source products (which I believe will happen later this year), including the VERS specification of the range in the purl for the affected product.

As you can see, a lot of tools and standards development will be needed before automated use of version ranges in vulnerability management becomes a real possibility. However, there is no really good alternative to taking these steps (will suppliers suddenly start including tens or even hundreds of CPE names in CVE records to identify every version in a range? They’ve always known they could do that, but they understandably don’t think it’s a good use of their time), so I think it’s only a matter of time before this happens.

My blog is more popular than ever, but I need more than popularity to keep it going. I’ve often been told that I should either accept advertising or put up a paywall and charge a subscription fee, or both. However, I really don’t want to do either of these things. It would be great if everyone who appreciates my posts could donate a $20-$25 (or more) “subscription fee” once a year. Will you do that today?

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] It is also possible that the product doesn’t follow an all-numeric versioning scheme, but the supplier has not provided an ordering rule. In this case, there is no algorithmic method available to answer the question for the second use case.