Tuesday, July 30, 2024

Just some of the many problems with CPE names


Two years ago, the OWASP SBOM Forum (then just called the SBOM Forum) published a paper that described some of the many problems with naming found in the National Vulnerability Database (NVD), and suggested solutions to them. The problems were all centered around “CPE names” (CPE stands for Common Platform Enumeration). These are machine-readable identifiers for software. The NVD has always relied on CPE; other vulnerability databases that are based on data from the NVD all utilize CPE (note: Almost all vulnerability databases that aren’t based on the NVD utilize the purl identifier instead).
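For readers who haven't encountered one, a CPE (version 2.3) name is a fixed-format, colon-delimited string. An illustrative CPE for an OpenSSL version might look like cpe:2.3:a:openssl:openssl:1.1.1g:*:*:*:*:*:*:*, where "a" indicates an application, followed by the vendor, product and version fields and seven more attribute fields that are usually left as "*" wildcards. The example is illustrative only; the "correct" CPE for any product is whatever string was actually entered in the NVD, which is the source of several of the problems described below.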

We included in our paper a two-page description of six of the most important problems with CPE names (but unfortunately, they’re not the only problems!). Since that description remains quite valid and I find myself referring to it constantly in my posts, I am reproducing it below. I recommend you read this as good background for discussions of the NVD’s problems. Unfortunately, those discussions are unlikely to end anytime soon.

  1. Vulnerabilities are identified in the NVD with a CVE number, e.g. “CVE-2022-12345”. A CPE is typically not created for a software product until a CVE is determined to be applicable to the product. However, many software suppliers have never identified a CVE that applies to their products, so they have never created a CPE for them. This is almost certainly not because the products have never had vulnerabilities, but because the suppliers, for whatever reason, have not submitted any vulnerability reports for those products for inclusion in the National Vulnerability Database.

 

The worst part of this problem is that the result of an NVD search will be the same in both cases - the case where a vulnerability has never been identified in a product and the case where the supplier has never felt inclined to report a vulnerability, even if they may have identified some. The search will yield “There are 0 matching records” in both cases. Someone conducting a search won’t know which case applies and may believe the product has no vulnerabilities, when in fact the supplier has simply never reported one.

 

  2. There is no error checking when a new CPE name is entered in the NVD. Therefore, if the NIST/NVD staff member who created the original CPE name for a software product did not completely follow the specification, a user who later searches for that product and enters a properly specified CPE will receive an error message. Unfortunately, it is the same error message they would receive if the original CPE name were properly specified but no CVEs had been reported against it: “There are 0 matching records”.

In other words, when a user receives this message, they might interpret this to mean there is a valid CPE for the product they’re seeking, but a vulnerability (CVE) has never been identified for that product - i.e. it has a clean bill of health. However, in reality it would mean the CPE name was created improperly. In fact, there might be a large number of CVEs attached to the off-spec CPE but without knowing that name, the user will not be able to learn about those CVEs.


Another explanation for the “There are 0 matching records” error message is that the user had misspelled the CPE name in the search bar. Again, the user would have no way of knowing whether this was the reason for the message, or whether the message means the product has no reported vulnerabilities.

 

It is to avoid problems like this that many organizations that heavily use the NVD today employ advanced search techniques based on AI or fuzzy logic[i]. While that can greatly reduce the number of unsuccessful searches, having to resort to this makes it impossible to conduct truly automated searches. Considering that an average-sized software developer might easily need to conduct tens of thousands of NVD searches per day, and a service provider doing this on behalf of hundreds of customers would need to conduct some large multiple of that number of searches, the magnitude of this problem should be apparent.

 

  3. When a product or supplier name has changed since a proprietary product was originally developed (usually because of a merger or acquisition), the CPE name for the product may change as well, to reflect the identity of the new supplier. Thus, a user of the original product may not be able to learn about new vulnerabilities identified in it, unless they know the name of the current supplier as well as the current name for the product. Instead, this user will also receive the “There are 0 matching records” message.

 

  4. The same holds true for supplier or product names that can be written in different ways, such as “Microsoft(™)”, “Microsoft(™) Inc.”, “Microsoft(™) Inc”, “Microsoft Corporation”, etc. There is simply no fool-proof way to distinguish the correct supplier or product name among a large number of query responses.

 

  5. Sometimes, a single product will have many CPE names in the NVD because they have been entered by different people, each making a different mistake. For this reason, it will be hard to decide which name is correct. Even worse, there may be no “correct” name, since each of the names may have CVEs entered for it. This is the case with OpenSSL (e.g. “OpenSSL” vs “OpenSSL_Framework”) in the NVD now. Because there is no CPE name that contains all the OpenSSL vulnerabilities, the user needs to find vulnerabilities associated with each variation of the product's name. But how could they ever be sure they had identified all the CPEs that have ever been entered for OpenSSL?

 

  6. Often, a vulnerability will appear in only one module of a library. However, because CPE names are not assigned to individual modules, the user may not know which module is vulnerable unless they read the full CVE report. Thus, if the vulnerable module is not installed in a software product they use, but other modules of the library are installed (meaning the library itself is listed as a component in an SBOM), the user may apply a patch or perform other mitigations unnecessarily.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. 

I lead the OWASP SBOM Forum and its Vulnerability Database Working Group. These two groups work to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit, which are passed through to the SBOM Forum. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] Or, in the case of at least one third-party service provider, a “small army” of CPE-resolvers.

An important part of NERC CIP in the cloud: delegation agreements

Late last year, as the January 1, 2024 enforcement date approached for the two revised standards that “enable” BCSI in the cloud (for those not keeping score at home, BCSI means BES Cyber System Information; the two revised standards are CIP-004-7 and CIP-011-3), a panic set in among some groups within the NERC CIP compliance community.

Without going into a lot of detail, the panic came about because certain NERC Regional Auditors and others became concerned that use of SaaS applications would be verboten for NERC entities with high and/or medium impact BES environments when the two revised standards took effect on January 1. This was ironic, since CIP-004-7 and CIP-011-3 were intended to remove a barrier to use of SaaS that was caused by two words in two Requirement Parts in previous versions of CIP-004: “storage locations”. Everyone was hoping that removal of those words would finally lead to widespread usage of SaaS.

The cause of the new panic was the words “provisioned access” in the new CIP-004-7 Requirement R6 Part 6.1, which I wrote about in this post. The way those two words were used in R6.1 seemed to imply that a SaaS provider would need to seek the NERC entity’s permission whenever the provider wanted to authorize a new employee to load the entity’s BCSI into the application. In fact, they would need to do this for every NERC entity that utilizes the SaaS application – i.e., each NERC entity would have to specifically authorize each new (or transferred) employee by name. This caused widespread dismay, since it seemed unlikely that a SaaS provider would ever agree to do this.

Fortunately, a deus ex machina appeared from backstage to make the problem go away. This came in the form of NERC’s approval, in late December, of a paper called “Usage of Cloud Solutions for BES Cyber System Information” as official Implementation Guidance for the two revised CIP standards. On page 13 of that document, in a discussion of compliance evidence for authorization of provisioned access to BCSI by employees of a SaaS provider or CSP, these words appear: “…Documented process for how CSP personnel provisioned access is authorized based on need, whether authorized directly by the Responsible Entity or indirectly by a contractual agreement with the CSP…” (my emphasis).

The contractual agreement referred to here is usually called a delegation agreement by those like me who pretend they know something about legal matters. The fact that this was “allowed” by the new Implementation Guidance removed the dark cloud (no pun intended. Honestly) that hung over use of SaaS by NERC entities – even though, as I noted in this post, it doesn’t seem like most NERC entities with medium or high impact BES environments have gotten over their previous reluctance to use SaaS.

In any case, it seems clear now that a NERC entity should be able to delegate (with a written agreement, of course) to the SaaS provider the authority to authorize provisioned access to their BCSI, if the SaaS provider follows the entity’s policies for authorizing such access. The delegation agreement will need to spell out specifically what those policies are. The provider will have to follow the policies of each customer that is a NERC entity, although there’s nothing to prevent the entities from collaborating to make sure they require a common set of policies, rather than each one requiring the provider to do something different.

What isn’t clear is whether the delegation agreement for CIP-004-7 Requirement R6 Part 6.1 should also apply to Parts 6.2.1, 6.2.2, and 6.3. That seems to make sense, but the Implementation Guidance only mentions a delegation agreement for Part 6.1. You need to discuss with your Region whether they interpret the statement regarding Part 6.1 to apply to Parts 6.2 and 6.3 as well. It’s also possible that auditors will decide a delegation agreement isn’t needed for compliance with Parts 6.2 and 6.3, as it is for Part 6.1.

This means that a NERC entity with a high and/or medium impact BES environment that wishes to use a SaaS application should make sure they have a delegation agreement in place with the SaaS provider for CIP-004-7 R6.1 compliance. It is also possible that a delegation agreement will be needed for compliance with CIP-011-3 R1.2; the need for this will be determined by the contents of the entity’s Information Protection Plan.

Since delegation agreements have not normally been needed for NERC CIP compliance, why are they suddenly needed now that CIP-004-7 and CIP-011-3 came into effect last January?[i] Until then, every CIP standard assumed the NERC entity would always exercise complete control over the systems subject to CIP compliance, so the need for a delegation agreement never even came up.

What does this mean for BES Cyber Systems (BCS), Electronic Access Control or Monitoring Systems (EACMS), or Physical Access Control Systems (PACS) in the cloud? Of course, since the new Standards Drafting Team that will develop new or revised CIP standards is still being constituted, all we can do is speculate now. But it seems to me that it’s likely that delegation agreements will play an important role in whatever new “Cloud CIP” standards emerge in a few years.  You’d better get used to them!

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] If you have needed to have a delegation agreement previously for compliance with a CIP requirement, please email me about that.

Sunday, July 28, 2024

Are vulnerability scanners useless now?

 

In our meeting of the OWASP SBOM Forum last Friday, we were – of course – discussing the fact that the National Vulnerability Database (NVD) now has a backlog of over 17,000 “unenriched” CVE reports. Moreover, even though they are enriching about 75 newly received CVE reports every day, they are still adding over 100 unenriched reports to the backlog every day, because of the volume of new reports being generated.

An unenriched CVE report is one that describes a vulnerability and includes a textual description of the affected product, but does not include a machine-readable CPE identifier for the product. The problem with this is that the only way the report can be read by an automated process (e.g., by a software composition analysis tool or a vulnerability scanner) is if it includes a CPE; otherwise, the automated process will not normally be able to learn what product(s) the report applies to. It isn’t useful for a process to be able to learn about a newly discovered CVE if at the same time it can’t learn about the affected product.

The NVD essentially stopped enriching CVE reports on February 12. In May, it started enriching about 75 a day; I’ve heard that number has slightly increased. But the fact that there was almost no enrichment for more than three months, and that the backlog is still increasing by about 100 per day, means that an NVD search for vulnerabilities applicable to a particular product is unlikely to discover any of the more than 17,000 vulnerabilities identified since early February.

Frankly, if you are trying to learn about all the vulnerabilities that may be found in a software product you utilize, not knowing about the great majority of vulnerabilities identified since early February makes it almost a complete waste of time even to conduct an NVD search. In fact, if you interpret the lack of results to be an indication that your product has no vulnerabilities – when in fact it may be loaded with them – then even doing the search probably causes more harm than good.

In Friday’s meeting, Yotam Perkal of Rezilion asked a question about what vendors of security scanners are doing about this problem, since a lot of them use the NVD to find vulnerabilities in software being scanned – or at least they used to. He wondered if they’re still using the NVD, and if so, why they are doing that. Are they getting something useful out of it, or are they simply going through the motions, so that their customers continue to believe they’re gathering useful information in their scans – when in fact they’re probably living in a fool’s paradise?[i]

Nobody in the meeting had the answer to this question, including me.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum and its Vulnerability Database Working Group. These two groups work to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit, which are passed through to the SBOM Forum. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] I honestly don’t remember how Yotam phrased his question, but this was the gist of it.

Tuesday, July 23, 2024

Reminder: First NERC CTAG CIP / Cloud webinar tomorrow

I want to remind you that the first of six webinars sponsored by SANS and the informal NERC Cloud Technical Advisory Group (CTAG), which I described in this post, will take place tomorrow (July 24) at 1PM EDT. The registration link is here. You need to have or create a SANS account to register. 

You should register even if you can't attend tomorrow, so you get notifications about the future webinars (although my post listed the dates - they're all at the same time). All of the webinars will be recorded and posted on the SANS website.

I hope you can attend!

The vulnerability database emergency


Last Monday, I published this post titled “Currently, automated vulnerability management is impossible. How can we fix that?” Two days later, I put up a follow-up post titled “The CVE/CPE backlog is currently 17,000. It’s growing by >100 each day.” The point of both posts is that the National Vulnerability Database (NVD) seems to have given up on its most important function, which is to inform software users (and software developers) of all reported vulnerabilities (CVEs in particular) found in their products, using a machine-readable software identifier. Today, Cybellum released a podcast I recorded with them last week, discussing this problem.[i]

Here is a summary of the two posts:

1.      CVE reports are prepared by CVE Numbering Authorities (CNAs) and submitted to the CVE.org database. From there, they are sent to the NVD. From the NVD’s inception about two decades ago until February 12 of this year, the NVD (which is staffed by NIST employees and contractors) took responsibility for “enriching” those reports by adding information the CNAs weren’t supposed to enter on their own.

2.      The most important piece of information that the NVD added to the CVE report was a CPE name (a machine-readable software identifier), which identifies the product/version that is affected by the CVE (i.e., the vulnerability that is the subject of the report). All CVE reports contain a textual description of the product(s) affected by the CVE. However, in order to be useful to an automated vulnerability management process, the report needs to contain a machine-readable product identifier. Until February 12, the NVD had almost always added CPE names to CVE reports, making those reports searchable by product or vendor name.

3.      Starting on February 12, the NVD drastically reduced its enrichment of CVE reports; in fact, from February 12 through mid-May, they added CPE names to virtually none of them. Starting in mid-May, they resumed enriching them regularly again, but only at a pace of about 75 per day. This is less than half of the average number of new CVE reports received by the NVD each day, about 175.

4.      While it’s certainly better than no enrichment at all, there are two problems with that pace. First, it obviously won’t eliminate the “enrichment gap”, since they’re accumulating a backlog of 100 “unenriched” CVE reports a day. The NVD has said they intend to close this gap by sometime in 2025. This means they intend to pick up their pace of enrichment (mainly through adding contractors), so that it closely matches or exceeds the rate at which new CVEs are received.

5.      However, the second problem is much more serious: Between February 12 and mid-May, 175 new CVE reports were passed to the NVD by CVE.org on the average business day; only a small fraction of these reports were enriched at all (in fact, at least a couple weeks went by with literally zero enrichment). This means there’s now a huge backlog of CVE reports without CPE names, which built up during that three-month period. The NVD has said nothing about how or even whether they will reduce that backlog.

6.      When I wrote the first post last Monday, I was thinking the backlog for the three-month period must be around 2-3,000 CVE reports. However, a conversation with Andrey Lukashenkov of Vulners showed me that the backlog is about 17,000 now; as I just pointed out, this backlog is still growing by about 100 a day. Even if the NVD tripled their current rate of enrichment to 225 per day, it would take about a year and a half to eliminate the total backlog. Of course, this also assumes the NVD will try to eliminate the backlog. Since they have yet to even acknowledge this backlog in their announcements, that assumption is questionable.
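For the record, here is the rough arithmetic behind that estimate, using the approximate figures above: at about 175 new CVE reports per business day, enriching 225 per day would shrink the backlog by only about 50 reports per day, and 17,000 ÷ 50 is roughly 340 business days – about a year and a half.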

What’s the problem with having this backlog? Think of what will happen if you go to the NVD to search for vulnerabilities currently found in a software product you use (where “you” includes both end-user organizations and software developers. The latter are without much doubt the biggest users of vulnerability database services).

Today, no matter what product you search for in the NVD, the search will almost always appear to go swimmingly: no vulnerabilities at all will be found! But does that mean the product is vulnerability-free? Only if you just care about vulnerabilities that were identified before February 12. If you care about any that were identified later, you may be living in a fool’s paradise (of course, my guess is that 99% of searches in the NVD are for current vulnerabilities. Hopefully, any serious vulnerability that was identified in a product before February 12 has been mitigated in one way or the other).

I wasn’t joking when I titled the first post, “…automated vulnerability management is impossible.” How can you manage vulnerabilities in the software you use, if you can’t learn about them in the first place?

Although they had invited me to do their podcast before I wrote those two posts, Cybellum was so alarmed by what I said in the posts that they decided to accelerate not just the recording, but also the publication of the podcast. So, instead of the recording and post-production stretching over more than a month, they released the podcast today. I must admit that my record with podcasts is spotty, since I often tend to insert a lot of um’s and ah’s. But this one came out very well. I’d like to hear what you think of it.

Of course, Cybellum has every reason to be worried about this issue: Almost every organization of any type in the world uses software, and many of those organizations develop software as well. If there’s no easy way for them to learn about vulnerabilities in software, their entire business model needs to be re-thought. Yet, that seems to be exactly where we’re left now.

Of course, the problem isn’t really that the NVD has now proven to be unreliable; it’s that so many organizations put their complete trust in the NVD in the first place. I was asked in the podcast if there is an easy-to-implement alternative to the NVD. The answer is no. Whatever solution your organization adopts for vulnerability management going forward, you can be sure that a) implementing it will probably not involve just using a single source of vulnerability information, and b) the solution will require more of your time, and perhaps money, than putting all your eggs in the NVD basket required[ii].

Without much doubt, this problem needs to be remediated. As I discussed in last Monday’s post, the real solution is to move to using purl as the primary product identifier for CVEs. However, I’ll be honest: This will take at least 2-3 years to be fully implemented. What can we do in the meantime? Can the software world wait 2-3 years before fully automated vulnerability management is possible? Of course, the answer to that question is no.

The good news is that the NVD is just one of many available vulnerability databases. However, the not-so-good news is that there’s no single database that can provide the one-stop solution that many people believed (although not correctly) that the NVD provided. Each database has its strengths and weaknesses – including the things it does really well, the things it doesn’t do well and the things it doesn’t do at all. What’s required is an intelligent catalog that itemizes the strengths and weaknesses of each database and suggests how multiple database options can be combined to meet particular needs. I’ll have more to say about that soon.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum and the OWASP Vulnerability Database Working Group. Both groups endeavor to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] The LinkedIn post, which shows a well-chosen video excerpt from the podcast, is here.

[ii] If there’s a bright side to this debacle, it’s that the value of utilizing the NVD, as opposed to using other sources for some or all your vulnerability data needs, has been greatly overstated for some time. It is very possible that your new solution(s) will fit your needs much better than exclusive reliance on the NVD did.

Monday, July 22, 2024

Was CrowdStrike a cyberattack? Absolutely!


A number of people who were quoted regarding the CrowdStrike incident took pains to point out that it wasn’t caused by a “cyberattack” – by which they probably meant that it hadn’t been caused by the deliberate actions of an attacker. In other words, if a very damaging cyber incident was caused by the inadvertent actions of someone who had no malicious intent, that somehow makes it more tolerable than if the bad guys caused it intentionally.

However, such statements reflect a fundamental misunderstanding of cybersecurity threat sources and vulnerabilities. Any person or thing that can damage cyber systems is a cyber threat source; it doesn’t matter whether the damage is intentional or unintentional. And any situation that could enable the threat source to succeed in causing damage is a vulnerability.

It’s almost certainly true that whoever is responsible for the fact that Friday’s (or Thursday night’s) CrowdStrike update wasn’t adequately tested (I’m assuming that’s the root cause of the problem, even though there was clearly also some technical cause) didn’t intend to cause any damage. But the ultimate effect of this lack of testing could never be distinguished a priori from an attack launched by the most vicious North Korean threat group.

In this case, the threat source was perhaps rushed preparation by CrowdStrike staff members. The vulnerability was perhaps the fact that what might have been just an ordinary mistake in an update was greatly magnified by the fact that CrowdStrike runs at a high privilege level within Windows systems. Perhaps this privilege level needs to be looked at as the cybersecurity community searches for lessons to be learned.

But both this threat source and this vulnerability need to be investigated just as seriously as the threat source (Russia) and the vulnerability (lack of detective controls in the CI/CD pipeline) involved in the SolarWinds attack. In fact, by far the best examination of the SolarWinds attack was conducted by…CrowdStrike!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum, and its Vulnerability Database Working Group. These two groups work to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Wednesday, July 17, 2024

The CVE/CPE backlog is currently 17,000. It’s growing by >100 each day.


As my post on Monday lamented, we’re suffering from a severe shortage of CPE names. CPE stands for Common Platform Enumeration. Any lookup to the National Vulnerability Database (or to any other database based on the NVD) needs to include a CPE name.
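To make that concrete, here is a minimal sketch of the kind of automated lookup this post has in mind, using the NVD’s public CVE API. The CPE name and the response fields shown are illustrative; consult the NVD API documentation before relying on the exact parameters.

```python
# Minimal sketch: look up CVEs for one product version in the NVD via its
# public CVE API (v2.0), keyed by the product's CPE name. Illustrative only;
# check the NVD API documentation for exact parameters and response shape.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"
cpe_name = "cpe:2.3:a:openssl:openssl:1.1.1g:*:*:*:*:*:*:*"  # example CPE

resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
resp.raise_for_status()
data = resp.json()

print("Matching CVEs:", data.get("totalResults", 0))
for item in data.get("vulnerabilities", []):
    cve = item.get("cve", {})
    desc = next((d["value"] for d in cve.get("descriptions", [])
                 if d.get("lang") == "en"), "")
    print(cve.get("id"), "-", desc[:100])
```

If the CPE name you supply doesn’t exactly match the one that was entered in the NVD, a query like this simply returns zero results, which is the core problem described here.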

The reason we have a shortage – actually, an almost complete drought – is that, until February 12 of this year, the NVD staff itself (the NVD is part of NIST) was the only “authoritative” source of CPE names – and they ceased being that on that day.

As I also explained in the post, the source of the data in the NVD is CVE reports submitted to the CVE.org database (which used to be called just “MITRE”, since MITRE Corporation contractors originally developed and still operate the database) by CVE Numbering Authorities (CNAs). In most cases, the CNA is the developer of the software in which a new vulnerability was discovered.

In the CVE report, the CNA always describes the software product (or sometimes multiple products) in a text field (e.g. “Joe’s Word Processor version 2.3”), but they aren’t expected to (or even allowed to, in many or most cases) create the CPE name for the product/version; until February 12, an NVD/NIST staff member or contractor almost always did that.

After February 12, the NVD essentially stopped adding CPE names to CVE reports; in June, they resumed adding them, although they’re now adding a CPE to only about 25% of new reports. While this is some progress, it’s important to keep in mind that there’s now a backlog of over 16,000 CVE reports that don’t yet have CPE names. That backlog is growing by over 100 CVEs a day, even though the rate of growth has been reduced. (I want to thank Andrey Lukashenkov of Vulners for discussing these issues with me constantly. Today, he pointed me to this report on the NVD, which is updated every six hours. That’s a good one to follow.)

This means that, if a software user searches for a particular version of a software product in the NVD today, there is only a tiny chance (and it diminishes as the backlog keeps growing) that the user will learn of a CVE that has been reported for that product/version since early February. Of course, a user can avoid this problem if they read the text of every CVE report released since early February (around 200 CVE reports are added daily) and decide whether one of the many software product/versions they use is referenced in any of that text. However, that won’t leave much time for them to do their day job.

Since most users are concerned about recently reported vulnerabilities for the software they use, this is of course a huge problem. It means that true vulnerability management (which needs to be automated) is currently impossible, at least, if you continue to rely on just the NVD for your vulnerability identification needs. Even if the NVD were to double their current rate of creating CPEs tomorrow, it would still be more than a year before they dig themselves out of the hole they’re in.

Which brings me to purl, which I discussed in my last post and is discussed at length in this document. Purl is now supported by the CVE specification (v5.1), so the CNAs could in theory start adding them to their reports tomorrow. Would it be any more efficient if they were to do that, rather than have the NVD continue their Sisyphean effort to decrease their naming backlog by tripling or quadrupling their current rate of production?

In fact, it would be 100% more efficient. This is because a purl is a completely deterministic identifier. As long as you know a) the package manager from which you downloaded an open source project, b) the exact name of that project in the package manager, and c) the version string for the project in the package manager, you can create a purl that should always be an exact match for the purl that the supplier/CNA used when they created the CVE report[i]. Of course, this is very different from CPE, in which a product supplied by “Microsoft” has a different CPE name than one supplied by “Microsoft, Inc.”, or one supplied by “Microsoft Inc”, “Microsoft Corporation”, “Microsoft Europe”, etc.
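To illustrate what “completely deterministic” means in practice, here is a small sketch that builds purls from exactly those three pieces of information, using the packageurl-python library. The packages named are arbitrary examples, not ones discussed in this post.

```python
# Sketch: a purl is built directly from facts the user already knows -- the
# package type (ecosystem), the package name, and the version string. Uses
# the packageurl-python library; the packages below are arbitrary examples.
from packageurl import PackageURL

# An npm package: type + name + version fully determine the purl
print(PackageURL(type="npm", name="lodash", version="4.17.21").to_string())
# -> pkg:npm/lodash@4.17.21

# A Maven artifact: the namespace (groupId) is part of the identifier as well
print(PackageURL(type="maven", namespace="org.apache.commons",
                 name="commons-text", version="1.9").to_string())
# -> pkg:maven/org.apache.commons/commons-text@1.9
```

Anyone who builds the purl from the same package manager coordinates gets exactly the same string, which is why no central authority is needed to assign it.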

Of course, if you’re gifted with clairvoyance and you know exactly which vendor name the NIST/NVD staff member used when they created the CPE name you’re looking for, you’ll find it easily. But for the rest of us…not so much.

So which is better – hope and pray that the NVD will finally get their act together and catch up on their CPE backlog in a year or two, or start moving now toward making purl the primary identifier for the NVD and its clones, with full understanding that it will be years before we’ll be able to say goodbye to CPE forever? As I’ve said before, I don’t know of a single vulnerability database anywhere in the world that is not based on purl, other than the NVD and its near-clones. There must be a reason for that.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum, which works to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] Purl supports other fields, of course, but the three just listed are the only ones required.

Monday, July 15, 2024

Currently, automated vulnerability management is impossible. How can we fix that?


Currently, the OWASP SBOM Forum has two sets of meetings going on. On Fridays at 1PM ET, we have our weekly SBOM Forum full meetings. We intentionally don’t usually have a set topic for these meetings (and if we do, we don’t necessarily follow it). This is because I find it much more useful to let the topic – or often, the topics – emerge from the meeting.

Our other meetings are held every other Tuesday at 11AM ET. These do have a regular topic, which is vulnerability database problems (including, but not limited to the NVD); of course, that’s a very broad topic, but we at least keep within its bounds. Anybody is welcome to attend either or both meetings; email me if you would like me to send you the invitations.

Both meetings were held this past week, and both proved quite productive. To understand what I’m going to say below, here’s some background you need:

1.      The CVE Numbering Authorities (there are currently close to 400 of them. These are mostly software developers, but there are others like ENISA, GitHub and MITRE) are responsible for preparing CVE reports on new vulnerabilities they have identified; for each vulnerability (CVE) identified in the report, the CNA must include information on at least one affected product, including at least the product’s name, version string, and vendor name (the CNA is often the vendor of the product named in the CVE report); those three data points are entered as text fields. All CVE reports are incorporated into the CVE.org database (I believe there are around 250,000 CVE reports in the database now).

2.      From CVE.org, the new CVE reports “flow down” to the NVD. Until February 12 of this year, NVD staff members (who are all NIST employees or contractors) quickly “enriched” the reports by adding important information, including the CVSS score, CWEs, and a machine-readable CPE name for each product/version listed in the report; all that information became available in the NVD. Note that the CPE name is essential if the information in a CVE report is to be discoverable by automated methods. If it’s not so discoverable, I regard the CVE report as useless for vulnerability management purposes, since vulnerability management needs to be automated (even though there will always be times when “manual” processes will be required. But they should always be the exception, not the rule).

3.      On February 12 something happened, which has yet to be adequately explained by the NVD. On that date, the volume of CVE reports that were “enriched” by the NVD staff fell from hundreds a day to literally zero. It remained at that level until May, when it kicked back up to a level of about 25% of the new CVE reports. In other words, after three months of producing no useful CVE reports at all and building up a big backlog of “non-enriched” reports, the NVD has now reduced the rate at which they’re building up that backlog by 25%. However, note they have made no progress at all in reducing the backlog itself – just growing it at a slightly slower rate; of course, they’re also not even giving a date by which the backlog will be eliminated. Excuse me for not being overwhelmed with joy at this most recent news.

4.      For this reason, I and the OWASP SBOM Forum have given up on the idea that the NVD will ever dig itself out of its CPE hole, although it will certainly help if they can. This is why last Friday’s meeting of the Forum was focused entirely on the question of what (or who) could replace the NVD as a source of CPE data for CVE records that don’t have CPEs now.

5.      We had a good discussion, which you can read in the meeting notes (BTW, anyone who was at the meeting should feel free to add your comments to what I’ve written for the notes. Since I’m always trying to follow the discussion closely, I can’t write down a lot of what’s said). However, when we came to the question of who was going to fill in the gap created by the fact that the NVD has largely given up on their commitment to provide CPE names for CVE reports, I was surprised by what I heard.

6.      In Tuesday’s meeting (and in earlier meetings), it had been suggested (and I’m not sure by whom) that the CNAs should create the CPE names. After all, a large percentage (probably the majority) of CVE reports are created by a CNA that is also the developer of the product that has the new vulnerability. It seems almost axiomatic that the CNA organization should create the CPE name as well.

7.      However, Bruce Lowenthal, who leads the PSIRT at Oracle, pushed back on the idea. Bruce doesn't want two different fields for the same data in the JSON 5.x format, because he feels that will lead to inconsistencies. He notes that many feel CPEs - at least, the CPEs created before February 12 of this year, when NIST/NVD staff members were creating the great majority of them - are deficient. He suggests that CNAs should fill in all the fields in the CVE JSON 5.x record that are needed to create a CPE. Then, CVE.org should provide automation that creates the CPE name based on these fields, with no additional human intervention.

      The idea behind this is that the CNAs know what should be in the fields in the CVE report (especially the name and version string of the vulnerable software, as well as the name of the vendor). Since most CVE reports describe a vulnerability found in a product developed by the CNA itself, the CNA should have the final say on the contents of these three fields, not somebody at NIST who is using other criteria (often opaque at best and sometimes simply wrong) to determine those values.

8.      This idea has a lot of appeal. In fact, I and others have wondered for a long time why the NVD staff needs to create CPE names “manually”, when in theory an automated tool could do the same job with more accuracy – assuming that the CNA provides authoritative information in the product, version and vendor fields, which are all text fields. At this point in the meeting, it seemed obvious to me that the solution to the problem of creating CPEs (given that the NVD can no longer be trusted to do that regularly) was to require that somebody (presumably CVE.org, but perhaps the CNAs themselves) use an automated tool to create a CPE name for every affected product in each CVE report; the CPE name would be based on the exact product, version and vendor text fields in the report (a naive sketch of this idea appears after this list).

9.      However, by Monday (i.e., the day I’m writing this), I had changed my mind regarding this idea. Andrey Lukashenkov of Vulners, whose judgment I have great respect for, assured me on LinkedIn that it will be hard, if not impossible, to develop an automated tool to create CPE names (you can read our conversation here). Was this a dead end? In other words, since CPE seems to be on life support at best and there’s not currently any alternative to it, does this mean that automated vulnerability management is no longer possible, even if it was before?
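Before answering that question, here is a naive sketch of the automated CPE-generation idea from items 7 and 8, just to make it concrete. This is my own illustration, not a CVE.org or NVD proposal; it ignores the binding and escaping rules in the CPE 2.3 specification, and the many ways a product or vendor name can legitimately be written is exactly why Andrey’s caution is warranted.

```python
# Naive illustration of generating a CPE 2.3 name from the vendor, product
# and version text fields in a CVE record. My own sketch, not a CVE.org or
# NVD tool; a real generator would need to handle CPE 2.3 escaping rules and
# the naming ambiguities discussed throughout this post.
import re

def to_cpe_component(value: str) -> str:
    """Lowercase, replace whitespace with underscores, and drop characters
    that would need escaping under the CPE 2.3 formatted-string binding."""
    value = value.strip().lower()
    value = re.sub(r"\s+", "_", value)
    return re.sub(r"[^a-z0-9._-]", "", value)

def make_cpe(vendor: str, product: str, version: str) -> str:
    parts = [to_cpe_component(x) for x in (vendor, product, version)]
    # "a" = application; the remaining seven CPE attributes are wildcards
    return "cpe:2.3:a:" + ":".join(parts) + ":*:*:*:*:*:*:*"

# Hypothetical CVE record fields
print(make_cpe("Example Corp", "Widget Server", "2.3.1"))
# -> cpe:2.3:a:example_corp:widget_server:2.3.1:*:*:*:*:*:*:*
```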

No, it doesn’t. About 15 minutes before the meeting ended on Friday, the Director of Product Security for a different very large US-based software vendor joined the meeting. He seemed to agree with what I’ve just described, but he went on to make another suggestion; given that he’s a member of the CVE.org board, his suggestion carried a lot of weight with me. He suggested that, since the new CVE version 5.1 specification, which was adopted a few months ago by CVE.org, supports purl identifiers as an option, maybe the best course of action would be to train all the CNAs to support purl identifiers.

I was quite pleased to hear this person’s suggestion, since the whole reason the new CVE spec supports purl is that the SBOM Forum suggested it in our 2022 white paper on changes we proposed to fix the naming problem in the NVD (i.e., the problems with CPE, which are described on pages 4-6 of the paper). During that effort, Steve Springett and Tony Turner submitted a pull request to CVE.org to include purl in what was then known as the CVE JSON spec, but is known today (I believe) as simply the CVE spec. We were too late to get it into the 5.0 spec, but it was included in the 5.1 spec, which was adopted just a couple of months ago.

For reasons described at length in our paper, purl is by far superior to CPE, especially for open source software, where purl has literally conquered the world (to the extent that I have yet to hear of a single open source vulnerability database that is not based on purl). Purl excels at naming software that is available in package managers, but it currently doesn’t work for:

1.      Proprietary (“closed source”) software;

2.      Open source software written in C/C++, which is typically not available in package managers; and

3.      Any other open source software not available in package managers.

4.      Plus, there’s another type of product that is today identified by CPE names, but which can’t currently be identified by purls; I’m referring to intelligent devices. Our 2022 white paper suggests that the existing GS1 standards be used to identify devices in vulnerability databases (we were suggesting it for the NVD at the time, but they could be used in any vulnerability database). Those could work, but it would be nice if we could figure out a way to get purl to accommodate devices, since the GS1 standards come with a lot of baggage (and cost) that wouldn’t be needed in our application.

What’s essential in purl is that it’s a distributed naming system, in which individual software sources are responsible for the names of the software available on their source; in other words, the software source “controls its namespace”. For example, a package manager like Maven Central is responsible for the name of every software product available on Maven Central. This means that the combination of the name of the software source and the name of a product available within that source (as well as a version string applicable to that product) will always be unique; that is, Maven Central controls its namespace, so every product name in Maven Central will be unique in that namespace.

For each of the four above items that don’t currently work with purl, the way to make them work is to identify a way to have a controlled namespace for that product type. When the OWASP SBOM Forum wrote our 2022 paper, we suggested an idea for a controlled namespace for proprietary software; that is the short section titled “SWID” on pages 11 and 12 of the paper.

Our suggestion at the time was that software suppliers could create SWID tags and distribute them with the binaries they distribute to customers; with access to the SWID tag, the software customer could create a unique purl, which would allow the customer to identify that supplier’s product and version in a vulnerability database (at the same time as we wrote the paper, Steve Springett, a purl maintainer, submitted a pull request for a new purl type called SWID, which was accepted).

The big problem with this suggestion was that it doesn’t work for legacy software versions, since those binaries have already been distributed  (we realized this at the time, but we didn’t want to spend the 2-4 months that would be required to figure out a good solution to it). How will users of those versions find a SWID tag?

In my opinion, this wouldn’t create a big technical problem; I think there are multiple good options for accomplishing this. For example, a supplier could create a well-known location on their website like “SWID.txt”. This would include the name and version number of previous versions of their products, along with a SWID tag for each. A vulnerability management tool could retrieve that tag and easily create the purl that corresponds to it; the user could then look up vulnerabilities for the legacy software they use in a vulnerability database based on purl, although that database will need to support the SWID purl type.
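To make that idea a bit more concrete, here is a minimal sketch of how a vulnerability management tool might consume such a file and turn each entry into a swid-type purl. Everything about the file (its location, format and field names) is hypothetical, and the exact attributes of a swid purl should be checked against the purl specification before relying on this shape.

```python
# Purely illustrative sketch of the "well-known SWID.txt" idea described
# above. The file location, its format, and the field names are hypothetical;
# the swid purl type exists in the purl spec, but verify its required fields
# against the spec before relying on this shape.
import csv
import io
import requests
from packageurl import PackageURL

# Hypothetical file the supplier publishes, one row per legacy product/version:
# product_name,version,tag_id
url = "https://software.example.com/.well-known/SWID.txt"
rows = csv.DictReader(io.StringIO(requests.get(url, timeout=30).text))

for row in rows:
    purl = PackageURL(
        type="swid",
        namespace="Example Software Inc",   # hypothetical tag creator
        name=row["product_name"],
        version=row["version"],
        qualifiers={"tag_id": row["tag_id"]},
    )
    # The resulting purl could then be used to query a purl-based
    # vulnerability database that supports the swid type.
    print(purl.to_string())
```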

Of course, I don’t deny for a moment that there will be a huge number of questions to be resolved regarding how to extend purl to each of the four product types listed above, as well as how to implement purl support in CVE.org and other databases that don’t currently support it. Even if one or a small number of individuals could answer all these questions on their own, their answers will never be as good as those that will be produced by a group of industry leaders formed to address exactly these questions. In fact, that's the main rationale for the OWASP SBOM Forum’s Vulnerability Database Working Group (let me know if you would like to join the group).

So, as of our next meeting on July 23, the OWASP Vulnerability Database Working Group will start working on questions suggested above, including:

·        What is needed for CNAs to be comfortable with including purls in CVE reports (at the moment, it will just be for open source software products available in a package manager)?

·        What is needed for the CVE.org database to be able to support purl lookups?

·        Is the SWID tag proposal the best way to incorporate commercial software lookup capabilities into purl?

·        If it is, how can commercial software users best be informed (in a machine-readable format, of course) of the contents of the SWID tag(s) for the software products they utilize?

·        Similar questions regarding open source software not in package managers, as well as intelligent devices.

Of course, none of these questions are simple, and it could easily be 1-2 years before they are all answered to at least some degree of satisfaction. But, given that it looks like vulnerability management is close to impossible today, don’t you think this is a good thing to do?

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum and its Vulnerability Database Working Group, which works to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post. 

Saturday, July 13, 2024

Reminder: Webinar on SBOMs and Device Security

I would like to remind you that I’ll be participating in a webinar titled “Medical Device Software Security: Leveraging SBOMs and Cross-Industry Practices” next Wednesday at noon EDT/9 AM PDT, sponsored by Medcrypt. I want to point out that, other than the fact that medical devices are regulated by the FDA, there is no real difference, as far as software security is concerned, between medical devices and any other intelligent devices that are used for critical infrastructure purposes. The three participants have already met twice to discuss the content of our presentations, and I can assure you that – other than knowing that “HDO” refers to a healthcare delivery organization (such as a hospital) and “MDM” refers to a medical device maker – you won’t need any medical device background to understand the discussion.

However, I also want to point out that, in case you haven’t noticed from the topics of my posts lately, I’m primarily concerned about vulnerability reporting and identification now - especially in light of what looks more and more like the collapse of the National Vulnerability Database (NVD). SBOMs themselves are still important, but if there isn’t an easy-to-use source of up-to-date vulnerability data available, the primary use case for SBOMs, vulnerability management, becomes moot.

In the webinar, I will focus on questions about how device makers should report vulnerabilities, how they should coordinate reports with their patching schedules, etc. It should be an interesting discussion, and I want to thank Medcrypt for inviting me to participate in it.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum, which works to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit. Email me to discuss that.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 

Friday, July 12, 2024

NERC CIP and the cloud: The sad story of how we came close to fixing the problem in 2018, but didn’t


It’s good to see that interest is finally picking up in removing the barriers that prevent NERC entities with medium and/or high impact BES environments from utilizing many cloud-based services, especially those whose use would amount to utilizing either BES Cyber Systems (BCS) or Electronic Access Control or Monitoring Systems (EACMS) – like security monitoring services - in the cloud.

One good indicator of this interest is the fact that there seem to be a lot of people interested in serving on the new Standards Drafting Team, which is now being constituted and will presumably start meeting later this year. Unfortunately, I think it will be 5-6 years before their work is finished and the new “CIP cloud” standards are implemented, but I’m glad the process has at least started.

Fortunately, the new SDT doesn’t have to start from scratch. In fact, they can study the lessons to be learned from the story of the CIP Modifications SDT[i], which in 2018 outlined – and started to develop – a set of CIP changes that would have completely solved the cloud problem (although that’s not why they introduced them). Had they not overlooked one crucial step, there would probably be no need for a new SDT today; use of the cloud would already be a non-issue for any NERC entity, just as it is for almost any organization in any other industry today.

You can learn what happened in detail by reading (or at least skimming) these three posts that I wrote on the SDT’s ideas in 2018: Part 1, Part 2, and Part 3. Here’s what led up to the events of 2018:

1.      When CIP version 6 was approved in early 2016, FERC – as they usually do, at least when they approve new or revised CIP standards – ordered NERC to make some further changes, which I won’t discuss here. The SDT that was appointed to develop those changes was called “Modifications to CIP Standards”. That SDT is still in operation today.

2.      Usually, when FERC orders a new standard or changes to existing standards, a new SDT is chosen; this happened in 2016 as well. To guide the new SDT, a Standards Authorization Request (SAR) is normally developed that closely mirrors what FERC ordered. The general idea is that the new SDT should just focus on giving FERC what they asked for; it is risky to ask them to do more than that.

3.      However, it seems one of the members of the new SDT must have had bad karma, since the CIP Mods team’s SAR had one slight modification: They needed to figure out how to incorporate virtualization into the CIP standards. After all, how hard could that be?

4.      As it turns out, it’s quite hard. The team is still working to achieve that goal, even though they achieved their other goals about six years ago and even though I suggested at one point that it wouldn’t be such a bad thing if they just followed Sen. George Aiken’s advice for ending the Vietnam War: “Declare victory and leave”. After all, they had given FERC everything they asked for. Incorporating virtualization into CIP was never intended to be their primary goal.

When the SDT started focusing in earnest on virtualization in 2017 (or early 2018), they realized their job would be made much easier if they focused on two changes that would affect all the CIP standards (at the time, there were only ten standards: CIP-002 through CIP-011).

The first (and more significant) of these was changing CIP asset identification from being focused entirely on physical devices to being focused on systems. Even though CIP-002-5 had introduced the concept of BES Cyber System (BCS) and nominally made that the basis for all the CIP requirements, a BCS was defined as simply an aggregation of BES Cyber Assets – and those are individual physical devices. A NERC entity can’t identify its BCS without first identifying its BCAs. Since the BCA definition references Cyber Assets, the first step in the whole process is identifying devices that meet the Cyber Asset definition: “Programmable electronic devices…”.

The SDT proposed breaking that device connection through two steps. First, they would rewrite the BCS definition to include what was in the BES Cyber Asset definition, including 15-minute impact on the BES. Second, they would eliminate the terms Cyber Asset and BES Cyber Asset altogether, so a NERC entity would never again have to identify devices as the basis of their CIP compliance program; instead, they would simply look at all the systems they operate, both physical and virtual, and decide which of those have a 15-minute impact on the BES. There would be no need to consider how a system is physically implemented.

The result of this process would be that the entity has a list of their BES Cyber Systems, which they will then classify into high, medium and low impact[ii], based on the criteria in CIP-002 R1 Attachment 1.

This alone was a great idea, but the SDT realized it wasn’t enough; further changes to the CIP standards would be needed to make it workable. They made clear in their 2018 webinar (the slides are still available here; the recording has been taken down) that some of the existing CIP requirements would need to be rewritten to be more “objective based” and “technology neutral” (see slide 29).

Within the NERC community, these two terms have come to describe a requirement that states only the objective the NERC entity must achieve, not the means it uses to achieve it. The webinar gave CIP-007 R3 as an example of an objective based, technology neutral requirement (slide 26), and CIP-005 R1 (slide 27) as the opposite. In fact, the webinar described how (part of) CIP-005 R1 might be rewritten to make it objective based (slide 28).

When I listened to the webinar in 2018, I was very focused on the problems that complying with prescriptive CIP requirements (including CIP-005 R1, CIP-007 R2 and CIP-010 R1) poses for NERC entities, although I usually referred to my proposed solution as “risk based” or “plan based” requirements rather than “objective based” requirements. In practice, I believe these are all names for the same thing. I was really excited about the webinar (as shown by the fact that I wrote three posts about it, all linked earlier), since it seemed clear that implementing the changes the SDT suggested would fix both the problem I was concerned with and the problem they were concerned with: virtualization.

However, the SDT’s proposal would also have fixed another problem: CIP and the cloud. The biggest problem with CIP in the cloud is that the CIP requirements’ focus on individual devices (whether physical or virtual) makes it almost impossible for a cloud service provider (CSP) to furnish its CIP customers the evidence they need to demonstrate compliance with prescriptive requirements like CIP-007 R2 and CIP-010 R1.

After all, a CSP doesn’t want to track every physical or virtual device on which a BES Cyber System (or, more accurately, any part of a BCS) might have been installed over a three-year NERC audit period, then document how each of those devices complied with a large number of CIP requirements over that period. Yet that is what many CSPs would have to do if any of their offerings met the definition of BCS, EACMS (Electronic Access Control or Monitoring System) or PACS (Physical Access Control System).

The two elements of the CIP Modifications SDT’s 2018 proposal would have eliminated this problem, for two reasons. First, the CSP would no longer be obligated to track physical or virtual devices on which any part of a BCS or EACMS was installed during the audit period, and show that every requirement of CIP was complied with for each of those devices.

Second, the CSP would also no longer need to document compliance with prescriptive requirements like CIP-007 R2, which require compliance documentation for every device in scope, for every instance in which compliance was required. Instead, since all CIP requirements would be rewritten as risk-based, the CSP would simply have to document that it had developed and followed appropriate policies and procedures. In many cases, those policies and procedures would already be in place, due to the CSP’s obligation to comply with FedRAMP or ISO 27001.

However, the SDT’s proposal met an inglorious end. As the SDT got to work drafting the two parts of the proposal (moving to systems as the basis of CIP compliance and making certain prescriptive requirements objective based), the larger electric utilities (especially the investor-owned utilities, or IOUs) realized that the proposed changes would essentially require them to throw away much of the training, software and other investments they had made for CIP compliance over the years.

These utilities (including their main trade organization, the Edison Electric Institute, or EEI) decided there was no way they were going to make such a drastic change, especially since it would probably have to be made all at once when the new standards were implemented. They put their (collective) foot down, and that was the end of the SDT’s proposal. Instead, the SDT went back to their original idea: keep CIP’s focus on devices, but define virtual devices as well as physical ones. The requirements could remain almost the same, but their applicability would be expanded. This requires making substantive changes to most CIP requirements and, in most instances, getting those changes approved by the NERC ballot body. That effort is ongoing today and shows no sign of ending soon.

When I heard this had happened (at the end of 2018, at a NERC CIPC meeting in Atlanta), I was quite disappointed, and initially thought the industry had been let down by the big boys. However, I later came to realize it was quite unrealistic to expect so many organizations to make such a drastic change all at once. Since I didn’t see any way that these attitudes would change easily, I started to think there wasn’t much hope that NERC entities with high and/or medium impact BES environments would ever be able to make full use of the cloud.

However, I clearly wasn’t thinking creatively enough; fortunately, some other people were more creative than I was. Last December, the NERC Standards Committee approved a new SAR; that SAR is the basis for the drafting team that is now being formed. One feature of the new SAR was something I hadn’t thought of at all: there should be two “tracks” for NERC CIP compliance, one applicable to on-premises systems and one applicable to cloud-based systems.

Under the SAR, any NERC entity that has no use for the cloud and is happy keeping all their OT systems on-premises will be welcome to do so; the CIP requirements they must comply with will be almost unchanged from the ones they follow today. Meanwhile, entities that do want to start using the cloud can follow the new “cloud CIP track” for their cloud-based systems, although they will need to continue following the “on-premises track” for their on-premises systems.

This sounded messy to me until I “tested” the idea in preparation for a webinar I did with Lew Folkerth of RF last October. I was quite surprised to find that implementing the two tracks using the existing CIP-002 wording was much easier than I had anticipated. This is because CIP-002 never mentions either Cyber Assets or BES Cyber Assets. Instead, it starts with identification of BES Cyber Systems and lets the NERC entity discover for themselves that they can’t do that without first identifying Cyber Assets and BES Cyber Assets.

Thus, it’s easy to divide “BES Cyber Systems” into two types: “On-premises BCS” and “Cloud BCS”. To identify the former, the NERC entity first needs to identify Cyber Assets and BES Cyber Assets. To identify Cloud BCS, however, the entity can start directly with the systems implemented in the cloud. My guess is that, as NERC entities gain experience complying with the “Cloud CIP” standards and realize how much simpler compliance is for cloud-based systems, they will be more inclined to utilize the cloud for their OT systems.
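
As a purely illustrative sketch (again in Python, and again not anything that appears in CIP-002 or the SAR), the two-track split amounts to a simple partition of the entity’s system inventory. The deployed_in_cloud attribute is my own invention for this example; the SAR defines no such field.

```python
# Illustrative only: a toy partition of a system inventory into the two "tracks".
# The "deployed_in_cloud" attribute is invented for this example.
def split_into_tracks(systems):
    on_premises_bcs = [s for s in systems if not s.deployed_in_cloud]  # identified via Cyber Assets / BCAs
    cloud_bcs = [s for s in systems if s.deployed_in_cloud]            # identified directly as cloud-based systems
    return on_premises_bcs, cloud_bcs
```

On-premises BCS would continue down today’s device-based identification path, while Cloud BCS would be identified and assessed under the new cloud track.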

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I initially called this the “CIP v7 SDT”. This was because I thought the changes would all be made as part of a new set of revisions to all the standards, as had always been done previously (i.e., CIP versions 1-6). However, the CIP Modifications drafting team could see there would be different timelines for approval of the changes they were working on; the idea of “revving” all the standards at once no longer made sense. That is why there are at least 7 or 8 version numbers attached to the current CIP standards. There will almost certainly never be a single version number for all CIP standards, nor is that necessary. 

[ii] Of course, identifying low impact BCS is a very complicated topic, which I don’t want to get into here.