Friday, June 28, 2024

Cloud CIP, part 5: Has anyone seen information about the new BCSI requirements? Should we form a search party?

I (and a few others involved in the NERC CIP compliance space) have been puzzled by a strange situation. It has to do with the fact that, after multiple years in which BES Cyber System Information (BCSI) could not be stored or manipulated in the cloud, two revised CIP standards, CIP-004-7 and CIP-011-3, came into effect on Jan. 1. They are designed to finally make BCSI in the cloud “legal” for NERC entities with high and/or medium impact BES assets.

When an important change like this has happened in the past, my experience has always been that, both before and after the effective date, there has been a lot of outreach by NERC and the Regional Entities and often the trade associations, aimed at helping NERC entities understand their obligations under the new standard(s). After all, changes in the CIP Reliability Standards are intended to make the Bulk Electric System (BES) more reliable; it is in literally nobody’s interest if NERC entities don’t understand the new standards and as a result, either don’t implement them at all or implement them badly.

However, I guess there’s a first time for everything. So far, I have seen almost no written or oral efforts (such as white papers or webinars) to explain how to comply with the two revised standards.

The one exception to this is the document called “Usage of Cloud Solutions for BES Cyber System Information (BCSI)”. This was prepared by the NERC Reliability and Security Technical Committee last June. It was approved by NERC last December as Implementation Guidance, which means that NERC auditors are supposed to give “deference” to its contents.

This is a good document and of course it’s worth reading, but, like most NERC CIP Guidance documents that I’ve read, it steers clear of providing any definite recommendations for compliance procedures; instead, it provides “examples or approaches to illustrate how registered entities could comply with a standard” (as stated in the Introduction). This makes it useful for identifying steps a NERC entity might take as they implement compliance. However, it doesn’t even try to recommend components of an effective compliance program.

A good example of this is found on page 13 in Scenario 2. There, we read:

Scenario 2: The Responsible Entity authorizes CSP personnel to have persistent access to BCSI. This could consist of access to BCSI in clear text or where the individual has access to the encrypted BCSI and the key(s) to unencrypt it. Compliance evidence examples could include but are not limited to:

a. Documented process for how CSP personnel provisioned access is authorized based on need, whether authorized directly by the Responsible Entity or indirectly by a contractual agreement with the CSP (my emphasis)

This says:

1. If a NERC entity entrusts their BCSI to a third party service provider (it might be a cloud-based security service provider or a SaaS provider), they also must make it possible for the service provider’s employees to have “provisioned access” to BCSI. This means not just that the employee can see and manipulate the encrypted data, but that they will in some cases need either to have access to the decryption key at least briefly, or to see and manipulate the decrypted data before re-encrypting it. The fact is that a SaaS application can’t utilize encrypted data; the data must first be decrypted.

2. CIP-004-7 Requirement 6 Part 6.1 (where the concept of provisioned access comes into play) requires that the Responsible Entity authorize “provisioned access to BCSI”. For the entity’s own employees, that is quite understandable; when a new employee needs provisioned access, they follow the entity’s standard procedures for requesting and receiving provisioned access.

3. However, if the CSP has responsibility for some of the entity’s BCSI and they wish to authorize provisioned access for a new employee, strict compliance with R6.1 requires that they first contact the NERC entity whenever they need to do this, even if the situation can be considered an emergency. Some (or even most?) cloud-based security and SaaS providers will consider this requirement to be impossible to fulfill.

4. The words “by a contractual agreement with the CSP” in the guidance document indicate that it should be acceptable (to the auditors) for NERC entities with medium and/or high impact systems to contractually delegate to the third party the authority to provision BCSI access for individual employees.

5. There’s another gem hidden in these words: Not only will the NERC entity need to document that they have this contract term in place, but they will need to obtain from the SaaS provider documentation showing every instance in which they provisioned access to the entity’s BCSI for one of their employees. The entity will require both types of evidence for their next audit.

Why am I making such a point of this? It’s because, even though the seven words “by a contractual agreement with the CSP” seem to be almost a throwaway phrase in just one of many scenarios listed in the guidance document, the words turn out to be tremendously significant for CIP-004-7 compliance (in fact, they helped resolve what seemed in early December to be a showstopper issue for SaaS that utilizes BCSI, which I described in this post).

A NERC entity shouldn’t have to perform in-depth analysis like the above, just to learn what they need to require of their CSP or SaaS provider for compliance with CIP-004-7 Requirement 6 Part 6.1. Instead, they ought to have a white paper that lays these things out in black and white, or at least access to a webinar recording that describes them.

I’ll be honest: I find it disturbing that, more than six months after the compliance date for the two revised standards, CSPs and SaaS providers are probably not collecting the documentation required for NERC entities (with medium and/or high impact BCS) to prove compliance with CIP-004-7 Requirement R6 (as well as CIP-011-3 Requirement R1 Part 1.2, which will also require documentation from the service provider). In other words, in theory those entities are out of compliance with one or more of the BCSI requirements.

Yet, at the same time, nobody seems very concerned about this. In fact, the only reason I know about it is that Kevin Perry – vice chair of the “CSO706” team that drafted CIP versions 2 and 3, and Chief CIP Auditor for the SPP Regional Entity for eight years until he retired in 2018 – told me about it. Being a friend of Kevin’s[i] shouldn’t be a prerequisite for understanding the new BCSI requirements – both what’s required of the NERC entity and what the entity needs to require of their security service and/or SaaS provider(s). But that seems to be the case.

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Kevin has written a good paper on compliance with the new and revised BCSI requirements. If you would like to have it, email me and I’ll send it to you.

Thursday, June 27, 2024

More on “Breaking free of the NVD”


Andrey Lukashenkov of Vulners is one of the best vulnerability researchers around; he has been an active participant in the OWASP Vulnerability Database Working Group. He posted a very helpful (and very relevant) comment on my post from yesterday on LinkedIn (whenever I write a post in this blog, I post a link to it on LI). Here is what he said, with some amplification (enrichment? 😊) by me:

I wanted to show an example to illustrate the problem and show that even before the collapse, NVD-provided “enrichment” was sometimes misleading and subpar. Let's look at CVE-2023-32019 regarding MS Windows Server 2022:

Here is the CVE record in the NVD: https://nvd.nist.gov/vuln/detail/CVE-2023-32019#match-12900863.

Here is the CPE name for the software in question, which appears in the NVD’s CVE record: cpe:2.3:o:microsoft:windows_server_2022:-:*:*:*:*:*:*:*

If you’re experienced in interpreting CPEs, you may recognize that the NIST (NVD) staff member who created the CPE name didn’t include a version string in it (the “-” in the version field). Therefore, since this CPE name is listed as vulnerable to CVE-2023-32019, it (presumably unintentionally) asserts that every version of Windows Server 2022 is affected by that vulnerability.

This means that, if you query the NVD for any Windows Server 2022 version, including the one released yesterday, it will always appear to be affected by the same CVE, even though it’s very doubtful that it is.
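To see why, it helps to break the CPE name into its fields. Here is a minimal sketch in Python (my own illustration, not an official parser) of how a CPE 2.3 formatted string decomposes, showing that the version field of the NVD’s name carries no usable version information:

```python
def parse_cpe23(cpe: str) -> dict:
    # A CPE 2.3 formatted string has 13 colon-separated components:
    # "cpe", "2.3", then 11 attribute fields. (This naive split ignores
    # escaped colons, which don't occur in this example.)
    fields = cpe.split(":")
    assert fields[0] == "cpe" and fields[1] == "2.3"
    keys = ["part", "vendor", "product", "version", "update", "edition",
            "language", "sw_edition", "target_sw", "target_hw", "other"]
    return dict(zip(keys, fields[2:]))

nvd_cpe = "cpe:2.3:o:microsoft:windows_server_2022:-:*:*:*:*:*:*:*"
parsed = parse_cpe23(nvd_cpe)
# The version field is "-", not a concrete build number, and the NVD
# record attaches no version range to it -- so a matcher has nothing
# to constrain the match, and every version appears affected.
print(parsed["vendor"], parsed["product"], parsed["version"])
```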

The CVE Numbering Authority (or CNA; for a discussion of CNAs and what they do, see yesterday’s post) for Microsoft software is…get ready for it…Microsoft. Microsoft is one of the best CNAs out there when it comes to the completeness of the information it provides. Their CPE name is cpe:2.3:o:microsoft:windows_server_2022:10.0.20348.1787:*:*:*:*:*:*:*.

Microsoft’s CVE record on this vulnerability lists 11 Microsoft products, each of which has an affected version range. For Windows Server 2022, Microsoft lists two version ranges: “affected from 10.0.0 before 10.0.20348.1787” (which matches what’s in their CPE name) and “affected from 10.0.0 before 10.0.20348.1784”. Note these are almost identical ranges.
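Ranges like these are easy to evaluate mechanically. A quick sketch (my own, using simple numeric comparison of dotted build numbers, which works for Windows build strings like these) of what a tool does with an “affected from … before …” range:

```python
def in_range(version, lo, hi):
    # "affected from LO before HI": numeric, field-by-field comparison
    # of dotted build numbers; the lower bound is inclusive, the upper
    # bound ("before") exclusive.
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(lo) <= as_tuple(version) < as_tuple(hi)

# Microsoft's range for Windows Server 2022:
# "affected from 10.0.0 before 10.0.20348.1787"
print(in_range("10.0.20348.1784", "10.0.0", "10.0.20348.1787"))  # affected build
print(in_range("10.0.20348.1787", "10.0.0", "10.0.20348.1787"))  # fixed build
```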

I don’t know why there are two ranges, but the point is these were presumably on the CVE report that the NIST/NVD person was looking at when they created the CPE name. Yet, the NVD CVE record that the person created doesn’t list any version range for Windows Server 2022.

That omission means every version of Windows Server 2022 appears to be vulnerable to CVE-2023-32019 – so a lot of software security professionals will waste a lot of time searching for this vulnerability on Windows Server 2022 versions that aren’t vulnerable at all. Moreover, it’s inevitable that a lot of end user organizations will reach out to Microsoft, demanding that their non-vulnerable version be patched – causing a lot of unnecessary headaches for help desk staff…all because an NVD staff member (or contractor) skipped a vital step.

On closer inspection, NVD data turns out to be noisy. There isn't much value in a CPE like this. If you use it, the number of false positives will be staggering.

Andrey’s concluding advice is, “Wherever you get your CVE data, you want them with both the NVD and CNA pieces[i] combined and cross-checked.” No kidding. Thanks for this great example, Andrey!

Of course, I’m not trying to say in this post that the CNAs never make mistakes while the NVD always does. But the point is that the CNA is often the organization that developed the vulnerable software; it should be the authoritative source for information about that vulnerability. Yet, as I mentioned in yesterday’s post, the NVD resists providing information on how it creates CPE names – so there’s no way for anyone else to learn from its mistakes. The NVD clearly wants to keep creating CPE names, since, frankly speaking, it doesn’t add a lot of other value to the data in the NVD (all of which is originally created by the CNAs, who are part of CVE.org). The NVD staff also often overrides the CNA’s CVSS score, even though the CNA should be in charge of that as well, for the same reason it should be in charge of the CPE name.

Which reminds me of something that was said in this week’s OWASP Vulnerability Database Working Group meeting: After hearing a litany of the NVD’s failings, I asked if anyone could name one way in which the NVD is helping the situation. The Director of Product Security for a large software developer (who until recently was a big defender of the NVD) gave a one-word answer: “Obstruction”.

I was going to title yesterday’s post “Patience with the NVD is wearing thin.” Maybe I should have stuck with that title.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum, which works to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit. Email me to discuss that.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] By “CNA piece”, I assume Andrey’s referring to the CVE record in CVE.org, since that doesn’t yet have the “benefit” of being “enriched” by the NVD. It just displays what the CNA included in their CVE report.


Wednesday, June 26, 2024

Breaking free of the NVD


In April, I announced the new OWASP Vulnerability Database Working Group, part of the OWASP SBOM Forum. The group was formed to try to make sense of the many options available for vulnerability databases, especially since the seeming collapse of the National Vulnerability Database (NVD) made it imperative for all members of the software security community to learn what their options are (there have always been lots of options, but when the NVD was working reasonably well, many organizations were happy to put all their eggs in that basket. Unfortunately, the NVD is no longer working well, in case you didn’t know).

That group, which meets biweekly, has had some very interesting discussions (the meeting notes and chats are here). In the meeting this week, we discussed the problems caused by the fact that the NVD has stopped “enriching” CVE reports by adding CPE names. That discussion revealed a problem that I didn’t know existed. Since understanding the problem requires understanding how information gets into the NVD in the first place, I’ll start there.

The NVD is a database of software vulnerabilities, which are identified by a CVE number (e.g., CVE-2021-44228, the famous log4j – actually, log4shell – vulnerability). CVE numbers are maintained in a database operated by the MITRE Corporation under a contract with DHS. The database used to be called just “MITRE”, but now it’s officially known by its URL: cve.org. While MITRE personnel run cve.org day-to-day, they report to an independent board composed of representatives from private industry and government (including CISA and the NVD).

Like probably most people, I used to think that vulnerabilities were reported by independent researchers and white hat hackers directly to MITRE, and that the developer of the software is not usually involved in this process. However, that’s literally the opposite of the truth. In fact, almost all CVEs are reported by the supplier of the software itself in a CVE report.

A CVE report needs to be created by a “CVE Numbering Authority” (CNA), which assigns a CVE number to the vulnerability. In most cases, the CNA is a large software developer – Oracle, Red Hat, Microsoft, HPE, Schneider Electric, etc. Some CNAs just report vulnerabilities discovered in their own software. Others, like Red Hat and GitHub (a division of Microsoft), advertise that they will help other developers (within a certain scope, like “open source projects” or a particular industry or country) create CVE reports for vulnerabilities they’ve discovered in their products.

A developer that isn’t a CNA but wants to report a vulnerability in one of their products can contact a CNA that has them within their advertised scope. And if a developer can’t find a CNA that seems likely to be able to help them, they can contact MITRE itself, which is the “CNA of Last Resort” (CISA is the CNA of Last Resort for Industrial Control Systems and medical devices).

Of course, the CVE report doesn’t just describe a vulnerability. It always needs to point to at least one product (software or an intelligent device) that is subject to the vulnerability. In at least 80% of cases, the product in the report was developed by the CNA that created the report.

There are two ways in which the product subject to the CVE can be referred to in the report. The default is always a textual description, e.g. “Cisco Crosswork Network Controller version 3.0.0” – and it’s safe to say that every CVE report includes such a textual description of the product. However, a user searching for vulnerabilities in a product they utilize will almost never be able to find the product in a vulnerability database like the NVD simply by searching on a text description; this is because there are many ways in which the product can be identified textually (to use the above example, that product might be described as “Cisco Crosswork Network Controller v3.0.0”, “Cisco, Inc. Crosswork Network Controller version 3.0.0”, “Cisco Crosswork Network Controller version 3.0”, etc. None of these would find a match if entered in the NVD).
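The failure mode is easy to demonstrate: an exact-text search finds only the one variant that happens to match, character for character.

```python
# Four plausible descriptions of the same product (from the example above):
descriptions = [
    "Cisco Crosswork Network Controller version 3.0.0",
    "Cisco Crosswork Network Controller v3.0.0",
    "Cisco, Inc. Crosswork Network Controller version 3.0.0",
    "Cisco Crosswork Network Controller version 3.0",
]

query = "Cisco Crosswork Network Controller version 3.0.0"
# An exact string match misses three of the four variants:
matches = [d for d in descriptions if d == query]
print(len(matches), "of", len(descriptions))
```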

This is why there should always be a machine-readable software identifier on the CVE report; a user that knows the identifier for a product can search for it in a vulnerability database like the NVD by entering that identifier. Currently, the only identifier supported by the NVD is the CPE name. If the user enters the correct CPE name for the product, the search result will either describe any vulnerabilities to which the product is subject or return a null result, which the user can trust to be an indication that no vulnerabilities have been reported for that product.[i] If they don’t know the correct CPE name for the product (and, unlike the purl identifier, the CPE name can’t be definitively predicted from information available to the user), they’re out of luck.
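For the curious, this is roughly what an automated CPE-based lookup looks like against the NVD’s public JSON API (the 2.0 endpoint and its cpeName parameter are documented on the NVD’s developer pages; treat the details here as a sketch and check those pages before relying on them):

```python
from urllib.parse import urlencode

# The NVD 2.0 CVE API accepts a full CPE 2.3 name via the cpeName parameter.
BASE = "https://services.nvd.nist.gov/rest/json/cves/2.0"
cpe = "cpe:2.3:o:microsoft:windows_server_2022:10.0.20348.1787:*:*:*:*:*:*:*"
url = f"{BASE}?{urlencode({'cpeName': cpe})}"

# An HTTP GET on this URL returns a JSON document whose "vulnerabilities"
# array is empty (a null result) if no CVE is associated with this exact
# CPE name -- which, per the footnote, is only trustworthy if the name
# itself is correct.
print(url)
```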

When the CNA creates the CVE report, they should include a CPE name for the product or products affected by the vulnerability. However, in the past the CNAs have often not done that. One reason for this is that the CNA may not feel comfortable creating the CPE name, because the specification isn’t easy to understand. Another reason is that the NVD, when they receive the CVE report from CVE.org, is supposed to “enrich” it with information that they provide; one of those pieces of information is the CPE name. In many cases, if a CNA included a CPE name in a CVE report, it was overwritten when the NVD enriched the report (this has also happened a lot with the CVSS score). The result was that, even when the CNA included a CPE name in the report, the CPE name in the NVD was the one that a NIST employee had created, not the one the CNA had included.

Of course, as long as a user can learn the CPE name in the NVD (perhaps through the vendor of the product), this isn’t a terrible situation. However, on February 12, 2024, the NVD abruptly reduced the number of CVE reports that they enriched to almost zero; while this has recovered to some degree, it’s still far below where it should be[ii].

Even that wouldn’t be a terrible problem if the CNAs simply created their own CPEs. CVE.org is pressing them to do that, and the five or six largest CNAs (which account for the vast majority of CVE reports) are doing so, at least for reports of vulnerabilities in their own products. The problem is that most of the CNAs aren’t including CPE names in their CVE reports. This makes those reports unusable in most widely used tools, since those tools all depend on being able to automatically find a product in the NVD; manual searches are close to useless.

We discussed this issue in the meeting of the OWASP Vulnerability Database Working Group this week. The Directors of Product Security of two of the largest software developers in the US (both large CNAs) were in the meeting, and both pointed to a big reason why many CNAs aren’t including CPE names in their reports: since the NIST people who enrich the CVE reports almost always must choose one of many different vendor names (Microsoft, Microsoft Inc., Microsoft Europe, etc.), product names (Microsoft Word, Microsoft Office Word, Word, etc.), and more, there is no way up front to know for certain what choices they’ll make. If the CNA enters the CPE name it believes is appropriate, the NVD staff may override that with their own CPE name (and this has happened a lot in the past).

These two large CNAs (and many other people, of course) would like to learn what rules the NVD staff members follow when they create CPE names, so they can make sure their staff members follow those same rules when they create CPE names for CVE reports. However, nobody has been able to get that information from the NVD (my guess is this is because the NVD doesn’t have rules to follow, but won’t admit that).

Unfortunately, there’s probably no near-term solution to this problem, other than for CVE.org to train the CNAs on how they should be creating CPE names – and to hope the NVD doesn’t suddenly start creating its own CPE names again.

However, given that there’s no definitive way to identify values for the fields included in a CPE name (vendor name, product name, etc.), there will never be a real solution to this problem as long as CPE is the only option for naming software in the NVD. The ultimate solution to this problem is to take advantage of the fact that the new CVE version 5.1 specification (formerly the “CVE JSON specification”) includes the capability to utilize purl identifiers.

If a CNA adds a purl identifier to the CVE report (they probably need to include a CPE as well), and if the vulnerability database supports purl (which isn’t the case with the NVD and won’t be anytime soon; CVE.org should be supporting it now, although there are probably few if any purls in that database yet), a user will be able to find recent vulnerabilities for a product by searching on its purl. The purl should always be predictable, based on information the user already has: the location from which they downloaded the package (e.g. Maven Central), the name of the package in that location, and the version string for the package in that location (the SBOM Forum white paper goes into a lot of depth in explaining why this is the case).
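That predictability is the whole point of purl. A minimal sketch of assembling one from exactly those three pieces of information (the helper function is my own; the canonical encoding rules live in the purl specification):

```python
from urllib.parse import quote

def make_purl(ptype, namespace, name, version):
    # Package URL layout per the purl spec: pkg:type/namespace/name@version.
    # Segments are percent-encoded; many types (like maven) use a namespace,
    # others (like pypi) don't.
    ns = f"{quote(namespace, safe='')}/" if namespace else ""
    return f"pkg:{ptype}/{ns}{quote(name, safe='')}@{quote(version, safe='')}"

# log4j-core from Maven Central: the download location (maven), the package
# name, and the version string are all things the downloader already knows.
print(make_purl("maven", "org.apache.logging.log4j", "log4j-core", "2.14.1"))
# -> pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1
```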

As I discussed in this post last year, purl is already used to identify software in almost every vulnerability database in the world that isn’t based on CPE (in other words, every database other than the NVD and the databases built on it). However, currently purl can only be used to find open source software packages in vulnerability databases, not proprietary (“closed source”) products.

In the SBOM Forum’s paper, we described a scheme – based on a suggestion from Steve Springett, the leader of the OWASP CycloneDX and Dependency-Track projects and also a purl maintainer – in which a software developer creates a SWID tag for each new product and version and makes that tag available with the binaries. As we were writing the paper, Steve got a new purl type called SWID added (each download location, usually a package manager, normally has its own purl type). If a user has the SWID tag for a product and wants to find out about vulnerabilities in it, they will be able to create a purl using just 3 or 4 fields from the SWID tag (the SWID spec supports about 80 fields, but only a few of them are needed to create the purl).

If the CNA that created the CVE report (say it’s for CVE-2024-12345) included a purl in the report for one of their proprietary products, they presumably based it on the SWID tag (since they probably work for the developer that created both the product and its tag). Thus, the purl the user enters in their search (which was developed using the SWID tag they found on the developer’s website) should always match the purl associated with the CVE number. This means the user should always be able to find out that their product is vulnerable to CVE-2024-12345. This sort of certainty is never possible with CPE.
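Here is a sketch of that flow: pull the few needed fields out of a (hypothetical) SWID tag and assemble a swid-type purl. The tag values and the exact purl field layout are illustrative; the authoritative encoding rules are in the purl-spec definition of the swid type.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical SWID tag. The SWID spec allows ~80 fields,
# but only a handful are needed to build the purl.
swid_xml = """<SoftwareIdentity
    xmlns="http://standards.iso.org/iso/19770/-2/2015/schema.xsd"
    name="Example Product" version="4.1.0"
    tagId="example.com-example-product-4.1.0">
  <Entity name="Example Corp" regid="example.com"
          role="softwareCreator tagCreator"/>
</SoftwareIdentity>"""

NS = {"swid": "http://standards.iso.org/iso/19770/-2/2015/schema.xsd"}
root = ET.fromstring(swid_xml)
entity = root.find("swid:Entity", NS)

# Illustrative layout: pkg:swid/<creator>/<regid>/<name>@<version>?tag_id=...
purl = (
    f"pkg:swid/{entity.get('name').replace(' ', '%20')}/{entity.get('regid')}/"
    f"{root.get('name').replace(' ', '%20')}@{root.get('version')}"
    f"?tag_id={root.get('tagId')}"
)
print(purl)
```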

The big fly in the ointment currently is that what I’ve just described only applies to new software products or versions, not to existing or legacy ones. There needs to be some mechanism by which a user of a legacy product version can find a SWID tag for their version as well. The good news is that it shouldn’t be hard to create such a mechanism. For example, I’ve suggested that a software supplier could have a known location on their website called maybe “SWID.txt”. It would provide a list of products and versions, along with a SWID tag for each. A tool could search on the product and version and find the SWID tag; using that, the tool could create the purl for the product/version.[iii]
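Under the stated assumptions (the file name, its format, and the field layout are all hypothetical), the lookup tool for such a file could be as simple as:

```python
# Hypothetical SWID.txt published at a known location on the supplier's
# website: one comma-separated line per product/version, ending with the
# tagId of the corresponding SWID tag.
swid_txt = """\
Example Product,4.0.0,example.com-example-product-4.0.0
Example Product,4.1.0,example.com-example-product-4.1.0
"""

def lookup_tag_id(text, product, version):
    # Return the tagId for an exact product/version match, else None.
    for line in text.splitlines():
        name, ver, tag_id = line.split(",")
        if name == product and ver == version:
            return tag_id
    return None

print(lookup_tag_id(swid_txt, "Example Product", "4.1.0"))
```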

Of course, there would be other ways to make the SWID tag information available to users of legacy products and versions. In fact, there’s no reason why SWID tags even need to be used for this purpose. There just needs to be a way for the supplier to make information required to identify their products available to users of both current and legacy products. There are lots of ways this could be done.

I would love to see the OWASP Vulnerability Database working group address this task, but currently it’s beyond our means. If your organization might be interested in supporting this work through man (or woman) power or a donation to OWASP (or both), please drop me an email.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum, which works to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit. Email me to discuss that.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] Unfortunately, because of deficiencies in the NVD, a null result for a vulnerability search can mean many other things, such as that the user unknowingly fat-fingered the CPE name. This and other problems with CPE are described on pages 4-6 of the OWASP SBOM Forum’s 2022 white paper linked above. 

[ii] CISA has tried to help out with their Vulnrichment program, but that only addresses a small fraction of the non-enriched records. In addition, some CNAs report that there are problems with CISA’s work. 

[iii] In fact, this could be simplified if the supplier listed the purl along with the SWID tag, since there should always be a one-to-one correspondence between the two.

Wednesday, June 19, 2024

Are you confused about what’s going on with NERC CIP-003? If not, you should be…

 

As seems to happen frequently when new CIP standards are being developed or existing ones are being revised by different drafting teams, the situation with CIP-003 is very confusing now. The "Modifications to CIP-003" drafting team (2023-04) recently posted for comment the link to the first draft of the new CIP-003-11. However, immediately below it they displayed the link to the first draft of CIP-003-12. At first glance, it might seem there will be two versions of CIP-003 in effect soon. Will NERC entities be allowed to choose which one they want to comply with?

Meanwhile, the "CIP Modifications" drafting team has also posted CIP-003-12 (which that team drafted), although they're not soliciting comments on that now. And if you’re not satisfied with just two new versions competing for your attention, there are also versions CIP-003-9, CIP-003-10, CIP-003-Y, and CIP-003-A. All of these are available on one of the two drafting teams’ websites and are still somewhere in the approval process. Isn’t choice wonderful?

Last week, I tried to make sense of all this. Below is what I came up with. Note that I’ve divided my comments into two sections: one describing what the “Modifications to CIP-003” SDT is doing and the other describing what the “CIP Modifications” SDT is doing. Spoiler alert: There will never be more than one version of a NERC Reliability Standard in effect at the same time, although that doesn’t tell us which of these versions will “win out”, or even whether the ultimate winner won’t be a different version like CIP-003-13 or CIP-003-14.

Do you have your scorecards ready? Here we go…the battle of the new CIP-003 versions!

Modifications to CIP-003 Standards Drafting Team:

  1. CIP-003-9 was developed in response to questions that came up when CIP-013, the supply chain security standard, was developed starting in 2016. CIP-013 just applies to medium and high impact BES Cyber Systems. There was concern at the time (on the part of both NERC and Congressional staff members) that there needed to be some supply chain controls that applied to low impact BCS; a survey revealed that the most significant source of supply chain cybersecurity risk to low impact BCS was remote access by vendors. CIP-003-9 was drafted to address these concerns.
  2. In 2021, the NERC Low Impact Criteria Review Team recommended revisions to CIP-003 to require controls for low impact assets to "authenticate remote users, protect the authentication information in transit, and detect malicious communications [to] assets containing low impact BES Cyber Systems with external routable connectivity." The team recommended that these changes be added to CIP-003-9, which was already in development.
  3. Before that could happen, in 2023 FERC approved CIP-003-9, with an implementation date of April 1, 2026. The FERC-approved standard includes what’s in CIP-003-8 (the current version, which came into effect in 2020) plus a new Section 6 in Attachment 1 (on page 23). That section requires "6.1 One or more method(s) for determining vendor electronic remote access; 6.2 One or more method(s) for disabling vendor electronic remote access; and 6.3 One or more method(s) for detecting known or suspected inbound and outbound malicious communications for vendor electronic remote access." 
  4. The new CIP-003-11 consists of CIP-003-9 with language added to Attachment 1. Part of that language is just the vendor remote access language that was in Section 6 of Attachment 1 in CIP-003-9. The drafting team decided to move that into Section 3, where the other Electronic Access Controls are addressed.
  5. The other part of the added language is what was developed to fulfill the 2021 recommendation in item 2 above; it was also added to Section 3. That addition reads, “3.1.3 Authenticate each user prior to permitting access to a network(s) containing low impact BES Cyber Systems, through which user-initiated electronic access applicable to Section 3.1 is subsequently permitted; 3.1.4 Protect user authentication information for user-initiated electronic access applicable to Section 3.1.3 while in transit between the Cyber Asset outside the asset containing low impact BES Cyber System(s) and the authentication system used to meet Section 3.1.3, or the asset containing low impact BES Cyber System(s)”.
  6. The fact that the entire set of language in CIP-003-9 has been incorporated into CIP-003-11 (and also into CIP-003-12; see below), along with the new language recommended by the Low Impact Criteria Review Team in 2021, is a sure indication that CIP-003-9 will not be implemented, even though it has been approved by FERC. In other words, when FERC approves either CIP-003-11 or CIP-003-12, they will also announce that CIP-003-9 will not be implemented.[i]
  7. CIP-003-11 is just entering the comment period before its first ballot. Most new or revised CIP standards have required at least four ballots before being approved by NERC; after that, the new standard goes to FERC, which can take 6-18 months to approve it. Finally, there's the implementation period, which will be three years (as it is in CIP-003-9). In other words, don't look for CIP-003-8 to be replaced for at least the next 5-6 years.[ii]
CIP Modifications SDT
Meanwhile, back at the ranch…
While all of this was going on, CIP-003-10 was drafted by the CIP Modifications Drafting Team, which is in the middle of the huge task of adding virtualization to all the CIP standards. The new CIP-003-12 is just a virtualized version of CIP-003-11. Since the virtualization standards (including CIP-003-12) won't come into effect until they all do, and since that day is probably still years away, I think most NERC entities should consider CIP-003-11 to be the next version of CIP-003 they’ll have to comply with. 

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] There was one previous case where a complete set of CIP standards, “CIP version 4”, was approved by FERC, yet one year later FERC approved CIP version 5 and said v4 wouldn’t take effect. I remember that incident quite well. In fact, less than a month before FERC announced they would approve v5, I participated in a webinar sponsored by EnergySec (which drew about 600 attendees) entitled “Get Ready for CIP Version 4!” I call this my “Dewey beats Truman moment”. 

[ii] Final approval of CIP-003-11 might also be delayed because, before it comes into effect, the CIP changes required to enable full use of the cloud will be developed and ready to come into effect; if so, a new version (CIP-003-13?) that incorporates those changes may be required.

Saturday, June 15, 2024

Webinar on SBOMs and medical devices



I was honored recently to be asked to participate in a webinar being sponsored by Medcrypt, a company that has been an active participant in the OWASP SBOM Forum, which I lead.  Medcrypt is a proactive cybersecurity solutions provider for medical device manufacturers (MDMs). They offer a comprehensive suite of services designed to support MDMs in navigating the complex landscape of cybersecurity within the healthcare industry.

The signup page for the webinar is here. There is a good description of the webinar on the page. My role in the webinar will be mainly to discuss how end user organizations (mainly hospitals, of course) can learn about vulnerabilities in the software included in medical devices they operate, and some of the challenges they face in doing that.

I want to point out that my presentation will apply to all devices, not just medical devices. Of course, medical devices face a unique regulatory environment and therefore receive a lot of attention when it comes to cybersecurity. But the overall challenges with learning about device vulnerabilities apply across industries.

I hope you’ll be able to attend!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

 


Thursday, June 13, 2024

Cloud CIP, Part 4: What can we learn from the CIP-013 experience?

Lew Folkerth of the RF Region and Chris Holmquest of SERC wrote a great article recently titled “The emerging risk of not using cloud services” (my emphasis); I linked to it in this post. It eloquently made the point that, far from the cloud being too risky for NERC entities to use, the risk of not using the cloud is currently greater – and growing all the time. This is because more and more software and service providers (especially security service providers) are announcing that in the coming years they will offer only a cloud-based version of their product, or at least will no longer implement new features in their on-premises version. Both reliability and security will suffer as a result.

One sentence in the article stood out for me: “New Reliability Standards will be required, and those standards will need to be risk-based.” The article goes on to say, “There are too many variables in cloud environments to be able to write prescriptive standards for these cases.”

Of course, I completely agree with this statement, but how will this work in practice? Fortunately, these won't be the first risk-based standards. The honor of being the first entirely risk-based standard goes to CIP-013, which came into effect in 2020. If you were reading this blog regularly at that time, you may remember I was a huge fan of the fact that CIP-013 is a risk-based standard; I was sure it would be very successful and make everyone in NERC (both NERC entities and NERC auditors) love the idea of risk-based standards. But it didn't quite work out like I'd hoped. Here's my story:

I had been writing about CIP-013 from the beginning, when FERC issued Order 829 in July 2016. I participated in, and wrote about, the drafting team's efforts to develop the standard. And I celebrated in 2018 when FERC approved CIP-013. Plus, I was the first person (that I know of) who advocated for pushing back the compliance date for CIP-013 when it became clear in March 2020 that Covid-19 was going to require the electric power industry – as well as almost every other industry – to drastically alter how it conducted business.

Not coincidentally, I worked with some NERC entities to understand what CIP-013 compliance means. In the process of doing that, I developed what seemed to be a clear interpretation of what CIP-013 required (I wrote about this interpretation in a number of posts in 2017-2020). Here is my summary of the requirements of CIP-013 (it doesn’t matter whether you’re looking at version 1 or 2 of the standard; the only important difference between the two is greater applicability in v2):

1.      CIP-013-2 R1 requires the NERC entity to “…develop one or more documented supply chain cyber security risk management plan(s) for high and medium impact BES Cyber Systems…” R1 goes on to list, in R1.1 and 1.2, items that must be included in the plan.

2.      R1.1 requires that the plan “…identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services resulting from: (i) procuring and installing vendor equipment and software; and (ii) transitions from one vendor(s) to another vendor(s).”

3.      R1.2 lists six risks (or more specifically, mitigations for risks) that must be included in the plan. These six risks had been cited by FERC at different places within Order 829, as items that needed to be included in the plan. The Standards Drafting Team that developed CIP-013 gathered these all into one Requirement Part: R1.2.

4.      R2 requires the NERC entity to implement the plan they’ve developed in R1.

5.      R3 requires the NERC entity to review and update the plan every 15 months.

That’s it. The entire Standard (minus the Measures) fits on a single page. When it was developed, I marveled at the sheer simplicity of CIP-013.

Of course, the heart of CIP-013 is R1, since that describes the plan that’s required. I interpreted R1 (and still do) to mean that the NERC entity must do the following:

1. Identify supply chain cybersecurity risks to the Bulk Electric System resulting from the BES Cyber Systems the entity may procure. Where should the NERC entity look to find these risks? Of course, there are lots of lists, the NATF Criteria being one list that is especially relevant to the BES. But the entity doesn’t have to confine itself to a pre-conceived list. One way to identify risks is to read the news.

Here’s an example of that: Right after CIP-013 came into effect in 2020, the SolarWinds attack was discovered. By that time, the attackers had been present in SolarWinds’ development network for 15 months. During that time, they carefully prepared and tested their malware before infecting seven releases of the Orion platform. They even started with a proof of concept for their malware design, using a benign piece of code to see whether it could infect the platform; it succeeded. I fully expect the attackers to publish a case study of this engineering triumph someday.

Surely, the possibility that a software supplier has an insecure development network is a big risk. One mitigation for that risk would be requiring your software suppliers to fill out the Attestation Form that CISA recently released for compliance with Executive Order 14028.

2. Rate each risk as high or low, based on its likelihood and impact. In this case, estimating impact is easy: BES Cyber Systems are classified as such precisely because the impact of their loss, compromise, etc. is high. This means the impact of any supply chain attack on BCS will always be high. Therefore, the only real variable is likelihood. To rate each risk, the entity must simply ask, “Is the likelihood high or low that this risk will be realized?” If likelihood is high, risk is high; if it’s low, risk is low as well.

Fortunately, if you just divide likelihood into high and low levels, estimating it is easy. For example, someone may point out that, if even a small meteorite crashed into your relay supplier’s factory, the factory might be incinerated; that would of course mean your organization would need to find another supplier of relays. That’s a huge impact, but what’s the likelihood? Probably less than the likelihood of being struck by lightning on a sunny day. This is a low risk.

Once you’ve rated your risks as high or low, you then need to focus on the high ones; obviously, those are the only ones that need to be mitigated. But are you obligated to mitigate every high risk? No. An important principle of risk management is that no organization has unlimited resources available for risk mitigation. Your organization needs to decide which of the high risks you can afford to mitigate, and just focus on those. This means you should assess your vendors just based on the risks you’re trying to mitigate. In your questionnaires, you shouldn’t ask a vendor about a risk if you don’t care what their answer is. You’re just wasting your and the vendor’s time.
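To make the triage logic above concrete, here is a minimal sketch in Python. The risk names, the "high"/"low" labels, and the `budget` knob are my own illustrations, not anything prescribed by CIP-013:

```python
# Illustrative sketch: because the impact of losing a BES Cyber System is
# high by definition, rating a risk reduces to estimating its likelihood.
# The plan then covers only the high-likelihood risks the entity can
# afford to mitigate.

def risks_to_mitigate(risks: dict[str, str], budget: int) -> list[str]:
    """risks maps a risk description to its estimated likelihood ("high"/"low")."""
    high_risks = [name for name, likelihood in risks.items() if likelihood == "high"]
    # No organization has unlimited resources: mitigate only what fits the budget.
    return high_risks[:budget]

plan = risks_to_mitigate(
    {
        "supplier has an insecure development network": "high",
        "meteorite strike on the relay factory": "low",
        "vendor remote access is poorly secured": "high",
    },
    budget=2,
)
print(plan)
```

The point of the sketch is simply that everything below the high-likelihood, affordable-to-mitigate cut never makes it into the plan, and therefore never needs to appear in a vendor questionnaire either.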

3. Once you have developed a list of risks you wish to mitigate, you need to add to that list the six risks in R1.2 (if you haven’t already identified them as important risks independently). You need to do this, not because these are the most significant supply chain cybersecurity risks to the BES (although they are all important risks), and certainly not because they’re the only supply chain risks to the BES. The SDT included those risks in R1.2 because FERC had mandated them at various disconnected places in Order 829. In other words, the SDT was saying, “We want you to identify risks you think are important and mitigate them. But, since FERC wants these six risks to be in your plan, you need to make sure you include them as well.”

Given my interpretation of CIP-013, how did I think it would be audited? It seemed quite logical to me:

a)      R1.1 would be audited based on how good a job the entity did of “identifying and assessing” risks. If they had made an honest effort to determine at least some of the most important supply chain cyber risks to the BES, that would be fine.

b)     R1.2 would be audited based on whether the entity included the six risks in R1.2.1 – R1.2.6 in its plan.

c)      R2 would be audited based on how well the entity implemented its plan – i.e., whether it took steps to mitigate all the risks it had said it would mitigate in the plan.

d)     R3 would be audited based on whether the entity had reviewed its plan every 15 months, and whether they had honestly taken steps to fix any problems or deficiencies they found in the plan.

During the runup to CIP-013 implementation in 2017-2020, I wrote a number of posts on what CIP-013 means, in which I elaborated on the above logic. Frankly, I thought that logic was so compelling that it would be widely adopted by NERC entities. After all, why would the CIP-013 drafting team tell NERC entities to develop a plan to “identify and assess” risks to the BES if they didn’t mean it?

But I was wrong. From what I’ve heard, there are few NERC entities that have interpreted CIP-013 to be about anything more than R1.2.1 – R1.2.6. And now, I wonder why I ever thought otherwise. After all, if NERC entities have learned anything from their 15 or so years of experience with NERC CIP compliance, it’s that they need to keep their “compliance footprint” as small as possible. That is, they need to keep their nose close to the grindstone and never stray beyond the strictest possible interpretation of the standards. To do anything more than what’s strictly required doesn’t win you any Brownie points; in fact, it might possibly leave you with a completely avoidable violation – an “own goal”, if you will.

However, I’m not blaming NERC entities for this situation. I’m also not blaming NERC, and certainly not the auditors. I’m blaming these two facts:

First, the standard, which I had admired for its pristine simplicity, was in retrospect too simple. Instead of simply telling NERC entities to “identify and assess” risks, R1 should have given them suggestions on how to do that within R1 itself. For example, R1.1 might have included a set of ten or so “areas of risk” that must be identified in the plan, e.g. “vendor remote access”, “software development process”, “secure shipment”, etc. The entity would be required to scrutinize each of these areas for risks that they should add to their plan. In some cases, they would be justified in ignoring one of those areas entirely; for example, if they don’t allow vendor remote access at all, they obviously don’t need to worry about securing their vendor remote access system.

Doing this would also have given the auditors something to hang their hat on when they audited the entity for CIP-013 compliance, other than simply determining whether the entity had done a good job of developing its plan. Instead, they could have verified that the entity examined each of the ten areas and made a conscious effort to determine whether there were important risks for them in each area. Since the SDT didn’t provide such a list, NERC entities focused entirely on the six items that were clearly required by CIP-013 R1: the six risks in R1.2.

Thus, the first lesson to be learned from the CIP-013 experience is that, in a world accustomed to prescriptive requirements like CIP-010 R1 (configuration management) and CIP-007 R2 (patch management), handing a blank slate to both the entity and the auditor and saying “You can figure this out for yourselves” – which is unfortunately what the SDT did[i] – is asking for trouble.

Second and more importantly, NERC auditors in general (i.e. for all the standards, not just the CIP standards) aren’t trained to judge how well an entity has assessed and mitigated risks; they’re trained to determine if they did or didn’t do X. While I’m sure some of them, especially CIP auditors, understand risk very well (if for no other reason than that almost every other mandatory cybersecurity standard is based on risk), for many of them it’s a foreign concept. NERC needs to develop methods for auditing risk-based requirements, not just prescriptive ones, and then train the auditors on those methods.

Of course, fixing these two problems won’t be easy. But if NERC CIP is going to make a successful transition to the cloud, these two problems will need to be addressed.

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for the next few years, as well as beyond that? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Having participated in many of that SDT’s discussions, I know why they made this mistake: FERC had given NERC a strict deadline to develop and approve the new supply chain security standard. The SDT couldn’t afford to add any provisions to CIP-013 that might stir up controversy and result in extra ballots being necessary (although there were controversies anyway). In other words, FERC’s deadline backfired spectacularly. This points to a big problem with NERC’s standards development process, at least when it comes to cybersecurity: you can have a comprehensive standard that takes a long time to approve, or you can have a minimal standard that gets approved relatively quickly. But you can’t have both.

Tuesday, June 4, 2024

The road to cloud CIP, part 3: the EACMS problem


There are several key components of the problem that NERC entities with high and/or medium impact Bulk Electric System (BES) environments face in complying with the NERC CIP Reliability Standards when using the cloud. By far the most important of those (and the most important for the upcoming NERC “Risk Management for Third-Party Cloud Services” Standards Drafting Team to address) is what I call the EACMS problem. It is so important because a) it inhibits many NERC entities from using existing cloud-based security monitoring services, and b) more and more on-premises security services are moving exclusively to the cloud, thus becoming off-limits to many NERC entities.

Why is this the case? This is the problem in a nutshell:

1.      EACMS stands for Electronic Access Control or Monitoring System. That is, any system that monitors or controls access to a high or medium impact Electronic Security Perimeter (ESP) or high or medium impact BES Cyber Systems (BCS) is ipso facto an EACMS. I italicized “or”, because that’s very significant. The previous definition included “and”, meaning a system had to both monitor and control access, to be an EACMS. That definition applied to a much more limited set of systems.

2.      While there are lots of security monitoring systems that don’t “control” access to an ESP or BCS, there probably aren’t many that don’t monitor access. After all, knowing who’s knocking on your door to the internet, and especially knowing who made it through, is probably the most important consideration for security monitoring.

3.      If a cloud-based system monitors access to a medium or high impact ESP, it’s an EACMS. Therefore, it needs to be compliant with all the CIP Requirements and Requirement Parts that apply to medium or high impact EACMS (of course, the terms ESP and EACMS are only defined within medium or high impact environments).

4.      While some of the CIP Requirements that apply to EACMS (e.g. the requirements of CIP-008 and CIP-009) aren’t impossible to comply with in a cloud environment, there are others, especially the requirements in CIP-006, for which strict compliance is almost impossible in the cloud. For example, since the security service’s EACMS must be within a Physical Security Perimeter (PSP) controlled by the NERC entity, some auditors might interpret this to require that each NERC entity (with high or medium impact BCS) that utilizes the service must authorize entry for any cloud service provider (CSP) employee who has access to the systems on which the service is implemented. Unless those systems are somehow segregated, with access controlled by the entity, that will probably mean every employee allowed to enter a data center containing one or more systems that implement the security service will first need to be approved by the NERC entity. Of course, no CSP will ever allow this to happen.

As I mentioned in the previous post in this series, one way that a cloud-based security service provider could get around this problem would be to create a separate instance of their service just for NERC entities. They would then lock the servers on which that instance is implemented in a separate room, with access controlled by the NERC entity. Of course, this solution technically breaks the cloud model and raises the CSP’s costs considerably. However, to retain their NERC CIP customers, the CSP may decide to “eat” this cost temporarily, pending full resolution (through the standards drafting process) of the problems with CIP compliance in the cloud.

So, how can a security service provider who wants to move to an entirely cloud-based model remain “CIP compliant” after the move – i.e., how can they avoid making their CIP customers non-compliant with most of the CIP requirements? Today, I know of no way to do that other than the locked-room solution just described, or changing the nature of the service so that it doesn’t monitor access to the ESP (or so that ESP access is still monitored locally). That is why I say this is the biggest problem facing NERC entities with medium or high impact BES environments that want to utilize cloud services.

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for the next few years, as well as beyond that? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Saturday, June 1, 2024

We’re making progress on vulnerability database issues

Vulnerability database expert Brian Martin and I have been having a good back-and-forth discussion on LinkedIn about vulnerability database issues in general, including discussion of my proposal for a Global Vulnerability Database.

Today, Brian put up a new post that moves the discussion forward. His post includes 6 or 7 passages that point to what I think are common misconceptions that haven’t been well articulated previously. Because Brian has articulated them so clearly, I want to comment on each one of them. I’ll quote each of Brian's passages in red and then comment in black italics: 

While a Persistent Uniform Resource Locators (PURL) is one solution, it isn’t the only one used by vulnerability databases. So not only do you need to have an intelligent mapping from PURL to PURL, you also need it from CPE to PURL, and possibly other identifiers. It’s easy to have multiple valid PURLs all for the same piece of software.

BTW, purl in this context stands for “package URL”. Here is a good description of purl, posted by Philippe Ombredanne, the creator of purl.

Brian, when you say “mapping from purl to purl”, I think you’re talking about my earlier comment about comparing a CVE-purl connection in OSS Index with the same connection in CVE.org (once the CNAs start creating those). That’s a very special case, which I’d prefer to discuss with you offline.

However, “mapping CPE to purl” is literally impossible if there is more than one package manager for a particular OSS project. This is because most CPEs for open source software don’t refer to the package manager (except sometimes as part of the product name), meaning the user has no way of knowing which PM the vulnerability is found in.

Regarding the last sentence, “It’s easy to have multiple valid PURLs all for the same piece of software”, the problem is there’s no way to be certain that the code for a product named “log4core” in one package manager is bit-for-bit identical to the code for the “same” product in another package manager. Given that, the fact that CVE-12345 is found in one PM doesn’t allow you to conclude that it will be found in another PM.

In one way, this is a limitation of purl: you can’t state that, for example, CVE-12345 applies to every package manager that contains a product called “log4core”. You can only make that statement if you have tested log4core in all of those package managers. But purl keeps the CNA honest: they will only list a purl in a CVE report if they have tested the product in that package manager, and a user should never assume a CVE found in one PM will apply in another. In other words, CPE gives the user a false sense of comprehensiveness.

Somewhere there are / were CPE specifications, likely before NVD took control of it. Early in the VulnDB days, we used them so we could generate our own CPE for products that didn’t appear in NVD. The fact that a seasoned vulnerability practitioner isn’t sure standards exist speaks volumes to how poorly NVD has managed CPE.

As unaccustomed as I am to defending NVD, I need to do so now. There’s simply no way there can be a unique CPE for any product – i.e., one that any user will always be able to create accurately. Pages 7-9 of the OWASP SBOM Forum’s 2022 document on the naming problem differentiate extrinsic identifiers like CPE from intrinsic identifiers like purl.

Briefly, an extrinsic identifier requires the user to do a lookup to at least one external database, before they can be sure they have the correct identifier. In the case of CPE, that database is the CPE Dictionary. On the other hand, an intrinsic identifier like purl just requires the user to enter information they already know with certainty: the package manager from which they downloaded the software, the product name in that package manager, and the version string in that package manager.
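As a concrete illustration, here is a minimal sketch of assembling a purl from those three known facts. This is a simplified rendering of the purl spec’s `pkg:type/namespace/name@version` form, not a complete implementation of the spec’s normalization rules:

```python
from urllib.parse import quote

# Sketch: a purl is derived entirely from information the user already has --
# the package manager ("type"), the product name there, and the version string.
# No lookup against any external dictionary is needed.
def make_purl(pkg_type: str, name: str, version: str, namespace: str = "") -> str:
    segments = ([namespace] if namespace else []) + [name]
    # Percent-encode each segment so special characters remain valid
    path = "/".join(quote(s, safe="") for s in segments)
    return f"pkg:{pkg_type}/{path}@{quote(version, safe='')}"

print(make_purl("pypi", "django", "1.11.1"))
# pkg:pypi/django@1.11.1
print(make_purl("maven", "log4j-core", "2.14.1", namespace="org.apache.logging.log4j"))
# pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1
```

Note that every input to the function is something the user can verify directly from where they downloaded the package; that is the whole contrast with CPE.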

The reason that CPE is ultimately unworkable is that creating a CPE name usually requires making arbitrary choices (e.g., “version 1.2” vs. “v1.2”), rather than relying only on information that can always be exactly verified by a user. Nobody can know for sure what choice was made by the person who created the CPE without searching the CPE Dictionary, perhaps multiple times using fuzzy matching or something like it.

(quoting Tom) “As long as you know the package manager (or source repository) that you downloaded an open source component from, as well as the name and version string in that package manager, you can create a purl that will always let you locate the exact component in a vulnerability database. This is why purl has literally won the battle to be the number one software identifier in vulnerability databases worldwide, and literally the only alternative to CPE.”

Unless… you end up having half a dozen PURLs for the same package, because it is available on a vendor’s page, GitHub, GitLab, Gitee, and every package manager out there.

And this is exactly the point about using purl in a vulnerability database: It only tells you what the CNA that created the CVE report with purl knows: the package manager, product name and version string of the software in which they found the vulnerability. The user can’t draw any conclusion about a product with the same name and version string in any other PM, unless the CNA that produced the report added purls for them as well (meaning they tested the same product and version in each PM). 

Who will maintain this epic list of PURLs? As of this blog, there are only 379 CNAs with tens of thousands of software companies out there. Not to mention the over one hundred million repositories on GitHub alone. While a PURL may be an open standard where CPE is not, it forces the community to set a PURL for every instance of the location of that software. That sounds like the big database you don’t think is viable?

Again, that’s the point of purl: no list is required. Any user can create the correct purl just from the three pieces of information they already know. As Steve Springett often says, every open source product in a package manager already has a purl – there’s no need to create it. 

(quoting Tom) “However, there is one big fly in the purl ointment: It currently doesn’t support proprietary (or “closed source”) software.”

And the other shoe drops. =) So, this is not a critique by any means, just highlighting the problems the community faces. The problems we faced 10 years have just compounded and here we are. Not that there were realistic solutions to all of these problems back then, and even if there were, we certainly didn’t address them then.

That’s correct. Currently, purl only covers open source software, although Steve Springett (who worked with Philippe to create purl, as mentioned in Philippe’s post that I linked above) points out that any online software “store” (Google Play, the Apple Store, etc.) could easily be made into a purl type, since the store controls the namespace of the proprietary products that are for sale in the store (just like a package manager controls the namespace of the packages in the PM).

In other words, what is needed is a controlled namespace, so one product will always have one name. Steve also suggested that SWID tags could be a more general way to identify proprietary software. He wrote the purl PR for a new identifier called SWID – which was adopted in 2022. See below. 

(quoting Tom) “I think this is a solvable problem, but it will depend – as a lot of worthwhile practices do – on a lot of people taking a little time every day to solve a problem for everybody. In this case, software suppliers will need to create a SWID tag for every product and version that they produce or that they still support. They might put all of these in a file called SWID.txt at a well-known location on their web site. An API in a user tool, when prompted with the name and version number of the product (which the user presumably has), would go to the site and download the SWID tag – then create the purl based on the contents (there are only about four fields needed for the purl, not the 80 or so in the original SWID spec).”

Unfortunately, I think at this point, this is a pipe dream. I am quite literally discovering new, well-known “standards” only by seeing them as requests ending in a 404 response in my web logs. So any such solution based on well-known I think isn’t viable now, and likely won’t be moving forward.

Please read what the OWASP SBOM Forum proposed regarding SWID on pages 11 and 12 of our 2022 paper. The point is that there needs to be some unique user-discoverable source of information on the product. Otherwise, the only alternative is to create (and maintain) a hugely expensive database of all proprietary software, along with the different product names and vendor names it was associated with through its lifetime – and that requires a huge number of very subjective judgments.

For example, if Product A from Vendor X is sold to Vendor Y who renames it Product B, is it the same product or not? If B is very different from A, you would just say it’s different. But if B is literally just A with a different name, you’d say it’s the same. Where do you draw the line between these two cases? There’s simply no way to do so.

There are certainly other ways that information on proprietary software could be made user-discoverable, so that no big secondary database (probably much larger than the vulnerability database itself) is required. One way is Steve Springett’s Common Lifecycle Enumeration project. That will take much longer to put in place than our SWID proposal, but IMO is ultimately the correct thing to do. If you have other ideas, we’d love to hear them. 

(Tom here) Of course, all of the above discussions are examples of the Naming Problem. There’s no question that this problem will be with us for a long time and will never be “solved” in any final way. However, the good news about the Global Vulnerability Database idea is that the naming problem doesn’t need to be solved first, precisely because the GVD won’t require “harmonization” of software names. 

The software will be named what it’s named in the vulnerability databases to which queries are routed; it will be up to the individual databases to continue their (presumably ongoing) efforts to improve their naming. If there's reason to believe there are serious naming problems in one vuln DB, the GVD might suspend routing queries to it. The GVD will be no more accurate than the individual DBs, but it won’t be less accurate, either. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. Also, if you would like to learn more about or join the OWASP SBOM Forum, please email me.

My book "Introduction to SBOM and VEX" is now available in paperback and Kindle versions! For background on the book and the link to order it, see this post.