Friday, September 27, 2024

Can you implement a low impact Control Center in the cloud today and maintain NERC CIP compliance?

In this recent post, I explained why I think low impact BES[i] Cyber Systems (specifically, those LIBCS that are found in low impact Control Centers) can be deployed in the cloud without any CIP compliance implications today. My reason for saying this is I don’t think low impact BCS are even defined in the cloud, because of wording in CIP-002-5.1a. In fact, I think there would not be a CIP compliance obligation if a NERC entity deployed their entire low impact Control Center in the cloud.

However, I got pushback from a former NERC CIP auditor on that post. He thinks my readers who put LIBCS in the cloud, and especially a low impact Control Center in the cloud, will be at compliance risk if they don’t comply with the low impact CIP requirements. He based his statement on the NERC definition of Control Center.

While I didn’t agree with my friend’s reasoning in this matter, I realized I may be making a false analogy between medium/high impact requirements and low impact requirements. I don’t see any way that medium or high impact BCS (let alone Control Centers) can be implemented in the cloud today without a) putting the NERC entity in violation of many CIP requirements, or b) requiring the CSP to break the cloud model by enclosing all of the systems that hold the entity’s BCS within a Physical Security Perimeter and an Electronic Security Perimeter.

My mistake was making the same assumption about the CIP requirements that apply to low impact BCS (LIBCS). It took a long exchange of emails, but my friend – Kevin Perry, retired Chief CIP Auditor of the NERC SPP Regional Entity – finally convinced me that it is in fact possible for a NERC entity to deploy a low impact Control Center (LICC) in the cloud and comply with the same set of CIP requirements that an on-premises LICC would need to comply with. More specifically, it will be possible for the CSP to provide the NERC entity the evidence it needs to prove compliance with each low impact requirement. Here’s why I now think this is possible:

1.      All the CIP requirements that apply to LIBCS are found in CIP-003-8. Three of them are simple to understand; the CSP will have no problem providing compliance evidence for them:

a.      Requirement R1 mandates that the entity develop security policies.

b.      Requirement R3 requires designation of a CIP Senior Manager.

c.      Requirement R4 prescribes development of a policy for the CIP Senior Manager to delegate their authority in certain circumstances.

2.      However, one of the requirements isn’t as simple to understand. It’s Requirement R2, which states that a Responsible Entity with LIBCS needs to develop and document a plan “that include(s) the sections in Attachment 1.”

3.      Attachment 1 appears on pages 23-25 of CIP-003-8. It consists of five Sections. Despite the terminology that’s used, I believe it’s better to think of each Section as a Requirement Part of CIP-003-8 Requirement R2.

4.      There are four Sections that prescribe policies or practices that the Responsible Entity needs to have in place: “Cyber Security Awareness” (Section 1), “Physical Security Controls” (Section 2), “Cyber Security Incident Response” (Section 4), and “Transient Cyber Asset[ii] and Removable Media[iii] Malicious Code Risk Mitigation” (Section 5).

5.      All four of the above are functionally equivalent to policies and practices that the CSP is certain to have in place now, perhaps in the wording of standards the CSP has been certified on, such as ISO 27001[iv]. It will be up to the NERC entity to demonstrate that the CSP’s policies and practices address each of the above requirements, based on evidence provided by the CSP.[v]

CIP-003-8 Requirement R2 Attachment 1 Section 3 is different from all the above requirements in that it’s a technical requirement. A few of the technical requirements that apply to medium and high impact systems, including CIP-007 R2, CIP-010 R1 and CIP-005 R1, are literally impossible for a CSP to comply with, since they require tracking the actual devices (physical and virtual) on which BES Cyber Systems reside – and then providing evidence showing that the literal wording of the requirement was applied to every device on which the BCS resided over the 3-year audit period. Since systems in the cloud move from device to device and from data center to data center all the time, this is impossible.

However, CIP-003-8 Requirement R2 Attachment 1 Section 3 doesn’t apply to individual devices. This is because requiring an inventory of low impact BCS is strictly prohibited by wording in CIP-002 and CIP-003. Section 3 requires the Responsible Entity to implement, inter alia, electronic access controls that “Permit only necessary inbound and outbound electronic access… for any communications that are... between a low impact BES Cyber System(s) and a Cyber Asset(s) outside the asset containing low impact BES Cyber System(s)…”

Although I’m not an auditor, it seems to me that the CSP will need to provide evidence like the following:

·        A list of all routable protocols used to communicate between any low impact BCS in the Control Center (without identifying which BCS use which protocols) and any system outside of the CSP’s cloud (without identifying those systems, where they’re located or who they belong to).[vi]

·        Documentation showing how the Responsible Entity manages the virtual firewalls to ensure that only necessary inbound and outbound electronic access is permitted (the NERC entity will probably be responsible for this evidence, assuming they are allowed to manage the firewalls).

This evidence seems to me to be very much in the realm of possibility. Thus, it seems it’s possible to implement a low impact Control Center in the cloud and be fully CIP compliant at the same time.

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] BES stands for Bulk Electric System.

[ii] Transient Cyber Assets are usually laptops of third parties that are used for less than thirty days on the NERC entity’s network.

[iii] These include thumb drives, removable optical drives, etc.

[iv] If you’re wondering why I don’t also mention FedRAMP, I need to point out that the CSP isn’t “certified” as “FedRAMP-compliant”. FedRAMP is a means of approving a federal agency’s use of a third-party service, such as a cloud service. It isn’t meant to provide an overall certification of the service’s security for the private sector, as are ISO 27001 and other certifications.

[v] Since the CSP won’t be able to provide evidence of compliance with the exact wording of any of these requirements, it’s likely that some other NERC ERO document, like a CMEP Practice Guide, will need to be in place to “legalize” all of this. The Regions (including the auditors) need to draft and approve CMEP Practice Guides; that might take 6-9 months. But that’s still a lot better than the approximately six years that I estimate will pass before changes to the CIP standards that “legalize” full cloud use are in effect (the Standards Drafting Team started that process in August).

[vi] I admit it’s stretching a point to say that “outside of the CSP’s cloud” is the equivalent of “outside the asset containing low impact BES Cyber System(s)”. The latter is really longhand for “low impact asset”. That means one of the six BES asset types listed in CIP-002-5.1a R1, which wasn’t already classified as high or medium impact in CIP-002-5.1a R1 Attachment 1. Of course, the cloud, or even a cloud data center, isn’t one of those six asset types, although I think it could well be considered to be such.

My guess is there will need to be a CMEP Practice Guide to clarify this point. Again, waiting the 6-9 months required to draft the Practice Guide is a lot better than waiting six years for a full rewrite of the CIP standards (although, since I’ve already advocated for two Practice Guides in this blog post - and others, including Guides regarding BCSI and the meaning of “access monitoring” in the EACMS definition, will be required as well - there probably needs to be some organized effort to draft and approve the Guides, with multiple approvals in process at the same time).

Tuesday, September 24, 2024

One of the best reasons to move to purl: version ranges

The OWASP SBOM Forum wants to see purl become the primary software identifier for databases based on CVE. This includes the National Vulnerability Database (NVD), but also any other database that uses CVE as the vulnerability identifier. One big driver for this is the fact that the current serious problems with the NVD have essentially made completely automated vulnerability management impossible, unless the user’s vulnerability management tool looks up vulnerabilities in an open source vulnerability database like OSV or OSS Index.

Why are those two databases doing so well, while the NVD is almost on life support? Because OSV and OSS Index (like almost every other vulnerability database that focuses on open source software) are based on the purl software identifier, while the NVD, and databases based on it, utilize only the CPE identifier (OSV has its own software identifier, but supports purl as an optional one). NVD staff members used to add a CPE identifier to every CVE report; however, starting on February 12, the NVD drastically reduced the number of CVE reports that it “enriches” with CPE names. There is currently a backlog of more than 18,000 “unenriched” CVE reports, meaning that a user searching for recent vulnerabilities in a software product they use is unlikely to learn about most CVEs that have been identified since February.
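To make the contrast between the two identifier types concrete, here is a minimal sketch in Python. The vendor, product and version strings are hypothetical examples (not taken from any real CVE report), and the parser handles only the simplest purl form (no namespace, qualifiers, or subpath):

```python
# Illustrative comparison of the two software identifier formats.
# "examplevendor"/"exampleproduct" are hypothetical names, not real products.

# A CPE 2.3 name is a rigid colon-delimited string whose fields (part,
# vendor, product, version, etc.) must be assigned by someone - which is
# exactly the "enrichment" step the NVD has fallen behind on.
cpe = "cpe:2.3:a:examplevendor:exampleproduct:4.2.1:*:*:*:*:*:*:*"

# A purl identifies the product by the package ecosystem ("type") it is
# distributed in, so it can be derived mechanically from the package itself.
purl = "pkg:pypi/exampleproduct@4.2.1"

def parse_purl(p: str) -> dict:
    """Minimal purl parser covering only pkg:type/name@version purls
    (no namespace, qualifiers, or subpath)."""
    assert p.startswith("pkg:"), "not a purl"
    type_, _, rest = p[4:].partition("/")
    name, _, version = rest.partition("@")
    return {"type": type_, "name": name, "version": version}

print(parse_purl(purl))
# {'type': 'pypi', 'name': 'exampleproduct', 'version': '4.2.1'}
```

The point of the sketch is not the parsing itself, but that a purl is self-describing: no central authority has to "assign" one before an automated tool can match on it.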

Purl (which stands for “package URL”) offers a number of advantages over CPE. Some of them are described in this post, as well as in the OWASP SBOM Forum’s 2022 white paper titled “A proposal to operationalize vulnerability management”. However, there’s one capability that purl offers in principle and CPE does not: the ability to specify a version range. The specification for this capability was developed by the purl team, but it has not yet been integrated into purl itself, and it is not in wide use today.

Why are version ranges so important in vulnerability management? It is because a software vulnerability seldom appears in one version of a product, then disappears in the next version. Instead, it usually remains in the product for multiple versions (or even multiple years), until it is finally patched or removed for other reasons. This means that vulnerability management is likely to work much better if it can be based on version ranges, not individual versions.

Today, CVE reports often state that a vulnerability applies to a range of versions of a software product, not just to one version or several individual versions. However, that assertion is made strictly in text; there is no way to specify a version range in a machine-readable format, since CPE (the only software identifier currently available for CVE reports) does not support version ranges. This means that, if a user performs an automated search of the NVD, they will not learn that a CVE affects a range of versions unless they also read the text of the CVE report.

Ideally, a software identifier should be able to specify a version range in a machine-readable format that states, for example, “Versions 3.4 through 5.6 of product XYZ are affected by CVE-2024-12345. Other versions are not affected.” A user tool that parses that identifier will identify every version that falls in that range and mark each one as affected by the vulnerability. That is, the user tool will understand the above statement to mean, “Versions 3.4, 3.5, 3.6…5.5, and 5.6 are affected by CVE-2024-12345.”

Currently, if a user notices a version range in a CVE report that applies to a product they utilize, and they want to make sure every version within the range is noted to be affected in their vulnerability tracking system, they will have to annotate each version manually. Since a multiyear version range could easily contain hundreds of versions (including minor versions, patch versions, separate builds, etc.), this could turn into a very time-consuming process.

Recognizing this problem, the purl community developed a “version range specifier” called “vers” a few years ago. It provides a simple specification for version ranges. For example, a range that includes version 1.2.3, as well as versions greater than or equal to 2.0.0 and less than 5.0.0, would be specified with the constraint string “1.2.3|>=2.0.0|<5.0.0” (a complete vers expression also carries a versioning-scheme prefix, e.g. “vers:npm/1.2.3|>=2.0.0|<5.0.0”).

The simplicity of vers comes at a cost: It only applies to versioning schemes in which the elements of a range can be compared using simple arithmetic. For example, if I have version 3.2.5 of the product to which the above range applies, I can easily determine that my version falls within the range, whereas version 5.4.6 falls outside of it. On the other hand, a versioning scheme that uses letters is not supported by vers, since there is no way to be certain whether “version 4.6B” falls within the above range or not. The vers specification lists the versioning schemes that are supported. However, a scheme that isn’t on the list, but in which versions can be compared using only addition, subtraction, and greater-than/less-than operators, might well work with vers too.
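The idea can be sketched in a few lines of Python. To be clear, this is not an implementation of the official vers algorithm (which pairs sorted constraints into intervals); it is a deliberately simplified checker that treats a version as in range if it equals a pinned version or satisfies every inequality constraint, which is enough for the example range above. It also handles only purely numeric dotted versions, which is exactly the limitation of vers just discussed (a version like “4.6B” would make it fail):

```python
# Simplified sketch of vers-style range checking - NOT the official
# vers interval algorithm, just an illustration of the concept.

def vkey(version: str) -> tuple:
    """Turn '3.2.5' into (3, 2, 5) so versions compare arithmetically.
    Raises ValueError for non-numeric parts like '4.6B' - the kind of
    versioning scheme vers cannot support."""
    return tuple(int(part) for part in version.split("."))

def in_range(version: str, range_spec: str) -> bool:
    """True if `version` equals a pinned version in the range, or
    satisfies every inequality constraint (e.g. '>=2.0.0' and '<5.0.0')."""
    v = vkey(version)
    pinned, bounds = [], []
    for c in range_spec.split("|"):
        if c.startswith(">="):
            bounds.append(v >= vkey(c[2:]))
        elif c.startswith("<="):
            bounds.append(v <= vkey(c[2:]))
        elif c.startswith(">"):
            bounds.append(v > vkey(c[1:]))
        elif c.startswith("<"):
            bounds.append(v < vkey(c[1:]))
        else:
            pinned.append(v == vkey(c))
    return any(pinned) or (bool(bounds) and all(bounds))

spec = "1.2.3|>=2.0.0|<5.0.0"
print(in_range("3.2.5", spec))   # True: inside [2.0.0, 5.0.0)
print(in_range("5.4.6", spec))   # False: outside the range
print(in_range("1.2.3", spec))   # True: the pinned version
```

A tool with even this much logic never needs to enumerate “versions 3.4, 3.5, 3.6…5.5, 5.6”; it simply tests whichever version the user actually has against the range.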

One of the main reasons why CVE.org should move to purl as the primary software identifier (from the current CPE) in the future is that when vers is integrated into purl (which I admit won’t be an easy lift), a purl in the database could identify an entire version range, not just a single version of the product. When this happens, it will constitute a huge improvement in automated vulnerability management.


I lead the OWASP SBOM Forum and its Vulnerability Database/purl Working Group. These groups work to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit, which are passed through to the SBOM Forum. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.

Sunday, September 22, 2024

FERC will approve NERC CIP-015-1

At their monthly “Sunshine Meeting” on September 19, the Federal Energy Regulatory Commission (FERC) announced a Notice of Proposed Rulemaking (NOPR) that says they intend to approve CIP-015-1, the new NERC Reliability Standard for internal network security monitoring (INSM); they also announced there will be a two-month comment period before they are ready to issue their order approving the standard.

E&E News described this, and two other important FERC actions taken during the Sunshine Meeting, in this article (which is behind a paywall). The article quotes me regarding the NOPR:

The new standard “is needed because there's nothing in the [FERC] requirements now that deals with monitoring an internal network to catch intruders. It's all about preventing bad guys from penetrating the network in the first place,” said grid security consultant Tom Alrich.

“However, it's become painfully clear that nobody can count on keeping the bad guys out forever. Once in, they need to be detected as soon as possible, so they can be removed or at least prevented from causing damage,” Alrich added.

FERC’s NOPR wasn’t a surprise. In Order 887 of January 19, 2023, FERC ordered that NERC develop “requirements within the Critical Infrastructure Protection (CIP) Reliability Standards for internal network security monitoring (INSM) of all high impact BES Cyber Systems and medium impact BES Cyber Systems with External Routable Connectivity (ERC).”

The NERC Standards Drafting Team (SDT) that addressed Order 887 followed FERC’s instructions closely; the result was that FERC issued its NOPR proposing to approve CIP-015-1 a little less than four months after final approval by the NERC Board of Trustees and submission to FERC. This is lightning fast in the NERC/FERC world.

What also wasn’t a surprise – since FERC does this very often when they approve a new or revised NERC CIP standard – was that FERC proposed to require that NERC add something to CIP-015-1. Specifically, they suggested they will direct NERC to expand the scope of CIP-015 to include high and medium impact Electronic Access Control or Monitoring Systems (EACMS) and Physical Access Control Systems (PACS); the standard submitted to FERC includes only high and medium impact BES Cyber Systems (BCS) - which were all that FERC asked for in Order 887.

As in all cases where FERC has done this, the amendment will not be made to the standard that FERC proposes to approve, namely CIP-015-1. Instead, CIP-015-1 will come into effect as it stands now, once FERC issues their order after the comment period ends. Then another Standards Drafting Team (which could be the same one that developed CIP-015-1) will draft and seek approval for version 2 of CIP-015, numbered CIP-015-2. FERC’s rationale for ordering this change is interesting. It is discussed on pages 14-20 of the NOPR.

There is another interesting aspect of this development, which is nowhere referenced in the NOPR (and since it’s not legally linked with the subject of the NOPR, I would have been surprised if FERC had mentioned it): It is very likely that many (or even most) services offered for INSM will be based in the cloud. And since they will probably provide what a CIP auditor might consider to be “access monitoring”, they may be judged to fall under the EACMS definition: “Cyber Assets that perform electronic access control or electronic access monitoring of the Electronic Security Perimeter(s) or BES Cyber Systems. This includes Intermediate Devices.”

Given that INSM services (or more specifically, the cloud-based software that implements the services) may be considered EACMS, they would need to comply with the large number of current NERC CIP Requirements and Requirements Parts that list EACMS in their scope. As such, they would run into exactly the same problem that other medium and high impact BES Cyber Systems, EACMS and PACS run into, when it comes to the question of implementing them in the cloud: Many of the CIP requirements that the provider would need to comply with would be close to impossible for any cloud service provider (CSP) to implement, unless they were willing to break their cloud business model – for example, by locking the physical assets containing a NERC entity’s BCS, EACMS and PACS in a single room, with access controlled by the entity (in order to comply with the requirements of CIP-006-6). Few if any CSPs will be willing to do this.

Ironically, this means that, if no other changes are made to the CIP standards (or perhaps to related documents like CMEP Practice Guides), NERC entities who wish to comply with CIP-015-1 once the three-year implementation period[i] finishes will have fewer compliance tools available to them than organizations not subject to NERC CIP compliance, since they might not be able to use cloud-based INSM services. This may result in higher costs, reduced functionality or both.

It might seem safe to assume that the cloud/CIP problem, which is now under consideration by a new SDT, will be solved 3 ½ years from now - in other words, that new or revised CIP standards approved by NERC and FERC will be in effect by then. However, I think it’s quite unlikely that those standards will be in place that soon. On the other hand, maybe the fact that CIP-015-1 compliance will be mandatory 3 ½ years from now will help move the process along.




[i] The implementation period for a new or revised NERC standard always starts soon after FERC approves the standard, specifically, after the order is published in the Federal Register.

Thursday, September 19, 2024

NERC CIP in the cloud: Not so fast!

Until January 1, 2024, NERC entities with high and medium impact BES Cyber Systems were effectively “forbidden” to use software-as-a-service (SaaS) applications, if they required access to BES Cyber System Information (BCSI). This wasn’t because of an explicit prohibition in the CIP standards, but rather primarily because of the use of two words, “storage locations”, in previous versions of CIP-004. This problem was (theoretically) corrected when revised versions of two standards came into effect on January 1: CIP-004-7 and CIP-011-3.

The revisions (especially the addition of the new requirement CIP-004-7 R6) officially fixed the problem, yet it seems that NERC entities didn’t get the message. Other than one popular SaaS application for configuration management (which has already been widely used by NERC entities in their OT environments for at least six years), it is safe to say there has been close to zero additional SaaS use since the two revised requirements came into effect.

The primary reason for this result seems clear: Neither NERC nor the Regional Entities have made available clear guidance on how both the NERC entity and the SaaS provider can provide evidence of the entity’s compliance with the new or revised requirements. This is especially true for CIP-004-7 Requirement R6 Part 6.1, which applies to BCSI utilized by the SaaS application. Today, neither NERC entities nor SaaS providers have received guidance (or official guidelines) on how they can show they have complied with the strict wording of Part 6.1.

Part 6.1 appears to require the SaaS provider to request permission from the NERC entity for any individual to decrypt BCSI, so it can be available for processing by the SaaS application (this is needed, since most SaaS applications can’t process encrypted data). Few if any SaaS providers would be willing to do that, considering a) they would need to request permission from each NERC entity individually, and b) the permission would have to be for a particular individual (meaning it can’t apply to all individuals that fulfill a particular role or a similar consideration).

These concerns seem to be overblown. They can probably be addressed if each NERC entity signs a “delegation agreement” with the SaaS provider. The agreement will delegate to the provider the authority to authorize individual staff members for “provisioned access” to the entity’s BCSI, as long as each staff member meets whatever criteria the entity has set in its CIP-011-3 R1 Information Protection Plan (IPP). This seems to be hinted at by a statement on page 13 of the document endorsed by NERC in December as Implementation Guidance for the two revised CIP standards.

However, clearly just a hint on one page of an 18-page document isn’t enough for most NERC entities; it was wishful thinking to believe that this alone would persuade them to put aside whatever doubts they had and plunge wholeheartedly into using SaaS applications that require BCSI access. It will require some NERC document that clearly addresses the problem, like a CMEP Practice Guide.

Moreover, it’s safe to assume that, pending final approval and implementation (within probably 5-6 years) of whatever new or revised CIP standards are developed by the new NERC “cloud CIP” Standards Drafting Team (SDT), any other clarifications that are needed on particular areas of cloud use will require a separate document, such as a CMEP Practice Guide. This includes the question whether it’s fully “legal” to implement a low impact Control Center in the cloud; I said so in a recent post, but I got pushback from a respected former CIP auditor on my reasoning. As long as reasonable people may differ in their interpretations, it’s unlikely that many NERC entities will be willing to be the first kids on their block that venture into any area of cloud use that has previously been considered to be “off limits” to NERC entities.

This experience should teach the CIP community a good lesson: Some of us thought that NERC entities would rush to utilize the cloud whenever the door was even partially cracked open (as in the case of BCSI). However, it’s clear that NERC entities aren’t going to rush into the cloud until they’re sure they’re not running significant cybersecurity or CIP compliance risks. They’re going to require significant guidance and handholding.

Of course, there’s nothing wrong with that. If someone is a wild risk-taker, they shouldn’t be in the electric utility business, where the risks can easily involve human life.



 

Wednesday, September 18, 2024

The NVD's been down so long, it's beginning to look like up


Bruce Lowenthal, Senior Director of Product Security at Oracle, has been regularly updating the OWASP SBOM Forum members on what’s going on with the National Vulnerability Database (NVD); my last post on this topic was this one. His latest update was on Sunday. Here are the highlights (which include additional points he made in emails with me yesterday):

·        The total number of “unenriched” CVE reports (i.e., reports to which the NVD has not assigned a CPE name, meaning a search on a CPE name will not reveal the vulnerability, even if the product the CPE applies to is named in the text field of the report; only a “manual” text search would reveal it) is now 18,790. These are unenriched CVE reports incorporated into the NVD starting in February (when the NVD suddenly and drastically reduced the number of CVE reports it enriched) and ending in mid-September.

·        This number is similar to what Andrey Lukashenkov estimated in the August post linked above, which was “over 18,000”. You might call this good news, since the rate of increase in the backlog seems to be somewhat diminishing. But Bruce’s monthly numbers don’t show a consistent trend.

·        However, the NVD is consistent in one thing: Despite the fact that they built up a backlog of 10,557 unenriched CVE reports from February through May (yielding an enrichment rate of less than 3%), they no longer consider that to be part of their backlog, since they aren’t even trying to enrich any CVEs issued before June. Starting in June, they have been enriching an average of less than 50% of CVEs every month, but they haven’t enriched a single CVE for the February – May period since May.

·        The last announcement they made about their problems was on May 29, when they said “a contract for additional processing support” would allow them to return to their pre-February processing rate “within the next few months”. It’s now almost four months later, and they’re still very far from reaching their pre-February rate, which was close to 100%.

·        On May 29, they also announced, “In addition, a backlog of unprocessed CVEs has developed since February…We anticipate that this backlog will be cleared by the end of the fiscal year.” Of course, they’re referring to the end of the federal fiscal year, which is 12 days from today. Somehow, I doubt they’ll clear their entire backlog of 18,790 unenriched CVEs by the end of the calendar year, let alone the end of the fiscal year. Bruce’s numbers showed that the backlog continues to grow at over 1,000 per month. Given that growth rate, I calculate that the NVD will erase their backlog…envelope, please…never.
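The arithmetic behind that last point is worth spelling out. Using the figures cited above (a backlog of 18,790 that is growing by roughly 1,000 per month), here is a back-of-envelope sketch; the 2,000-per-month comparison figure at the end is purely hypothetical, chosen only to show what a shrinking backlog would look like:

```python
# Back-of-envelope backlog projection using the figures in this post.
# If net change per month is positive (intake exceeds enrichment capacity),
# the backlog never reaches zero, regardless of the starting size.

backlog = 18_790          # unenriched CVE reports, per Bruce's latest count
monthly_growth = 1_000    # approximate net growth in the backlog per month

def months_to_clear(backlog: int, net_change_per_month: int) -> float:
    """Months until the backlog hits zero; infinity if it never shrinks."""
    if net_change_per_month >= 0:
        return float("inf")
    return backlog / -net_change_per_month

print(months_to_clear(backlog, monthly_growth))   # inf - never clears
# Hypothetical comparison: clearing a net 2,000/month would still take:
print(months_to_clear(backlog, -2_000))           # ~9.4 months
```

In other words, no end-of-fiscal-year deadline matters until the NVD’s monthly enrichment rate actually exceeds the monthly flow of new CVEs.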

The NVD has also been consistent in another area: They still have not given an honest explanation for why their processing capability fell off a cliff on February 12. However, this much is now clear: almost all their CVE processing activities were performed by contractors. Another federal agency had been providing about $4 million per year to pay for those contractors; that agency suddenly withdrew the funding in February.

I would love to learn why that other agency withdrew their funding, especially in the middle of the fiscal year. But the bigger question is what the fact that the NVD seems to be relying almost entirely on contractors to enrich CVEs means for the quality of that work. Fortunately or unfortunately, we already know the answer: the quality isn’t good.

While CPE was the first machine-readable software identifier to make the big time nearly two decades ago, its weaknesses have become more apparent in recent years, especially as the purl identifier has been so successful in the open source software world. Even so, the fact that the NVD was putting a CPE on almost all CVE reports made a gradual solution to the problem – i.e., gradually switching to purl as the primary software identifier in CVE reports – seem quite acceptable.

But today, we’re living with about 19,000 CVEs in the NVD (and other vulnerability databases that are based on the NVD) that don’t have a CPE, and this number is growing by over 1,000 a month. Moreover, almost all of these are recent CVEs, which makes the fact that they’re invisible to searches even more galling. It’s like your doctor stopped learning about new diseases in 2019 and hasn’t informed you of that fact. When you go to him with symptoms of Covid, he has no idea what your problem might be.

Automated vulnerability management (the only kind that makes sense for any organization other than a small one) is no longer possible for any organization if they are tied to a vulnerability database that relies exclusively on CPE. But, since by far the most widely used vulnerability identifier is CVE, and a CVE report can currently use only a CPE name as a machine-readable software identifier, this means that most users of vulnerability databases are limited to using CPE[i]. Given the NVD’s huge backlog of unenriched CVEs, this also means that users of CVE-based databases are described by another 3-letter acronym: SOL.

Fortunately, there is another software identifier that has now almost literally taken over the open source software world: purl. In fact, there is hardly any open source vulnerability database that isn’t based on purl. But purl currently has an important shortcoming: it doesn’t support proprietary software products, just open source ones.

This means that, even when CNAs start including purl identifiers in CVE reports, they won’t be able to do so for proprietary software. Since the biggest CNAs are all proprietary software developers (Oracle, Microsoft, etc.), and since most of their CVE reports address vulnerabilities in their own products, most CVE reports today don’t contain a machine-readable software identifier at all (the NVD has usually discouraged the CNAs from creating their own CPE names, and has often substituted its own names for the ones the CNAs did create).

To make a long story short (or shorter, at least), the big inhibitor to replacing CPE with purl in the NVD and other CVE-based vulnerability databases is the fact that there’s currently no agreed-upon way to create a usable purl for proprietary software, even though in principle there should be no problem with doing that (although it won’t be easy). On page 12 of our white paper on software naming, published more than two years ago, the OWASP SBOM Forum described, at a very high level, one method of creating purls for proprietary software; since then we have identified another method as well.
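As an illustration only: the purl specification does include a `swid` type based on ISO SWID tags, which is one plausible vehicle for naming proprietary products. The encoding sketch below is my own simplification (with an invented vendor, product, and tag ID), not the normative swid-type definition:

```python
# Illustrative sketch: building a purl-like identifier for a
# proprietary product from SWID-tag-style data. The purl spec does
# include a 'swid' type, but this field mapping is a simplification;
# the vendor, product, and tag_id below are invented.
from urllib.parse import quote

def proprietary_purl_sketch(namespace: str, name: str,
                            version: str, tag_id: str) -> str:
    enc = lambda s: quote(s, safe="")  # percent-encode each component
    return (f"pkg:swid/{enc(namespace)}/{enc(name)}@{enc(version)}"
            f"?tag_id={enc(tag_id)}")

print(proprietary_purl_sketch(
    "Acme", "Enterprise Server", "1.0.0",
    "75b8c285-fa7b-485b-b199-4745e3004d0d"))
# pkg:swid/Acme/Enterprise%20Server@1.0.0?tag_id=75b8c285-fa7b-485b-b199-4745e3004d0d
```

The key point is that everything needed to build the identifier comes from the vendor’s own product data, so no central naming authority (like the NVD’s CPE dictionary) has to be involved.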

We will shortly come out with a new white paper on how the SBOM Forum proposes to make it possible for CNAs to create machine-readable purl identifiers for proprietary software products identified in CVE reports.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum and its Vulnerability Database Working Group. These two groups work to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit, which are passed through to the SBOM Forum. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


[i] Sonatype’s OSS Index vulnerability database uses CVEs, but links them to purl identifiers, not CPE names. Since the CVE reports don’t include purls now, this means those links have been developed by Sonatype’s own research, not the input of the CVE Numbering Authority (CNA). When they create the CVE report, the CNA describes the affected product in text form, but the CPE name for that product usually is – or was – created by the NVD. Since there can’t be a one-to-one match between a CPE and a purl, Sonatype utilizes an eclectic mix of methods to link CVEs with open source products identified with purls.

Of course, this is certainly better than not being able to link an open source product to a CVE at all, which is why Dependency Track relies primarily on OSS Index for its vulnerability lookups. By the way, the last time I checked (last December), Dependency Track was looking up vulnerabilities for a component in an SBOM 500 million times a month. Given the growth rate they were experiencing then, it’s not hard to believe they’re now at 700-800 million lookups per month. If you’re keeping score at home, that’s 23 to 27 million lookups every day.

Not that Steve Springett (leader of the OWASP CycloneDX and Dependency Track projects) goes around shouting this from the rooftops. He’s not that kind of guy.

Wednesday, September 11, 2024

A good webinar on vulnerability management, and progress on the CVE front!

This morning (Chicago time), I participated in an excellent webinar that was part of Infosecurity Magazine’s 2-day Online Summit. If you missed it, the recording is already available (separately from the other presentations in the Summit) here. Besides me, the guests included Rose Gupta, Senior Security Engineer at Assured Partners, and Lindsey Cerkovnik, Branch Chief, Vulnerability Response and Coordination, CISA.

Rose provided a very articulate discussion of the challenge of putting together a vulnerability management program for a medium-to-large-sized organization (especially given the current problems with the NVD). It was refreshing to have a true end user perspective in the webinar, since it’s unusual to hear such a perspective in a security webinar (or even just a security discussion) today.

Lindsey, who is responsible for KEV (a very successful program, IMO) and for CISA’s coordination with CVE.org (which CISA funds), revealed some of the big challenges that CISA and CVE.org face. Besides the big slowdown in the NVD that started in February (which is still largely unexplained), there has been a huge increase in the number of new CVEs reported: an estimated 36,000 for 2024, vs. about 29,000 in 2023 and about 25,000 in 2022.

Of course, the fact that reported CVEs are increasing is good news, even though it might at first appear otherwise. It’s unlikely that software developers have suddenly had all the knowledge of good coding practices they’ve accumulated over the years wiped from their brains, so they now turn out software that’s loaded with vulnerabilities. Au contraire, the developers are probably a) more aware than ever of weaknesses in their software and b) more willing than ever to inform their customers (and the rest of the world) about a vulnerability after they have made a patch available for it.

Lindsey also pointed to some good news regarding the CNAs – CVE Numbering Authorities. The CNAs are the organizations (now numbering over 400) that assign CVE numbers (e.g., CVE-2024-12345) to new vulnerabilities and create a CVE report for each new CVE. They include large software developers like Microsoft, Red Hat, Oracle and HPE (who mostly report vulnerabilities in their own products), as well as other organizations like GitHub, CISA, MITRE, Amazon and Apache Software Foundation.

The CVE report consists mostly of text when the CNA uploads it to CVE.org; the text describes the vulnerability and identifies one or more products that are vulnerable to it. However, after CVE.org loads the report into their own database, they pass it on to the NVD. Until February 12 of this year, the NVD almost always added three types of machine-readable information to the report (a process called “enrichment”):

1.      CWE (Common Weakness Enumeration);

2.      CVSS (Common Vulnerability Scoring System); and

3.      CPE (Common Platform Enumeration).

In February, the NVD drastically slowed the rate at which they added these three pieces of information to CVE reports that they received from CVE.org. As a result of this slowdown, there are now over 18,000 “unenriched” CVE records in the NVD, which lack these three types of information; these CVE records are essentially invisible to automated searches of the NVD.

For example, suppose a user searches the NVD today for the CPE name of a product they use, and that product has been named in the textual discussion in five CVE reports since February 2024. Since the NVD has enriched fewer than 20% of CVE records this year, that means it’s unlikely the search will locate even one of those CVEs. Thus, the user will go blissfully on their way, thinking their product is quite secure, when in fact it has at least four unpatched vulnerabilities they don’t know about. And since the backlog of unenriched vulnerabilities is increasing almost every day, this problem will only grow over time.
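The mechanics of that invisibility are easy to simulate. Machine-readable product identifiers live in a record’s CPE data; an unenriched record has only its free-text description, so an exact-match CPE search can never return it. The records below are invented and are much simpler than real NVD API responses:

```python
# Simulated CVE records: enriched ones carry CPE names; unenriched
# ones have only a text description. (Structure greatly simplified
# from the real NVD API schema; both records are invented.)
records = [
    {"id": "CVE-2024-0001",
     "description": "Buffer overflow in Example Product 2.1",
     "cpes": ["cpe:2.3:a:example_vendor:example_product:2.1:*:*:*:*:*:*:*"]},
    {"id": "CVE-2024-0002",
     "description": "Privilege escalation in Example Product 2.1",
     "cpes": []},  # unenriched: no CPE was ever added
]

def search_by_cpe(records, cpe_name):
    """Return records whose CPE list contains the queried name --
    the only kind of match an automated CPE-based search can make."""
    return [r["id"] for r in records if cpe_name in r["cpes"]]

query = "cpe:2.3:a:example_vendor:example_product:2.1:*:*:*:*:*:*:*"
print(search_by_cpe(records, query))
# Only CVE-2024-0001 is found; the unenriched CVE-2024-0002 affects
# the same product but is invisible to the search.
```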

In April 2024, CVE.org (aka the CVE Program) decided that the CNAs should start adding at least the first two of the three items listed above, CWE and CVSS score, to each CVE report they create. Since the CNA is often the developer of the vulnerable product, it makes sense that they should understand the cause (CWE) and severity (CVSS score) of the CVE described in the report. What was interesting was that the CVE Program didn’t require the CNAs to do this, by, for example, threatening to reject any CVE report that didn’t include those two items.

Instead, the CVE Program decided to use positive reinforcement to achieve its goal. Rather than beating the CNAs upside the head, it announced it would publish a “CNA Enrichment Recognition List” every two weeks; CNAs that had included a CWE and a CVSS score in at least 98% of the CVE reports they submitted during a two-week period would be recognized in the list published for that period.
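The rule itself is just a ratio. Here is a sketch of how one period’s list might be computed; the CNA names and counts are invented:

```python
# Sketch of the recognition-list rule described above: a CNA makes the
# list if at least 98% of the CVE records it published in the two-week
# period include both a CWE and a CVSS score. Names/counts invented.
cna_stats = {
    "ExampleSoft":  {"published": 50, "enriched": 50},
    "DemoSecurity": {"published": 40, "enriched": 39},  # 97.5% -- misses
    "SampleVendor": {"published": 1,  "enriched": 1},
    "QuietCNA":     {"published": 0,  "enriched": 0},   # inactive: excluded
}

def recognition_list(stats, threshold=0.98):
    """CNAs with no reports in the period simply don't appear."""
    return sorted(
        name for name, s in stats.items()
        if s["published"] > 0 and s["enriched"] / s["published"] >= threshold
    )

print(recognition_list(cna_stats))  # ['ExampleSoft', 'SampleVendor']
```

Note how strict 98% is: a CNA that misses the two items on just one report out of forty falls below the bar for that period.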

In the webinar, Lindsey announced (proudly, I might add) that there were over 100 CNA names on the list for the first two-week period, which was released yesterday. Since there are over 400 CNAs now, it might seem that 100 isn’t something to be proud of. However, a lot of CNAs don’t create any CVE reports in a given two-week period. The fact that about a quarter of those who did create one or more reports enriched them at least 98% of the time is quite impressive. It shows that the CNAs were strongly motivated to do this even though, of course, they received no material reward (don’t be fooled by the fact that the program has “enrichment” in the title. This is the government, after all!).

It's nice to have good news every now and then. It breaks the monotony.


Sunday, September 8, 2024

NERC CIP: Why deploying low impact BCS in the cloud shouldn’t be a problem, after all


Until I wrote this post in August, I had always thought there was no problem with a NERC entity utilizing cloud workloads that meet the definition of low impact BES Cyber System – for example, a renewables producer using cloud-based SCADA software to perform some or all of the functions of its low impact Control Center – even though this same operation would be literally impossible (from a CIP compliance point of view) for a medium or high impact Control Center.

However, in that post, I looked at the most important (in fact, the only) CIP Requirement that applies to low impact BCS: CIP-003-8 Requirement R2. R2 requires the entity with low impact BCS to “…implement one or more documented cyber security plan(s) for its low impact BES Cyber Systems that include the sections in Attachment 1.”

Attachment 1 is found on pages 23-25 of CIP-003-8. It includes five Sections; the Responsible Entity needs to document a plan to address each of those Sections. Using primarily Sections 1 and 3 as examples (although the same argument would apply to Sections 2, 4 and 5 as well), I showed in the post that it would be almost impossible for a NERC entity that utilizes cloud-based low impact BCS to provide the required evidence of compliance at an audit.

The reason I gave for this is the same one I’ve given for medium and high impact BES Cyber Systems: Since the current NERC CIP requirements were written under the assumption that they would apply to on-premises systems (or else systems located on a third party’s premises that are under the complete control of the Responsible Entity), the required evidence could never be provided by the Cloud Service Provider (or SaaS provider). I concluded that technically, utilizing cloud-based workloads is no more “legal” for low impact BCS than it is for medium or high impact BCS, although I also opined that an auditor was much more likely to give the entity a “pass” for low BCS in the cloud than for mediums or highs. Therefore, I didn’t recommend that anybody rip their low BCS out of the cloud, at least for the time being.

However, on further reflection recently, I realized that this argument doesn’t take into account the unique way in which CIP compliance for low impact BCS differs from compliance for high and medium impact BCS. I’ll warn you that my explanation below requires diving into the deepest and darkest recesses of CIP-002-5.1a:

1.      CIP-002-5.1a Requirement R1 starts by listing six types of “assets” (a term which has never been defined, even though it has played a fundamental role in CIP since version 5 came into effect in 2016. Essentially, it refers to locations in which BES Cyber Systems might be deployed. Any system deployed outside of one of those locations will never be a BCS). R1 states that the Responsible Entity should consider each asset that falls into one of those types “…for purposes of parts 1.1 through 1.3”.

2.      Requirement Part R1.1 mandates that the entity “Identify each of the high impact BES Cyber Systems according to Attachment 1, Section 1, if any, at each asset”. R1.2 mandates the same for medium impact BCS.

3.      However, R1.3 reads quite differently from R1.1 and R1.2. It requires the entity to “Identify each asset that contains a low impact BES Cyber System according to Attachment 1, Section 3, if any (a discrete list of low impact BES Cyber Systems is not required).” In other words, the entity isn’t required to identify low impact BCS at all – just the assets that contain them. In fact, the parenthetical expression warns the auditor not to even ask to see a list of low impact BCS.

4.      Clearly, we need to run down to Section 3 of Attachment 1 (page 16 of the standard) to figure out what’s going on here. However, when we get there, we learn that we’re really identifying BES Cyber Systems after all, not assets, as we were just told. Specifically, we need to identify “BES Cyber Systems not included in Sections 1 or 2 above that are associated with any of the following assets and that meet the applicability qualifications in Section 4 - Applicability, part 4.2 – Facilities, of this standard”. This is followed by the same list of six asset types we saw in R1.

Perhaps the above steps seem confusing to you. They certainly are to me, and they have been since I first studied the language of CIP-002-5 immediately after FERC announced in April 2013 that they were going to approve the CIP version 5 standards and “de-approve” the CIP v4 standards (which they had approved almost exactly a year earlier – that’s another story). Here is what I currently understand regarding identification of low impact BCS and the assets that contain them:

As already noted, Requirement R1 Part 1.3 doesn’t mandate that the entity identify low impact BCS, as R1.1 and R1.2 do for medium and high impact BCS respectively. Instead, it requires that they identify “assets that contain low impact BCS”. It refers them to Section 3 of Attachment 1 for information on how to identify those assets.

But Section 3 starts with the words “BES Cyber Systems not included in Sections 1 or 2 above…” That seems to indicate that the entity needs to start the asset identification process by identifying all the BCS they own or operate, regardless of impact level. Then, they subtract from that universe the high impact BCS they identified in Section 1 and the medium impact BCS they identified in Section 2. The BCS that are left are low impact. That sounds simple. What’s the matter with that?

Of course, the problem here is that R1.3 has already stated that “a discrete list of low impact BES Cyber Systems is not required”. So, it seems we didn’t really start by identifying BCS of all impact levels; instead, we started with the universe of BES assets owned or operated by the Responsible Entity. We subtracted the high and medium impact assets identified in the Section 1 and Section 2 criteria. The remaining assets are low impact.
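The residual-identification logic just described can be sketched with simple set subtraction (the asset names are invented for illustration):

```python
# The residual identification process described above, sketched with
# sets: start from all BES assets, remove those that met a medium or
# high impact criterion, and what remains is low impact. Asset names
# are invented.
all_assets = {"ControlCenterA", "SubstationB", "PlantC", "WindFarmD"}
high_impact = {"ControlCenterA"}    # met a Section 1 criterion
medium_impact = {"SubstationB"}     # met a Section 2 criterion

low_impact = all_assets - high_impact - medium_impact
print(sorted(low_impact))  # ['PlantC', 'WindFarmD']

# Note the circularity the post goes on to describe: nothing here
# defines "low impact" except not being medium or high.
```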

However, R1.3 indicates clearly that there are no low impact assets (it doesn’t refer to medium or high impact assets either, of course); there are only “assets containing low impact BCS”. It seems clear that a low impact BCS is simply a BCS contained in a low impact asset.

We’re now at the point where we can answer the question of how to determine when a BES Cyber System is low impact. The answer is that a low impact BCS is one that’s contained in a low impact asset. However, a low impact asset is defined as one that contains a low impact BCS. Of course, this is a perfectly circular argument: A low impact asset is one that contains a low impact BCS, and a low impact BCS is one that is contained in a low impact asset.

Of course, this post is about low impact BCS in the cloud. Even though I said in August that I don’t think low BCS are “legal” in the cloud, I started this post by implying that I’ve changed my mind again: I now believe there is no impediment to deploying low impact BCS in the cloud. I say this because of the “definition” of low impact BCS that I’ve just derived: a BCS that is contained in a low impact asset.

Can “the cloud” ever be a BES asset – low, medium or high impact? Since it’s not on the list of BES assets in CIP-002-5.1a R1, it clearly can’t be a BES asset now. It might be in the future, but the important thing to know about the future is that it hasn’t happened yet.[i] The point is, since a low impact BCS is always deployed in a low impact asset and the cloud isn’t a low impact asset, a workload that would meet the definition of low impact BCS, if it were deployed in one of the six BES asset types, has no meaning for CIP compliance if deployed in the cloud.

Therefore, there is no such entity as a low impact BCS deployed in the cloud, any more than there are entities like Bigfoot or LGMs (little green men) from Mars. None of these entities is subject to compliance with the NERC CIP standards.

Q.E.D.

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!



[i] I don’t think Yogi Berra ever said that, but it certainly sounds like a “Berrism”. Can I patent it?

Tuesday, September 3, 2024

Webinar: “Tackling rising software vulnerabilities sustainably”

Next Wednesday at 10:45AM EDT, I’ll be participating in a webinar with the above title, which is part of the Autumn Online Summit of Infosecurity Magazine. Here’s the official description of the webinar:

Software vulnerabilities are one of the biggest cyber-threats to organizations, with a record number of vulnerability disclosures in 2023. Zero days are being actively exploited by sophisticated threat groups, as demonstrated by the Ivanti vulnerabilities that were discovered earlier this year.

The continuous process of applying fixes to vulnerabilities across all software stacks is an overwhelming task for many organizations. A new strategy is needed to make vulnerability management a sustainable and effective process.

A panel of experts will discuss best practice approaches for a modern vulnerability management program, tailored to business risk and prioritization.

Sign up (for free) to hear:

·        How threat actors target software vulnerabilities, and why this tactic is often so damaging

·        Vulnerability management challenges across an increasingly complex tech stack

·        How to create a sustainable software patch management program, tailored to business needs.

One of the other two panel members is Lindsey Cerkovnik, Branch Chief, Vulnerability Response and Coordination, CISA. In our prep meeting last week, I brought up the fact that the NVD now has a huge backlog of CVEs lacking a CPE software identifier. Since a CVE report without a CPE name is currently useless for purposes of automated vulnerability management, this means any organization searching the NVD for vulnerabilities applicable to the software products it uses will see only a tiny percentage (less than 1%) of vulnerabilities that might apply to those products, if those vulnerabilities were identified after early February.

Lindsey pointed out that CVE reports can now include purl identifiers for software, so the lack of CPEs today might become a non-issue soon. However, I noted that at the moment, there are a number of roadblocks to including purls in CVE reports. These include the fact that there’s no agreed-upon method for creating a purl for a proprietary software product – although I also noted that the SBOM Forum has two good ideas for how to overcome this problem.

It should be an interesting webinar! If you can’t make it, a recording will be available for anybody to access.
