Tuesday, August 27, 2024

NERC CIP and the cloud: the user perspective


As I’ve mentioned before, the informal NERC Cloud Technical Advisory Group (CTAG) and SANS are currently sponsoring a series of six webinars (which will probably be extended by one or two) collectively titled “Cloud Services and CIP Standards – Opportunities and Challenges”; all the webinars are being recorded and posted on the SANS website. The second of those webinars occurred more than two weeks ago. It might end up being my favorite of the six, even though only three have taken place so far. Note that the last three webinars in the list I posted on July 3 have been postponed by a few weeks, but the new dates aren’t certain yet. A seventh webinar will also be added. Once I know the details of these, I’ll post them.

The reason I liked the second webinar so much is that it was focused on the needs of end users. The two presenters – Peter Brown of Invenergy and Luke Oman of Midcontinent ISO – carefully explained why it’s important that the NERC CIP standards be revised as soon as possible, to permit NERC entities with medium and high impact environments to take full advantage of the cloud. Both articulated clearly how the current restrictions on cloud use by NERC entities with medium and/or high impact BES Cyber Systems are complicating life for entities like theirs, and they agreed that these complications are literally making the grid less safe, not more so.

Here are some notes I took on their presentations, although I strongly recommend you listen to the full recording. Peter’s and Luke’s comments are in roman type, while mine are in italics. None of the quotations are verbatim unless set off with quotation marks.

Peter Brown

·        One ironic result of the CIP cloud restrictions is that low impact BES Cyber Systems can use advanced cloud-based security services like endpoint detection and response (EDR). Meanwhile, medium and high impact systems, especially in Control Centers, are restricted to older antivirus (A/V) software, because that is what’s available for on-premises use.

·        Another ironic result is that OT systems that can’t be implemented in the cloud get “left behind” when it comes to access to new services. By this, Peter meant that in many IT departments the focus is on moving to the cloud, so staff members naturally concentrate on the skills that will advance their careers – which of course means cloud skills. OT gets left behind in the “hearts and minds” of IT.

·        One problem that is sure to come up once new CIP standard(s) are developed to fix the “cloud problem” is that NERC entities will be slow to adopt them, since almost nobody wants to be the pioneer.

·        Tom’s note: This problem can probably be mitigated by putting in place the equivalent of the V5TAG, short for CIP Version 5 Technical Advisory Group. The V5TAG was a group of NERC entities – observed by NERC ERO staff and others – who pioneered use of the CIP version 5 standards in 2015 and 2016 with no risk of penalties for non-compliance (CIP v5 was a complete rewrite of the CIP standards and definitions, which is why there was a lot of concern about having a smooth transition). The V5TAG was created after FERC had approved the v5 standards, but before they became enforceable.

·        A good example of slow adoption is provided by CIP-004-7 and CIP-011-3, the revised standards that came into effect on January 1 of this year. They were drafted to finally make use of BES Cyber System Information (BCSI) “legal” in the cloud. However, very few NERC entities are taking advantage of them now.

·        Peter attributes this to the lack of good compliance guidance. The only guidance at all on CIP-004-7 and CIP-011-3 is a pre-existing document that was unexpectedly approved by NERC as “implementation guidance” at the end of 2023. I’ve heard there are already calls to create something better than that; I agree with them. In fact, there needs to be a whole education program on BCSI in the cloud, combined with education on SaaS that utilizes BCSI. The lack of SaaS/BCSI guidance has meant that, other than one SaaS configuration management product that was already in use by a number of NERC entities six years ago, I know of no other use of SaaS with BCSI today.

·        Another reason why NERC entities probably won’t rush to comply with the cloud CIP standards is that early adoption of a new or revised standard requires a big effort. Peter said, “Without helpful information from peers, guidance from the ERO, and being able to learn from others’ audit experience, extra research and time to analyze the guesswork options are required.”

·        A cloud provider can do patching better, and much more efficiently, than any NERC entity by itself.

 

Luke Oman

·        It seems that the “best of breed” software and security service providers are moving to the cloud. This is especially true for services that perform EACMS (Electronic Access Control or Monitoring Systems) functions, including multifactor authentication and external security monitoring. However, if a cloud-based service performs EACMS functions for NERC CIP high and/or medium impact BES environments – even if those functions form only a small portion of the services performed, and even if the NERC entity doesn’t need them – the NERC entity customers will probably be in violation of 100 or more CIP requirements (including those for protecting an ESP and a PSP), because the SaaS provider (or CSP) could never furnish the required compliance documentation for them.

·        Most of all, MISO wants to have options for the software and services they use. As software and service providers move to a cloud-only model, MISO is losing those options. When constrained to just one or two providers of an on-premises solution, they will likely face higher prices, as well as lower levels of service and functionality.

·        Configuration management and physical access control can be performed both better and more efficiently by the CSP.

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Wednesday, August 21, 2024

NERC CIP in the cloud: Time for the Hail Mary play?


Last Friday, the Project 2023-09 Risk Management for Third-Party Cloud Services Standards Drafting Team (SDT) held their first meeting. It’s an excellent and very experienced team (for example, one member has served on a CIP SDT since 2008 – and has contributed a huge amount to the CIP standards in that time); I’m looking forward to attending as many meetings as possible.

However, I realized early in the meeting that the team’s current focus is entirely on the Standards Authorization Request (SAR), which serves as the “statement of work” for the project. Like all NERC SARs, this one was first approved by the NERC Standards Committee in December. In some cases, an SDT is allowed to utilize that version of the SAR unchanged, but in other cases (don’t ask me why) the SDT is required, or at least allowed, to make modifications to that version. Whatever the reason, this SDT is going to spend the rest of this year revising their SAR.

I’m not objecting to the SDT revising their SAR, since, even if the team had started to develop new standard(s) right away, they would first have needed to conduct the same sort of fundamental discussions that started on Friday. In Alice’s Adventures in Wonderland, Alice asks the Cheshire Cat, “Would you tell me, please, which way I ought to go from here?” The Cat replies, “That depends a good deal on where you want to get to”.[i] The SDT knows they’ll never get anywhere unless they first decide where they’re going. The revised SAR will document that decision.

But I also understand that the road ahead for this team is a loooong one. In fact, in this post in January, I estimated that about 5½ years would elapse between when the SDT started developing a new standard and when that standard (plus any required changes to the NERC Rules of Procedure, which may also be needed in this case) came into effect. Specifically, if the SDT started work on a new standard or standards on July 1, 2024 (as I estimated when I wrote the post), NERC entities could expect compliance with those standards to be mandatory by the end of 2029.

What surprised me on Friday was the SDT’s proposed timeline: It starts with the first meeting and ends in mid-December, when the SDT will turn over their revised SAR to the Standards Committee. What will happen to the revised SAR after the SC gets it? They will probably approve it, which shouldn’t take very long.

However, I’m told that the next step will be for the SAR to be put through the NERC balloting process – the same process that the standards themselves will go through once they’re developed. That process almost always requires multiple ballots by NERC members, as well as comment periods in between the ballots. It's safe to say this step alone will require six months. Only when the SAR has gone through that process will the drafting team be able to get to work on drafting whatever new or revised standards are required.

I was quite disappointed when I heard this, since the SAR balloting alone will probably add six months to the timeline for the new CIP standards to come into effect. And because I wasn’t expecting the SAR revision itself to take four months, the two changes together add almost a full year to the whole process. Therefore, my new estimate of when the “cloud CIP” standards will become effective is around the end of 2030 – almost 6½ years from now.

However, as I pointed out in the January post (and have pointed out in other posts since then), the NERC CIP community can’t wait much longer to be able to make full use of the cloud. This is because more and more software and security service providers are announcing they will soon move exclusively to the cloud, or that they will henceforth make improvements to their product available in the cloud first and only later (or sometimes never) in the on-premises version.

In fact, in the SANS/CTAG webinar today – which you can watch here soon – Ruston Johnson of Splunk showed a chart indicating that updates to their on-premises product will occur more slowly than updates to the cloud product (although he emphasized that they have no plans to discontinue their on-premises product, which evidently had been rumored). Both the security and the reliability of the grid will soon be impacted by this trend, although security has already been impacted by it for years[ii].

So what’s Plan B? In April, I described this possible “shortcut” to full cloud usage, based on a reasonable interpretation of the wording of CIP-013-2 Requirement R1 Part R1.1. This seems even more reasonable to me now, given that I don’t know of any good option other than the “nuclear” one allowed by the NERC Rules of Procedure. That option requires the NERC Board of Trustees essentially to declare a “compliance emergency”[iii]. This would be a last resort and in any case is currently out of the question, since there’s not even discussion of it yet.

As far as I can see, my shortcut doesn’t contradict anything in CIP-013 or any other current CIP standard. However, I admit that NERC entities aren’t likely to try this option, unless there’s a statement by some body within NERC that this is a reasonable interpretation of the wording of CIP-013-2 R1.1. I think it’s time to at least look at this more seriously. Six and a half years is a long time to wait for something that’s essential!

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Alice then says, "I don't much care where" and the Cat responds, "Then it doesn't much matter which way you go". It’s hard to argue with that logic!

[ii] I say this because some of the most important security monitoring services are exclusively cloud-based, meaning that, at least since CIP version 5 came into effect in 2016, NERC entities with high or medium impact BES Cyber Systems have not been able to use those services.

[iii] I’m greatly simplifying what this option involves, and “compliance emergency” is my term. The point is that this measure is drastic. It shouldn’t be the first choice for solving this problem.

Monday, August 19, 2024

The purl of great value

In the OWASP SBOM Forum (as well as the OWASP Vulnerability Database Working Group, a part of the SBOM Forum project), we have begun to focus our efforts on what I think is the most important issue in vulnerability management today: the need to extend the purl identifier so it covers proprietary as well as open source software.

We have made a lot of progress in our two most recent meetings, on August 9 and August 16. This is mainly because Steve Springett (leader of the OWASP Dependency-Track and CycloneDX projects, as well as one of the early contributors to purl and still a purl maintainer) was able to join us, along with Philippe Ombredanne, the creator of purl (and still leader of the purl project), who like Steve is a member of the SBOM Forum.

The question we’re trying to answer is how purl can be made an (almost) universal software identifier (it’s already by far the leading identifier for open source software). The most important part of this problem is developing a way to extend purl to cover proprietary (“closed source”) software products. You can read about our efforts so far in this recent blog post.

During the meetings on the 9th and 16th, we discussed two proposals for how purl can be extended to closed source software. These aren’t mutually exclusive, since they would allow purls to be created for different collections of closed source software products. One of these proposals will not be difficult to implement; the other will be difficult (mainly from the human interaction point of view; there are no technical challenges), but certainly not impossible.

The less-difficult (I won’t call it “easy”) proposal is based on an idea that Steve Springett brought up while the SBOM Forum (not yet part of OWASP at the time) was developing this white paper on software identification in 2022. We didn’t include the idea in that paper, but Steve brought it up again when we started discussing how to extend purl two weeks ago.

A little background: purl is based on the concept of a repository for the software binaries – for open source software, usually a package manager. While the name and version string of a software product can vary widely between different package managers, they never vary within one: the product/version pair that identifies a product in a given package manager is always the same, even though the same product in a different package manager may well have a different name or version string, despite the binaries being identical.

This means that anyone who wishes to name a particular product/version available in a package manager like Maven Central can do so using just three pieces of information (many other fields are allowed, but they aren’t mandatory):

1.        The purl Type, in this case “maven”[i];

2.        The name of the product; and

3.        The version of the product (i.e., the version string).[ii]

This means that the purl created by the organization that reports the vulnerability (perhaps in a CVE report) should always exactly match the purl created by an end user or developer who wants to find out about vulnerabilities in a software product they use. If they are the same product and version and they’re found in the same package manager, the purl will always be the same, unless the person who created it made a mistake. No central database lookup is required to find the correct purl, as there is for the CPE identifier (and even finding a CPE name through an NVD search doesn’t guarantee that it’s the same product the CVE applies to – see the discussion on pages 4-6 of our 2022 white paper).[iii]
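
To make this concrete, here is a minimal sketch (in Python, using the packageurl-python library maintained under the purl project) of building a purl for a package in Maven Central from just those three pieces of information; the package shown is only an example:

    from packageurl import PackageURL

    # Build a purl for an example package in Maven Central from just the
    # purl type, the package name (plus its namespace) and the version string.
    purl = PackageURL(
        type="maven",
        namespace="org.apache.commons",
        name="commons-lang3",
        version="3.12.0",
    )
    print(purl.to_string())
    # pkg:maven/org.apache.commons/commons-lang3@3.12.0

    # Anyone starting from the same three facts derives the identical purl,
    # so no central database lookup is needed to match the reporter's purl
    # with the user's.
    assert purl == PackageURL.from_string(
        "pkg:maven/org.apache.commons/commons-lang3@3.12.0")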

Of course, proprietary software – software developed by commercial organizations like Microsoft, which usually isn’t made available for free, at least not through the developer’s commercial distribution channels – isn’t available in package managers. However, Steve Springett realized that online software “stores” like the Apple App Store, Microsoft Store and Google Play are very similar to package managers: they offer a huge number of products, all of which can be downloaded from a single location – the store.[iv]

Steve suggested that it wouldn’t be hard to add a purl Type for any software store that wishes to participate. There are a lot of online software stores, although the three I just mentioned are probably the biggest; each probably offers millions of individual product/versions for download. Since Steve helped Philippe Ombredanne develop purl originally and is now a purl maintainer, he knows what he’s talking about when he says this.

Since Steve is already working with Apple on something else now, he will try at least to identify the person there we need to talk to about this. If they’re interested, maybe we could work with them as a guinea pig on this idea. However, we can certainly use multiple guinea pigs, so if you are part of an online software store, or know of a store that might want to work with us, please email me.

The second proposal for extending purl to cover proprietary software is the one the SBOM Forum described on pages 11 and 12 of the white paper linked earlier, although that paper gave no details on how it would be implemented. The idea (which was Steve’s, of course) was to create a new purl type called SWID. The software supplier would create a SWID tag and distribute it with the binaries for each new product/version. An end user who wants to search for vulnerabilities in a product they use can then create the correct purl from the information in the SWID tag.

The full SWID specification is complex, but fortunately creating a purl with the SWID Type is straightforward. In fact, Steve has developed a purl SWID type generator that just requires input of the required fields.
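
To give an idea of what this looks like, here is a sketch of building a swid-type purl from a tag’s required fields, again using the packageurl-python library; the entity, product and tag_id values below are all made up for illustration:

    from packageurl import PackageURL

    # Hypothetical values that would come from a SWID tag: the entity name
    # and regid, the product name, the version, and the tag's tagId.
    purl = PackageURL(
        type="swid",
        namespace="Acme/example.com",  # entity name/regid (made up)
        name="EnterpriseServer",       # product name from the tag (made up)
        version="1.0.0",
        qualifiers={"tag_id": "75b8c285-fa7b-485b-b199-4745e3004d0d"},
    )
    print(purl.to_string())
    # pkg:swid/Acme/example.com/EnterpriseServer@1.0.0?tag_id=75b8c285-fa7b-485b-b199-4745e3004d0d

Steve’s generator automates exactly this step, so neither the supplier nor the user needs to assemble the purl by hand.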

However, Steve pointed out in last Friday’s meeting that he isn’t sure this specification is really going to be a useful identifier; he needs some software developers to test the spec with their products – i.e., a proof of concept. If you work for a developer who might want to participate in this, please email me.

I was quite happy that last Friday’s meeting produced ideas for two concrete steps – finding a software store willing to test Steve’s first idea, and conducting a small proof of concept of Steve’s second idea – that will move us forward on extending purl to cover proprietary software. But implementing both of these ideas will take a fair amount of work.

Let me repeat why it’s important that we move in this direction: because the NVD appears to be close to dead in the water, CPE is probably near death as well, since its existence is closely tied to the NVD. As the SBOM Forum explained in our 2022 white paper (pages 4-6), CPE is a very problematic identifier. While I’m not advocating that the 20-something years of CVE/CPE correlations currently found in the NVD be thrown away, I don’t want CPE to be the only show in town much longer. The sooner we can extend purl to proprietary software, the sooner it can take over as the primary software identifier worldwide.

I want to point out that I would personally love to lead both the above efforts. However, I’m already donating a large amount of time to the SBOM Forum and the Vulnerability Database Working Group. Being an independent consultant, I can’t donate more than that, but if we can get financial support, I could lead both efforts. Organizations or individuals can give “restricted” donations to support these two efforts through OWASP (a 501(c)(3) nonprofit organization) and have them directed to the SBOM Forum. In many cases, this donation will be tax-deductible.

Please let me know if you or your organization can donate time, funds or both to this project!  


[i] For a complete list of purl types, go here.

[ii] The version string is technically optional, but it is hard to think of many use cases in which it would not need to be included. This especially applies to vulnerability databases (our main concern, of course), since a vulnerability should always be reported only for the product version(s) where it is found. For example, saying that a vulnerability is found just in “Oracle Server”, without specifying the version(s) of Oracle Server, is meaningless.

[iii] For an in-depth discussion of the importance of “intrinsic identifiers” like purl – as opposed to extrinsic identifiers like CPE – see the SBOM Forum’s 2022 white paper.

[iv] That the software in an online store is for sale, whereas the software in a package manager is available for free, doesn’t change the fact that the two are functionally identical: both provide a single download location for many software products, in which the name of each product will not change between versions.

Saturday, August 17, 2024

Do you want an update on the NVD? Are you sure you want to see it?

On Friday, before our weekly OWASP SBOM Forum meeting, Bruce Lowenthal of Oracle and Andrey Lukashenkov of Vulners provided the group an update on how the National Vulnerability Database (NVD) is doing in getting out of the huge hole it started digging for itself on February 12 of this year. The last update I provided was exactly one month ago. You’ll be pleased (?) to hear they’re making progress: the hole is getting deeper by the day!

The “hole” I’m referring to is the number of new CVE (vulnerability) reports released by CVE.org that the NVD has not “enriched” by adding a CPE name. To learn why having a CPE name (or more than one) in the report is so important, see the post I just referenced, as well as this post. Before February 12, the NVD enriched almost every new CVE report it received. Starting on that day, however, the number of CVEs enriched dropped to literally zero on some days, and not much more than that on others.

A month ago, Andrey said the NVD had a backlog of 17,000 vulnerabilities since February 12; yesterday, he said the backlog is “well over” 18,000. Bruce added some great details:

1.      After enriching only a tiny percentage of CVEs between February 12 and June 1, the NVD at least started enriching some again – although they were still increasing the backlog by 75-100 CVEs a day (an estimate I developed based on data Andrey provided me a month ago).

2.      However, even that seems to have been too good to last: Bruce says the NVD now appears to have stopped enriching June CVE reports, after enriching about 37% of them. Not only have they abandoned June CVEs, but they have mostly skipped over July altogether and are concentrating on August CVEs.

3.      He provided some interesting details: since last Friday, 652 new CVEs have been published by CVE.org for July and August (and none for June, mind you, so June remains at 37% enriched). But the NVD only enriched 37 CVEs from July and 174 from August. Given that we’re in the middle of August but July is over, one would think they would be enriching many more from July than from August; instead, it’s the other way around.

4.      In any case, the 211 CVEs enriched since last Friday mean the backlog grew by 652 – 211 = 441, or roughly 60 per day – somewhat below the 75-100 daily range I estimated a month ago, but still substantial. At least you have to credit the NVD’s consistency: they’re growing the backlog by a fairly steady amount every day. On the other hand, I don’t think they deserve hearty thanks for that dubious achievement.

5.      And what has the NVD said about all this? The last time they said anything about the backlog was May 29, when they announced, “We anticipate that this backlog will be cleared by the end of the (government) fiscal year.” Let’s see…the FY ends on September 30, which is 44 days from now. To be charitable, let’s say the backlog is currently 18,000 and growing by 61 a day (the daily rate at which it grew since Friday, August 9); that means it will be 20,684 on September 30. Since they’re enriching about 30 CVEs a day (at least since last Friday), they will have to crank up their daily enrichment rate from 30 to 470 per day – an increase of over 1,400 percent (the quick sketch after this list reproduces the arithmetic)! Do you think they can do this?...I didn’t think so.
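
Here is the quick sketch mentioned above, reproducing that back-of-the-envelope arithmetic in Python (the inputs are my estimates, not official NVD figures):

    # Back-of-the-envelope check of the NVD backlog estimate above.
    backlog_today = 18_000   # estimated unenriched CVEs as of mid-August
    growth_per_day = 61      # observed daily backlog growth since August 9
    days_left_in_fy = 44     # days until September 30

    backlog_at_fy_end = backlog_today + growth_per_day * days_left_in_fy
    print(backlog_at_fy_end)               # 20684

    required_rate = backlog_at_fy_end / days_left_in_fy
    print(round(required_rate))            # 470 CVEs enriched per day

    current_rate = 211 / 7                 # about 30 CVEs enriched per day
    print(round((required_rate / current_rate - 1) * 100))  # about 1460 (%)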

The worst part about this whole episode (if that’s the right word; “total collapse” would be better) is that they’ve provided zero useful information on a) the cause of the problem, b) when the problem will be fixed (meaning they’ll not only eliminate the backlog, but get back to an enrichment pace with zero backlog growth), and c) what they’ll do to prevent the problem from recurring. Given their silence, I have to assume the answers are:

a)      We aren’t going to tell you.

b)     Never.

c)      Nothing.

Have a good day!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum and its Vulnerability Database Working Group. These two groups work to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit, which are passed through to the SBOM Forum. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


Friday, August 16, 2024

What’s needed in the “cloud version” of NERC CIP?

Today (August 16) marks the first virtual meeting of the Standards Drafting Team called “Project 2023-09 Risk Management for Third-Party Cloud Services”. This team will draft the changes to the NERC CIP standards (and perhaps the NERC Rules of Procedure as well) that are required to finally make full use of the cloud completely “legal” for all NERC entities subject to compliance with the CIP standards. I plan to attend that meeting (which is very unlikely to get into any details regarding what will be proposed), but here – for the record – are the changes that I think are most needed:

First, it’s vital that CIP compliance for on-premises systems be as unaffected as possible by these changes. Of course, CIP for cloud systems will have to be very different from the CIP standards that are in place today, but if we try to create a single compliance program for both environments, we will probably end up with a debacle like what occurred in 2018, when a different SDT started down the road of radically revising CIP for everybody and got slapped down hard for doing so.

I fully supported that team’s efforts, and I still believe it’s necessary that all of CIP be “reformed” – although not today. However, the pushback the 2018 team (which is still in operation today as the CIP Modifications SDT) received from the large IOUs (who understandably were reluctant to throw out compliance programs in which they’d already invested millions of dollars) forced the SDT to retreat. Let’s not repeat the 2018 experience. Been there, done that, got the T-shirt.

Instead, this new effort needs to result in two “tiers” of CIP compliance (as I discussed in this post) - one almost exactly like the existing CIP standards and only applicable to on-premises systems, and one that applies only to cloud-based systems. Fortunately, the Standards Authorization Request (SAR) on which the group is operating recommends this system, although the SDT is always allowed to do something differently from what the SAR recommends.

Second, it’s vital that any new requirements for cloud-based systems be risk-based, not prescriptive. Compliance with prescriptive requirements like CIP-005 R1, CIP-007 R2, and CIP-010 R1 would literally be impossible for systems based in the cloud. There are far more ways in which cloud systems can be implemented than could possibly be accounted for in prescriptive requirements.

Third, and probably most important (as well as hopefully most obvious), any requirements that apply to cloud-based systems cannot be device-based. There is simply no way that a cloud service provider can keep track of individual devices (whether physical or virtual or both) on which a cloud-based system like a BES Cyber System is installed, especially since the BCS (or more accurately, parts of the BCS) could be installed on different devices, in different data centers, the next minute. The concepts of Cyber Asset, BES Cyber Asset and Protected Cyber Asset have no place in the cloud; the “cloud track” CIP requirements need to start with identification of BES Cyber Systems, independently of the hardware or VMs on which they may be installed at any particular moment.

Fourth, any requirements that apply only to cloud systems need to take into account the fact that CSPs, given the slew of heavy compliance regimes they need to follow all the time, have almost certainly covered the basics of cybersecurity much better than any electric utility, no matter how large, could ever do. Let’s not worry about whether Humongous CSP no. 1 has applied every O/S patch released in the last 35 days, or whether Gargantuan CSP no. 2 is constantly verifying that only ports and services that are required to be open in fact are open. Let’s assume that the FedRAMP and ISO 27001 auditors have taken care of those issues (although it’s important to at least check the audit results made available to customers by the CSP).

Instead, how about asking these questions?

1.      Does the user database in a SaaS application include users from every country and industry, as well as every security level – an issue called multi-tenancy?

2.      Does the CSP adequately vet its third-party access brokers?

3.      Does the CSP take adequate steps to ensure its customers understand how securing infrastructure in the cloud is different from securing it on-premises?

My guess is none of these questions (or many others like them) is even asked in a FedRAMP or ISO 27001 audit. These are questions that the drafting team should focus on.

Fifth, it can’t be up to each NERC entity to judge the level of security of a CSP. For one thing, the CSPs would never accept being “audited” by every entity; for another, most entities are likely to do a very superficial assessment of a CSP whose services they use, since they depend as much on the CSP’s good graces as the CSP depends on theirs.

Instead, NERC – or a third party engaged by NERC – needs to do an assessment of each platform CSP (and perhaps larger SaaS providers as well), based on questions like the ones immediately above, as well as inspection of their FedRAMP and/or ISO 27001 audit results (and penetration tests, in the case of FedRAMP). They then need to share this assessment with any NERC entity that asks to see it.

NERC’s assessment shouldn’t be up-or-down; instead, it should simply describe the information the assessors received from the CSP, including answers to questions and any audit results. The NERC entities are all free to contract with anyone they want.

I am going to follow the SDT’s deliberations as much as I have time for and will be posting on what I learn (and other recommendations, I’m sure). Stay tuned!

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


Thursday, August 15, 2024

We have our work cut out for us

Ever since we released this white paper almost two years ago, some other members of the OWASP SBOM Forum and I have come to realize that in the long run there's no alternative to making purl the universal software identifier for vulnerability databases. However, the long run has now become the short run, because the NVD (National Vulnerability Database) has dug itself into a deep hole in the creation of CPE names, which it uses to represent both open source and commercial software products. The NVD essentially stopped adding CPE names to CVE reports in early February of this year.

Why is this a problem? Because a CVE report without a machine-readable software identifier is like a car without a steering wheel: You know there’s a new vulnerability, but there’s no automated way to learn what software products are vulnerable to that CVE. Of course, you can always read the textual product descriptions in every backlogged CVE report, if you have time for that - but most of us want an automated way to do this. 


CPE is the only software identifier supported by the NVD. Before February, the NVD staff read every new CVE report released by CVE.org and assigned a CPE name to the vulnerable product(s) named in the report. A user could then enter that CPE name in the NVD search bar to look up all the CVEs that affect that product (and usually, one version of the product).
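
To make this concrete, here is a sketch of the same kind of lookup done programmatically rather than through the search bar, using the NVD's public CVE API (version 2.0); the CPE name shown is just an example:

    import requests

    # Look up CVEs that apply to a product, identified by its CPE 2.3 name,
    # using the public NVD API. The CPE below is just an example.
    cpe = "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"
    resp = requests.get(
        "https://services.nvd.nist.gov/rest/json/cves/2.0",
        params={"cpeName": cpe},
        timeout=30,
    )
    resp.raise_for_status()
    for vuln in resp.json().get("vulnerabilities", []):
        print(vuln["cve"]["id"])

    # If the NVD never assigned this CPE name to a new CVE report, that CVE
    # will simply never show up in a query like this one.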


However, since February the NVD has mostly stopped doing this. The NVD now has a backlog of more than 17,000 CVE reports that are missing CPE names; that backlog is growing every day. This means that searching for vulnerabilities that apply to a product today will miss most vulnerabilities that were identified from February on. Since it’s likely that the great majority of NVD searches are for new vulnerabilities, this means that automated vulnerability management based on the NVD is currently impossible for almost all practical purposes.


Besides CPE, the only other widely used software identifier today is purl. Purl is already by far the leading (and for all practical purposes the only) software identifier for open source software (OSS) packages in vulnerability databases that are dedicated to OSS. For a description of how purl came about and why it’s so special, see this article by Philippe Ombredanne, the creator of purl (and a member of the OWASP SBOM Forum). 


While purl has without a doubt become the leading identifier for open source software worldwide, it has no generally accepted way to represent proprietary software today; this is the biggest problem preventing purl from becoming the “universal software identifier”.


Before last Friday’s SBOM Forum meeting (August 9), I asked Philippe and Steve Springett (leader of the OWASP Dependency-Track and CycloneDX projects and also an SBOM Forum member; Steve worked with Philippe to specify purl in the early days and is still a maintainer of the purl project) to discuss options for extending purl to proprietary software.


My request to Philippe and Steve to speak to the SBOM Forum last Friday was partly based on a request from Bruce Lowenthal, Senior Director of Product Security at Oracle (and - you guessed it! - a member of the SBOM Forum). Like me and many others, Bruce is concerned about how software users can easily learn about newly identified vulnerabilities that apply to software products they use to fulfill their organization’s mission. 


The SBOM Forum’s 2022 white paper detailed (on pages 4-6) some of the many problems caused by reliance on CPE names as the primary software identifier, and urged a gradual move away from relying completely on CPE. However, we knew at the time that this would be a huge job. Given that before this February the NVD was doing a decent job of producing CPE names for all products identified in the text of CVE reports, living with CPE’s problems for a few more years didn’t seem like a big sacrifice, while the foundations were laid for an eventual move to purl as the primary identifier for all software. However, the NVD’s actions (or lack thereof) since February 12 have lent new urgency to solving this problem.


This wasn’t the first time we’d had this discussion. In 2022, as the SBOM Forum (not yet part of OWASP) was discussing our white paper on fixing the problems with software identification in the NVD, we realized we needed to describe some way that purl could be extended to cover proprietary software. Both in 2022 and again last Friday, Steve pointed out a fairly easy way to accommodate a large number of proprietary software products, although nowhere near all of them. Steve’s idea was (and is) quite ingenious.


The central feature of purl is the fact that it doesn’t require any central database. As the 2022 paper explains at length, the purl for an open source package in a package manager is based on the name of the package manager (which determines the “Type”), as well as the name and version string of the software in the package manager - that’s all that’s required. Because the package manager is a controlled namespace, anyone who accesses the package manager will find the same product name and product version string. Therefore, everyone will be able to create the same purl for the product. There’s no need to look anything up in order to find its purl - at least for open source packages. If you have downloaded the product, you already have all the information you need to create the purl.


Steve’s idea was that a proprietary analog to a package manager is an online store like Google Play or the Apple App Store. Like the package manager, the store is a controlled namespace; the name and version string of the software won’t vary within the store. Therefore, a purl type could be created for the store; the actual purl would just consist of that type, plus the name and version string of the product in the store. As long as someone obtained their software from the store, they should always be able to create the correct purl for it in the format scheme:type/namespace/name@version?qualifiers#subpath (the namespace, version, qualifiers and subpath components are optional).
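
To be clear, no purl type for an online store exists yet, but here is a purely hypothetical sketch of what one might look like, assuming a made-up type "appstore" were registered for some store:

    from packageurl import PackageURL

    # Purely hypothetical: "appstore" is a made-up purl type, and the
    # product name and version are invented for illustration.
    purl = PackageURL(
        type="appstore",
        name="contoso-editor",
        version="4.2.1",
    )
    print(purl.to_string())
    # pkg:appstore/contoso-editor@4.2.1

Because the store is a controlled namespace, anyone who obtained the product from the store would derive this same purl.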


Using purl to identify software in online “stores” will without a doubt reduce the number of proprietary software products that aren’t currently covered by purl. However, there remain a lot of proprietary products (perhaps the majority) that are only available through other means, like salespeople or call centers. For these products, Steve proposed a solution based on SWID tags.


In the SBOM Forum’s 2022 white paper, we proposed a new purl type called “SWID” (which Steve then got added to the purl spec). Steve suggested that software suppliers could create a SWID tag for each of their product/versions. The required fields in the SWID tag (meaning the fields that must be present in the tag in order to create a purl with type SWID) are listed here. They form a small subset of the 80 or so fields available in SWID, but they are all that’s needed to create the purl.


The original idea behind SWID was that the tags would be distributed with the software binaries (and some suppliers like Microsoft did distribute SWID tags with all of their software for a couple of years). When end users want to learn about vulnerabilities in their software, they can retrieve the SWID tag from the binaries and create the purl; in fact, Steve has developed a purl generator that takes its input from the SWID tag. The user can then use the purl to look up vulnerabilities identified in the product.


However, the two biggest problems with this proposal are (a) it doesn’t address how SWID tags will be created for legacy software, and (b) it doesn’t address how SWID tags (for both legacy and current software) can be discovered online. There are various ways that both of these goals can be accomplished, but some group needs to take it upon itself to describe a workable solution for both. I’m suggesting that the supplier that currently owns a legacy product should be responsible for creating a SWID tag for every version of that product, and that each supplier could publish a text file of SWID tags for its legacy and current products at a well-known location on its website – e.g., companyname.com/swid.txt (see the sketch after this paragraph). But there are certainly many other ways to create and distribute purls for proprietary software, whether or not they’re based on SWID tags.
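
Here is the sketch I mentioned of how that discovery scheme might work. To repeat, neither the swid.txt convention nor its format exists today; the URL, file layout and field values below are all invented for illustration:

    import requests
    from packageurl import PackageURL

    # Hypothetical: fetch a supplier's SWID tag summary file from a
    # well-known location and build a swid-type purl for each entry.
    # Assume one "name version tag_id" triple per line (invented format).
    resp = requests.get("https://companyname.com/swid.txt", timeout=30)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        name, version, tag_id = line.split()
        purl = PackageURL(
            type="swid",
            namespace="Acme/companyname.com",  # entity name/regid (invented)
            name=name,
            version=version,
            qualifiers={"tag_id": tag_id},
        )
        print(purl.to_string())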


The SBOM Forum and the purl working group could take on responsibility for this, but it will require funding to do it properly. The funding can be in the form of donations to OWASP that are restricted for the SBOM Forum. Any organization interested in donating for this purpose should email Tom at tom@tomalrich.com.


Even if these efforts can start soon, it will probably take at least 2-3 years before purl can become the universal software identifier that’s needed. However, there isn’t any alternative that I can see. There’s no longer a reliable source of CPE identifiers for CVE records, and there seems to be very little enthusiasm for trying to change that situation. This isn’t surprising, since CPE has lots of problems.


This means it will be at least 2-3 years before truly automated vulnerability management is possible again. That is discouraging, of course, but what would be worse is if automated vulnerability management were never possible again, other than for open source software. Automated vulnerability management for open source software is alive and well today, because almost all vulnerability databases for OSS use purl to identify software packages. This shows there’s really no alternative to starting work now on making purl the software identifier for proprietary software as well.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum and its Vulnerability Database Working Group. These two groups work to understand and address issues like what’s discussed in this post; please email me to learn more about what we do or to join us. You can also support our work through easy directed donations to OWASP, a 501(c)(3) nonprofit, which are passed through to the SBOM Forum. Please email me to discuss that.

My book "Introduction to SBOM and VEX" is available in paperback and Kindle versions! For background on the book and the link to order it, see this post.


Thursday, August 8, 2024

NERC CIP and the cloud: Are lows “legal”?

Yesterday, SANS and the unofficial NERC Cloud Technical Advisory Group (CTAG) sponsored the second webinar in a series of six on use of the cloud by NERC entities subject to compliance with the CIP Reliability Standards. You can listen to the recording here; if you want to register for the next webinar, go here. You will have to sign up for each webinar separately, although you can “register” for a webinar even while it’s in progress.

One of the speakers was my longtime friend Peter Brown. He is with Invenergy, one of the largest renewable energy providers. He gave a good presentation on how they use the cloud today, and more broadly on how he sees the renewable energy industry using the cloud. He pointed out that many renewables providers only have low impact BES Cyber Systems. He said that made it easy for them, since there are no real limitations on deploying low impact systems in the cloud.

I used to say the same thing – until earlier this week, when I was looking through CIP-003-8 R1 and R2 (the substance of R2 is in Attachment 1, found later in the standard). There is a problem with these two requirements, which becomes apparent if you ask the question, “How will the entity provide compliance evidence for this requirement if some of their BES Cyber Systems are deployed in the cloud?”

Let’s start with R1.2, which requires that a NERC entity with low impact BCS develop policies for cyber security awareness, physical and electronic security controls, etc. Let’s say that for awareness, the entity develops a policy that reads roughly, “We will conduct multiple cybersecurity awareness activities for our staff, including emails, posters, and lunch ‘n’ learns every month.” Of course, they can provide lots of evidence that they have followed this policy.

But what about evidence covering their CSP, if they have BCS in the cloud? Is the CSP bound to follow this policy for its own staff members? Of course, the CSP’s own awareness policy might well be stricter than the NERC entity’s, due in part to its need to comply with ISO 27001, FedRAMP, etc. But nowhere in the NERC CIP requirements or Rules of Procedure is there any mention of utilizing compliance with another organization’s standards as evidence of compliance with NERC CIP requirements. In fact, there’s widespread agreement among NERC enforcement staff members that reliance on “the work of others” (meaning other auditing bodies) is not acceptable for determining NERC CIP compliance.

Let’s look at Section 3 of CIP-003-8 Requirement R2 Attachment 1. That requires the NERC entity with low impact BCS to “Permit only necessary inbound and outbound electronic access as determined by the Responsible Entity for any communications…” that are between a BES Cyber System installed in the low impact asset and any system outside the asset. Of course, in most cases this means that the entity should deploy a firewall and make sure all open ports have a business justification. Again, this is something the cloud provider is without a doubt already doing for compliance with other standards, but there’s no way for the NERC entity to point to that fact as evidence that the CSP is in compliance with this section.

In fact, it seems that the only way the NERC entity with low impact BCS in the cloud can demonstrate compliance for the CSP is to get them to provide evidence that every physical or virtual system, in any data center, that happened to contain any part of any one of their BCS for even a few seconds during the 3-year audit period, is protected by a firewall that is properly maintained. Needless to say, that isn’t going to happen. If you ask the same question for the other parts of CIP-003-8 R1 and R2, you will find the same situation: there is no way for the NERC entity to provide compliance evidence on behalf of the CSP. Moreover, even if the CSP were willing to provide evidence themselves, in many cases it would be physically impossible.

Of course, this is very ironic, since if anything the CSP maintains a much higher level of security than any customer ever could – at least, as far as the requirements in CIP-003 are concerned.[i] But as we all know, that fact alone means very little. The only thing that matters, when it comes to proving compliance with any NERC Requirement, is the wording of the requirements and the Rules of Procedure.

However, it will probably be 5-6 years before the changes needed to make use of the cloud completely acceptable for low, medium and high impact BES assets – changes to the CIP requirements, and most likely the NERC Rules of Procedure as well – are in place. Do we just need to be patient and wait that long for this problem (and the more serious problems that effectively prohibit most cloud use by medium and high impact BES environments) to be fixed?

There is general agreement among NERC, the Regional Entities (including the auditors), the NERC entities themselves, the CSPs and other vendors (and even FERC staff members, although they’ll never confirm this for you) that 5-6 years is too long to wait. More and more software and service providers (especially security service providers) are announcing that they will either move exclusively to the cloud in a couple of years, or at least they will freeze development of their on-premises version and just develop new capabilities for the cloud version from now on. There are already impacts to grid security due to this situation. They will only continue to grow, with the result that grid reliability may itself be affected – and “reliability” is NERC’s middle name!

Fortunately, the NERC CTAG is doing more than organizing webinars. We have recently formed a sub-group that is starting to draft at least two NERC CMEP Practice Guides (CMEP stands for Compliance Monitoring and Enforcement Program), one on BCSI use in the cloud and the other on the meaning of “access control and monitoring” in the definition of Electronic Access Control or Monitoring Systems (EACMS).[ii] CMEP Practice Guides are intended to provide guidance to auditors on particular technical subjects; they aren’t meant to be interpretations of the NERC Reliability Standards. Two auditor-focused NERC ERO committees need to approve a CMEP Practice Guide, so it’s possible these could be approved within a year (this is just my guess).

I haven’t discussed this with the CTAG yet, but it seems like the issue with low impact systems in the cloud might also be addressed through the CMEP process, so maybe there’s some hope in this matter.

In any case, I don’t think any NERC entity with low impact BES Cyber Systems that are deployed or used in the cloud today should pull them out tomorrow because of what I’ve just written. There’s been no official NERC notification on this subject, and I doubt there will ever be a notification[iii], until the problem is fixed.

Are you a vendor of current or future cloud-based services or software that would like to figure out an appropriate strategy for selling to customers subject to NERC CIP compliance? Or are you a NERC entity that is struggling to understand what your current options are regarding cloud-based software and services? Please drop me an email so we can set up a time to discuss this!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] There are other areas of security in which the CSPs have work to do, for instance in vetting third parties who resell their cloud services. The new drafting team for cloud services should focus on those areas. I’m willing to stipulate that the CSPs are doing a good job of patch and configuration management, although there needs to be a way to document that fact without requiring a pointless “audit” of the CSP’s basic security practices.

[ii] A number of cloud-based security services are currently not available to medium and high impact NERC entities because auditors believe those services meet the EACMS definition. Even worse, some heavily used security monitoring services that are delivered on premises today have already announced, or soon will, that their regularly updated versions will be delivered exclusively from the cloud. In fact, some services that may be needed to provide internal network security monitoring for CIP-015 compliance (which will be due in 3-4 years; don’t panic yet) may also be out of reach for NERC entities because of the current interpretation of “access monitoring” in the EACMS definition.

[iii] You may be surprised to learn that notification in this blog doesn’t take the place of official NERC notification, despite the fact that NERC almost bought the blog on April 1, 2015.