Friday, April 28, 2023

Progress to report on the NVD!

This morning, the SBOM Forum met with three members of NIST’s NVD team (including the leader of the team) to discuss problems we know about pertaining to the NVD (listed below). We discussed a small number of the most serious problems and – of course – didn’t reach any solutions during the hour. However, the Forum members were quite pleased with NIST’s concern about these problems (almost none of which they knew about), as well as their willingness to discuss solutions. In fact, we’re going to have a follow-up meeting in about a month; I suspect/hope there may be more after that – perhaps every month.

The NIST people immediately brought up the fact that NIST hasn’t received its share of funding from the omnibus funding measure that Congress passed at the end of last year. This means all of NIST is receiving just enough money to keep the lights on. The NVD group has 25 people at full strength, but at the moment they have only 19 (I believe that’s the number they said) and can’t hire anyone else. This is really bare bones.

However, I think they were surprised at our response (in fact, I was a little surprised at it myself). There was broad support (among the 22 or so people on the call, including people from major software suppliers as well as companies and nonprofits involved with SBOMs) for the idea of private companies banding together to help NIST address the problems listed below. Of course, this assistance wouldn’t be with dollars, but with something even better: help from some very knowledgeable people.

The interesting thing about the NVD’s problems is that they’ve all been solved repeatedly; there’s nothing that requires new technologies, massive investment, etc. They can mostly be solved with changes in policies and procedures and education (of stakeholders) about those changes.

We started to discuss having this meeting a couple of months ago; then, our focus was the naming problem (for which we outlined a solution – as far as the NVD goes – in this paper last September). However, it seems there are currently some serious infrastructure problems – made worse as we approach September of this year, when NIST was planning to end the ability to download a “mirror” of the NVD (which many organizations do now, some many times a day). After our conversation today, my guess is mirroring won’t be removed in September, although there are certainly other infrastructure issues that need to be addressed (again, none of them huge or involving new technologies).

All good news. I’ll keep you up to date with new developments.

 

Issues for the NVD – discussed with NIST 4/28/23

CPE issues raised in the SBOM Forum’s September 2022 document

1. If a supplier has never reported a single CVE as applicable to any of their products, there will be no CPE for them. It is likely that the great majority of suppliers in this position have never reported a CVE not because they have never found a vulnerability in their products, but because they have never looked for one. However, if a user searches on the supplier’s name, they will receive the “There are 0 matching records” message.

The problem is that this same message will be received when there is a CPE for the supplier, but no CVEs have been reported for the products they make. That might mean the supplier is very diligent about identifying and fixing vulnerabilities. However, since the message is identical in both cases, there is no way to determine which is the case.

2. There is no error checking when a new CPE name is entered in the NVD. Therefore, if the CPE name originally created for the product was entered in a way that doesn’t exactly follow the specification, a user who later searches for the same product using a properly specified CPE will receive the same message: “There are 0 matching records”.

In other words, when a user receives this message, they might interpret it to mean there is a valid CPE for the product they’re seeking, but no vulnerability (CVE) has ever been identified for that product – i.e., it has a clean bill of health. In reality, it would mean the CPE name was created improperly. Even worse, there might be many CVEs attributed to the off-spec CPE name; without knowing the name that was actually created, the user will never be able to learn about those CVEs.
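
To make the missing error checking concrete, here is a minimal sketch (my own illustration, not anything the NVD actually runs) of the kind of well-formedness check that would catch an off-spec name at entry time; a CPE 2.3 formatted string has exactly 13 colon-separated fields:

```python
import re

# cpe:2.3:part:vendor:product:version:update:edition:language:
#   sw_edition:target_sw:target_hw:other
# "part" is one of a (application), h (hardware), o (operating system);
# colons inside a field value must be backslash-escaped.
CPE23_RE = re.compile(r"^cpe:2\.3:[aho](?::(?:[^:\\]|\\.)*){10}$")

def looks_like_cpe23(name: str) -> bool:
    """Cheap well-formedness check; a real validator would also enforce
    the allowed character set of each attribute per the CPE 2.3 spec."""
    return CPE23_RE.match(name) is not None

# A properly specified name passes...
assert looks_like_cpe23("cpe:2.3:a:openssl:openssl:1.0.2:*:*:*:*:*:*:*")
# ...while a name that silently drops a field would be rejected instead
# of becoming a permanent, unfindable entry.
assert not looks_like_cpe23("cpe:2.3:a:openssl:openssl:1.0.2:*:*:*:*:*:*")
```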

2a. If a user misspells the CPE name they enter in the search bar, but a legitimate CPE name does in fact exist, the user will once again get the “There are 0 matching records” message. If the user doesn’t realize they misspelled the name, they might again believe the CPE is there but has no vulnerabilities (CVEs) attributed to it.

3. When a product or supplier name has changed since a proprietary product was originally developed (because of a merger, acquisition, or corporate rebranding), the CPE name for the product may change as well, but with no link to the previous name. Thus, a user of the original product may not be able to learn about new vulnerabilities identified in it, unless they know the name of the current supplier as well as the current name for the product.

4. Supplier and product names can be written in many ways, such as “Microsoft™” and “Microsoft™ Inc.”, or “Microsoft™ Word” and “Microsoft Office™ Word”, etc. However, there is no easy way to distinguish the correct supplier or product name among a large number of query responses from the NVD.

5. Sometimes a single product will have multiple CPE names in the NVD because they were entered by different people, each making a different mistake. This makes it hard to decide which name is correct. Even worse, there may be no “correct” name, since each of the names may have CVEs entered against it. This is the case with OpenSSL in the NVD now (e.g., “OpenSSL” vs. “OpenSSL_Framework”). Because no single CPE name contains all the OpenSSL vulnerabilities, the user needs to find the vulnerabilities associated with each variation of the product’s name, as the sketch below illustrates. But how could they ever be sure they had identified all the CPEs that have ever been created for OpenSSL?
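
Here is a sketch of the guessing game this forces on users, assuming the NVD’s CVE API 2.0 and its cpeName parameter (the second name variant below is invented for illustration):

```python
import requests

NVD_CVE_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Variants of what should be one product's CPE name. In practice there
# is no reliable way to enumerate every variant ever created.
candidate_cpes = [
    "cpe:2.3:a:openssl:openssl:1.0.2:*:*:*:*:*:*:*",
    "cpe:2.3:a:openssl_project:openssl:1.0.2:*:*:*:*:*:*:*",  # invented
]

all_cves = set()
for cpe in candidate_cpes:
    resp = requests.get(NVD_CVE_API, params={"cpeName": cpe}, timeout=30)
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        all_cves.add(item["cve"]["id"])

# The union is only as complete as the list of variants you guessed.
print(f"{len(all_cves)} CVEs across {len(candidate_cpes)} name variants")
```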

6. Often, a vulnerability will appear in only one module of a library. However, because CPE names are not assigned to individual modules, the user may not know which module is vulnerable unless they read the full CVE report. Thus, if the vulnerable module is not installed in a software product they use, but other modules of the library are (meaning the library itself is listed as a component in an SBOM), the user may apply a patch or perform other mitigations unnecessarily[1].

7. The supplier of a product doesn’t determine the CPE name (even if they are a CNA). But if NIST makes a mistake in specifying the name, the supplier will hear about it from unhappy customers.

8. There’s no way for a supplier to “self-onboard” to the NVD. They have to wait until there’s a new CVE that applies to them, then get a CNA (or MITRE) to assign a CPE for them.

9. To reduce or eliminate errors in CPE names, it would be good for the NVD to automate creation of CPEs, as CVE.org has done with creation of new CVEs.

10. It would be good if the NVD people just used CPEs created by the supplier, instead of inventing new ones (and often making mistakes).
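
As an illustration of what items 9 and 10 are asking for, here is a sketch of deterministic CPE creation from supplier-provided metadata. The normalization rules are my own invention for illustration, not anything NIST or CVE.org has specified:

```python
import re

def make_cpe23(vendor: str, product: str, version: str) -> str:
    """Illustrative only: derive a CPE 2.3 name deterministically from
    supplier metadata, so that two people entering the same product
    cannot end up creating two different names."""
    def norm(value: str) -> str:
        value = value.strip().lower()
        value = re.sub(r"\s+", "_", value)           # spaces -> underscores
        return re.sub(r"[^a-z0-9._-]", "", value)    # drop stray punctuation
    return (f"cpe:2.3:a:{norm(vendor)}:{norm(product)}:{norm(version)}"
            ":*:*:*:*:*:*:*")

print(make_cpe23("OpenSSL", "OpenSSL", "1.0.2"))
# -> cpe:2.3:a:openssl:openssl:1.0.2:*:*:*:*:*:*:*

# Normalization alone can't merge genuinely different strings
# ("Microsoft" vs. "Microsoft Inc."), which is why item 10 asks that the
# supplier itself provide the canonical name.
```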

Other NVD issues

1. Many organizations download a “mirror” of the entire NVD regularly; however, that capability will be removed starting in September 2023. It has been “replaced” by the API, but there’s already experience indicating that having multiple large NVD users within one organization may lead to serious bottlenecks.
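
For readers who haven’t used it, the replacement API works roughly as sketched below (based on my reading of the API 2.0 documentation: results come back in pages of at most 2,000 records, and public clients are rate-limited, which is where the bottleneck worry comes from):

```python
import time
import requests

NVD_CVE_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_all_cves(pause_seconds: float = 6.0):
    """Page through the entire CVE collection. With 200,000+ CVEs and a
    2,000-per-page cap, a full sync takes 100+ requests per client --
    one reason many large consumers preferred the single mirror file."""
    start = 0
    while True:
        resp = requests.get(
            NVD_CVE_API,
            params={"startIndex": start, "resultsPerPage": 2000},
            timeout=60,
        )
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("vulnerabilities", [])
        start += data["resultsPerPage"]
        if start >= data["totalResults"]:
            break
        time.sleep(pause_seconds)  # stay under the public rate limit
```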

2. When a user identifies the need for an update or correction to any data, they must use email to request it. While NIST usually responds to the email within a week, a more efficient system than email might allow updates to be made within a day and require much less human involvement with each update.

3. A lot of the historical data in the NVD contains errors. These need to be cleaned up.

4. Because of the coming tsunami of SBOM-based component vulnerability searches, as well as increased interest in software supply chain vulnerability management in general, it is important that the NVD be easily scalable. While the SBOM Forum members don’t have direct knowledge of whether and how the NVD can scale, we do wish to emphasize that the NVD should be considered critical infrastructure and treated as such.

5. There are single points of failure in the NVD, including the CPE dictionary. There is a serious question of how well it will withstand the millions of daily requests that will start hitting it in September.

6. In the NVD CVE downloads, one cannot determine which of the products referenced by CPEs actually contains the vulnerability. In CVE-2021-44228, for example, there are many dozens of CPEs from various vendors, such as Cisco, ServiceNow, Bentley, Percussion and NetApp, to name a few. However, from the CPE entries alone, there is no way to determine which of those products contains the vulnerability (in this case, Apache Log4j). Often the vulnerable product’s entry comes first (typically Configuration 1), but that is not always the case. The request is that the NVD downloads indicate which of the CPEs directly contains the vulnerable code.
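
A sketch of the problem, using the JSON layout of the NVD’s API 2.0 records as I understand it: every CPE entry in a CVE’s configurations carries the same boolean “vulnerable” flag, so walking them tells you nothing about which product actually contains the code.

```python
def vulnerable_cpes(cve_record: dict):
    """Yield every CPE that a CVE record (API 2.0 layout) marks as
    vulnerable. For CVE-2021-44228 this yields Cisco, NetApp, Siemens,
    etc. products right alongside Apache Log4j itself; nothing in the
    data distinguishes the product that contains the vulnerable code."""
    for config in cve_record.get("configurations", []):
        for node in config.get("nodes", []):
            for match in node.get("cpeMatch", []):
                if match.get("vulnerable"):
                    yield match["criteria"]
```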

7. In addition to general scalability considerations, it is also important to consider the international dimension: The EU, UK, Japan and other countries are currently consulting about improving software security for their nations. They will need a single reliable and trustworthy source of vulnerability information, as opposed to having separate nation-backed solutions. Having separate solutions will make tooling inefficient and will likely lead to misleading results in many cases. 

Because the NVD is already widely used worldwide, it is the best candidate for this solution. Of course, the cost of implementing a worldwide solution needs to be borne by users (in both private and public sectors) in all participating countries. The fact that users in many countries will be contributing to the cost of developing and maintaining a central vulnerability database, as opposed to users in each country individually developing and maintaining their own national database, will undoubtedly allow a much higher level of investment in the central database, resulting in greater performance and functionality for all.

8. In the NVD CVE downloads, the CVSS score for each of the CPE entries should be made available if the vendor supplies that information. Thus, for the Log4j CVE-2021-44228, there could be the overall CVSS score (10.0 Critical, in this case) along with the CVSS scores for CPEs referencing Siemens products, Cisco products, etc. CVSS scores from a vendor (vs. NIST or other sources) should be identified as such, so there is no confusion. This would greatly aid organizations reviewing the NVD to determine the urgency of applying published remediations.
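
For what it’s worth, the records already carry one CVSS score per source, as sketched below (field names per my reading of the API 2.0 schema); the request here is to extend that down to individual CPE entries:

```python
def cvss_scores_by_source(cve_record: dict):
    """Yield (source, type, base score) for each CVSS v3.1 metric on a
    CVE record (API 2.0 layout). Type "Primary" is NVD's own analysis;
    "Secondary" is typically a CNA or vendor. Today these apply to the
    whole CVE, not to the individual CPE entries."""
    for metric in cve_record.get("metrics", {}).get("cvssMetricV31", []):
        yield (metric["source"], metric["type"],
               metric["cvssData"]["baseScore"])
```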

9. There is a question whether the NVD query structure could be optimized to make it more efficient: i.e., to have a greater likelihood of producing positive results.

10. Greater transparency is needed. All too often, the NVD mailing list is only informed of an upcoming change when it’s already been decided on, or even implemented.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[1] This probably occurred often with log4shell. That vulnerability was present only in the log4j-core module of the log4j library, yet the CPE name for a library always applies to the complete library, not to any of its modules.

Wednesday, April 19, 2023

It’s time to retire the candy bar


Ever since SBOMs were invented by Thomas Edison (I think I’m correct in that statement), almost every introductory presentation has focused on the analogy to a list of ingredients in some processed food, often a candy bar.

The narrative always goes something like, “Just like you want to know the ingredients of a candy bar to learn if there’s something listed that could present risks to your physical health, so you want to learn about the ingredients (components) of the software you use, to see if they might present risks to your cyber health. If you find something like that, you might ask the supplier to mitigate that risk in some way by replacing a risky component. Or you might decide that continuing to use this software is too risky and you’ll find another supplier.”

There’s another implication to this narrative: that the supplier is negligent and acting against your best interests if they don’t replace a component that has a serious vulnerability – or even if they don’t regularly replace components that have reached a certain age. In fact, I would bet there are contracts today that include language requiring suppliers to take actions like these when certain conditions are met.

However, considerations like these are based on an assumption that I unconsciously made until last Friday: that the components of a software product or intelligent device are for the most part independent of each other. In other words, as long as a supplier can find a component of roughly equivalent functionality that poses less risk, or perhaps develop the required code themselves, they should be expected, or even required, to replace the problematic component.

However, last Friday at our weekly meeting of the SBOM Forum, we discussed a problem that is well known among software developers, although it hasn’t been discussed at all (as far as I remember) in the NTIA or CISA meetings on SBOMs. This is the problem of “tight coupling” between components, meaning it is very hard to replace one component without at the same time replacing others – in some cases, many others. There can be various technical reasons for this, but the effect of all of them is the same.

This is especially the case for big, complex systems that have been built up over years, if not decades. They can have millions of lines of code and many thousands of components. They can literally get to the point where it’s almost impossible to replace a single component in them.

What are these big, complex systems? Hopefully, they’re not very important, so it’s not a big deal if the supplier can’t replace components anymore, right? Unfortunately, it’s just the opposite. Here are several examples of systems like this (without saying any particular system suffers from this problem):

1.      Huge military systems

2.      Energy management systems (EMS), which some electric utilities (and Independent System Operators/Regional Transmission Organizations) use for the most crucial function on the grid: balancing power supply with load in a given area, whether a city or 13 states (as in the case of the PJM RTO). This has to be done in real time. Unfortunately, with power, having too much supply at any moment is as bad as having too little: you can’t put surplus power on the shelf for a time when it’s needed more, and it doesn’t simply evaporate. Either a surplus or a deficit of power supply (or an excess or deficiency of demand) will throw the grid out of synch, depending on how long the disturbance lasts (and we’re talking a few seconds at most).

3.      Electronic medical records (EMR) systems, which run an entire hospital. It’s nice that the FDA is going to require SBOMs for medical devices in hospitals, but if I – who has only limited experience with hospital systems, and that was years ago – were to point to the biggest source of cyber risk in a hospital today, I would point to the EMR system. Yes, an attack on an infusion pump might kill the person attached to it. But an attack on the EMR system - usually through ransomware - might disrupt an entire hospital, resulting in deaths. This has happened in Germany (with ransomware as the cause, not out-of-date components) and I’m told it’s almost certainly already happened in the US.

Thus, the only way a supplier of one of these systems can address a component vulnerability is to patch it (assuming the component vulnerability is exploitable in the product). Of course, patching a new vulnerability is something suppliers do all the time. What’s important in the case of tightly coupled systems is that, since components can’t usually be replaced or upgraded, the supplier will have to patch old vulnerabilities in components. In a less tightly coupled system, these would probably be fixed through replacing the component.

The practice of patching old vulnerabilities in old components (or in old products in general) is called backporting. A lot of suppliers have to do this. But having to do a lot of backporting can pose a serious perception problem for the supplier, when they start to release SBOMs.

Why does backporting patches pose a problem? Because, when the supplier releases an SBOM for a product like this, users (or a service provider or tool acting on their behalf) can be expected to look up the components in the NVD or another vulnerability database. They will probably be shocked when they find such old components, but they will be even more shocked when they find there are so many old vulnerabilities that apply to those old components, which have never been patched by the component supplier. The supplier may have removed the vulnerability with a backported patch, but that won’t show up when a tool looks up the component in the NVD.

Since the logical steps may not be clear, I’ll list them (a short code sketch of the resulting false positive follows the list):

1.      Say a tightly coupled software product ABC includes component XYZ version 1.4. V1.4 was released by the component supplier (which might be an open source project) in 2017.

2.      Shortly after the component was released, the supplier reported to the NVD that CVE-2017-12345 was identified in XYZ v1.4. The supplier later released a new version of the component, v1.5, in which the vulnerability was fixed.

3.      Because ABC is tightly coupled, its supplier has not been able to upgrade or replace component XYZ v1.4. Instead, they patched CVE-2017-12345 in ABC.

4.      When ABC’s supplier starts to release SBOMs for their products, customers will notice that a) component XYZ is far behind the current version of the component, and b) the old CVE-2017-12345 is still shown for XYZ v1.4 (since the NVD doesn’t know whether a particular vulnerable component has been patched in a particular product; in fact, the NVD doesn’t know which products include which components).

5.      The customers start complaining to ABC’s supplier about this old vulnerability. They have to be told one-by-one that CVE-2017-12345 isn’t exploitable in ABC, because the supplier backported a patch for it years ago.
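
In code, the false positive in step 4 looks something like the minimal sketch below. The component, the CVE and the lookup function are all made up, mirroring the fictional names in the steps above:

```python
# Made-up data mirroring steps 1-5 above.
sbom_components = [("XYZ", "1.4")]        # parsed from the SBOM for ABC
backported_fixes = {"CVE-2017-12345"}     # known only to ABC's supplier

def lookup_cves(name: str, version: str) -> set[str]:
    """Stand-in for an NVD query. The NVD only knows about the component;
    it knows nothing about what any product embedding it has patched."""
    return {"CVE-2017-12345"} if (name, version) == ("XYZ", "1.4") else set()

for name, version in sbom_components:
    for cve in lookup_cves(name, version):
        # The tool cannot consult backported_fixes -- that knowledge lives
        # with ABC's supplier, not in the NVD -- so every backported patch
        # surfaces as an apparently open vulnerability.
        print(f"ALERT: {name} {version} affected by {cve}")
```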

Of course, if the above sequence were to be a rare occurrence, it might not pose a problem for a supplier. But, given that a tightly coupled product will likely contain a large number of backported patches, this won’t be a rare occurrence. In fact, a security person from a large service provider told the SBOM Forum last year that the fact that their main system contains a lot of backported patches was the number one reason they were holding back from releasing SBOMs for their products (fortunately, they believe they have solved this problem, and they are now getting ready to release SBOMs).

Thus, the main problem with a tightly coupled system is a perception problem, not a security one, since the software is presumably safe because of the backported patches. The problem becomes visible when the supplier starts to distribute SBOMs.

How can this perception problem be solved? Through communication, as is the case with most perception problems. Of course, the supplier can tell their customers about the problem through emailed PDFs, but can they also include the information in an SBOM? Since the SBOM itself led to the perception problem, can the problem also be corrected using an SBOM?

As it turns out, it can. The CycloneDX SBOM format includes a “pedigree” field, which can be used to record these changes. Note that, in order for the contents of this field to be acted upon automatically, the tool used to ingest the SBOM will need to include rules on how to react to the contents of the pedigree field. Since those rules are likely to be specific to the organization involved, they may always require custom development.[i]
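
For example, here is roughly what a component with a backported security patch might look like in CycloneDX JSON (a sketch only; the field names follow my reading of the CycloneDX pedigree spec, so check the current schema before relying on this exact shape):

```python
import json

# Sketch of a CycloneDX component whose pedigree records a backported
# security patch, using the fictional component and CVE from the earlier
# example. Field names are my reading of the spec, not verified output.
component = {
    "type": "library",
    "name": "XYZ",
    "version": "1.4",
    "pedigree": {
        "patches": [{
            "type": "backport",
            "resolves": [{"type": "security", "id": "CVE-2017-12345"}],
        }],
    },
}
print(json.dumps(component, indent=2))
```

A consuming tool with a rule like “suppress alerts for CVEs that a backport patch resolves” could then handle step 5 above automatically – but, as noted, someone has to write that rule.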

In SPDX, this is done through Relationships, specifically using these two fields:

PATCH_FOR: to be used when SPDXRef-A is a patch file for (to be applied to) SPDXRef-B. Example: a SOURCE file foo.diff is a patch file for SOURCE file foo.c.

PATCH_APPLIED: to be used when SPDXRef-A is a patch file that has been applied to SPDXRef-B.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Steve Springett, chair of the CycloneDX Core Working Group, pointed out that CycloneDX also includes the OBOM (Operational Bill of Materials) format, which can be created separately from or together with SBOM information; he said that the backporting information could also be handled in an OBOM, although I didn’t try to verify that.

Saturday, April 15, 2023

Will they hang the cowboy?

Last month, after the National Cybersecurity Strategy was released, I published this post on it. I didn’t object to the document in general, but rather to one specific section, which seeks to make software suppliers liable for breaches by assuming they’re always at fault – although they could receive a “get out of jail free” card if they could show they followed NIST’s Secure Software Development Framework (SSDF). Never mind that there are plenty of other ways a supplier could be liable for a breach that have nothing to do with development practices. I elaborated on what I said in the first post in this second post.

To be honest, after those posts, and a third post based on conversations on LinkedIn with Dale Peterson, I put the issue out of my mind, since I didn’t hear anything more about it. I thought it was likely that the idea had already been consigned to the trash heap where it belongs.

However, I was disappointed to read in NextGov this week that Jen Easterly, CISA Director, and Kemba Walden, Acting National Cyber Director, said the following at a recent meeting:

“We can't allow the end user to be held liable for flaws in code,” Walden said. “It's just that simple.” Easterly echoed this stance, saying that the design of secure software will have to pivot at a market level to incentivize the manufacturing of systems created with a safety-first approach.

In my second post, I had described a thought experiment, in which a trial judge is determining liability for a devastating ransomware attack and learns that the vulnerability that enabled the breach was there because of a mistake the supplier made in the development process (furthermore, I stipulated that this breach could have been avoided if they had followed the SSDF better). It would seem this is a textbook example of what Easterly and Walden are talking about, right? If the trial ended at that point, I’d tell the jailhouse personnel to start readying the gallows for the unfortunate defendant.

However, I then imagined that the defendant would get to state their case - under a quaint doctrine that liability should be determined by a judge or jury, not by some random person in DHS or the White House, and that both sides should be allowed to present their case. But that doctrine is sooooo 20th Century. 😊

In my thought experiment, the defendant (the developer) points out that they discovered the flaw about a year after the vulnerable product had shipped; they immediately patched the flaw, but the company that was breached never applied the patch. In fact, they didn’t apply any patches that came out over the next three years. Since this supplier provides cumulative patches, all the company had to do was apply any one of those patches and the vulnerability would have been closed. The developer also showed evidence that other customers of the same product, that had applied the patch, were never breached, despite indications that they were attacked by the same ransomware group.

It seems ridiculous that this idea should even have gotten this far without dying a well-deserved death, but I would also think that considerations like my thought experiment would finally put it to rest. However, it seems someone did bring up a consideration like mine, and either Walden or Easterly said something like the following during the meeting (unfortunately, I read at least one other story on this meeting, and I don’t know where I saw this): software suppliers, instead of fixing problems in their code before shipping it, just wait for others to discover vulnerabilities (sometimes the hard way, by getting hacked). Then the suppliers deluge their customers with patches – and if the customer has missed a single patch, the supplier will claim any breach isn’t their fault. Oh, the perfidy!

The only problem with this idea is it makes no sense if you know anything about vulnerabilities. It seems to assume that:

1.      The number of software vulnerabilities in the world never changes (in fact, about 25,000 new CVEs were identified in 2022).

2.      One goal of a secure development process is to make sure that every one of this fixed set of vulnerabilities is patched and therefore not exploitable.

3.      This means that no security patches will be needed after a software product ships. Ergo, if a supplier issues a lot of patches for their products, this just means they’re lazy. They deserve to be liable for a breach, since they didn’t have the foresight to patch every possible future vulnerability before they shipped their product.

I hope nobody reading this post will need an explanation of why the above is pure nonsense. But if you do, I have one word for you: log4shell.

What surprises me most is that DHS, and especially CISA, is filled with people who could have corrected the misperceptions behind this section of the strategy document. Why didn’t they?

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Wednesday, April 12, 2023

What about attestations?


I have come up with Alrich’s Law of Supply Chain Cybersecurity Innovation: No matter what you dream up as something that would be good to have in the world of supply chain cybersecurity, Steve Springett[i] has already dreamed it up and is in the process of implementing it in CycloneDX. It’s like I had made climbing the seven highest mountains in the world my life’s goal (you can tell I’m joking about this!) and as I summited each one of them, I found Steve sitting in a camp chair, pouring coffee from a thermos and enjoying the view.

So I wasn’t surprised when Steve recently posted on LinkedIn about attestations. He said the OWASP CycloneDX project will be adding to the already impressive list of capabilities built on the CycloneDX framework by providing a Bill of Attestations. (For a complete list of BOMs currently supported, about to be supported in CycloneDX 1.5 – which is due out this quarter – or planned for future versions of CycloneDX, see this slide deck he recently posted on LinkedIn.) The point is that organizations need to make attestations all the time to regulatory bodies, customers and others. Wouldn’t it be nice, both for the attestor and the recipient of the attestation, if there were a machine-readable format for providing attestations?[ii]

Soon there will be. And you can help Steve develop it as well! Details in the LinkedIn post.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[ii] Steve made sure to point out that the original idea for this came from Jeff Williams, founder of Contrast Security and originator of the OWASP Top 10.

Thursday, April 6, 2023

Where are we with VEX? For that matter, where are we with SBOMs?


Last week, Dale Peterson released a podcast based on his interview with Matt Wyckhouse of Finite State. They had a good discussion about the state (no pun intended) of SBOMs and VEX, especially in light of the “SBOM Challenge” at the recent S4 conference, in which Finite State participated.

Regarding the state of SBOMs, I think what Matt said was close to what I would have said: SBOMs have already proved wildly successful among developers (in large part due to the efforts of the NTIA Software Component Transparency Initiative, which was led by Allan Friedman). However, SBOMs are hardly being requested at all by end users, i.e., organizations whose primary business isn’t developing software (and yes, there are still at least a few of those organizations left; software hasn’t quite “eaten the world”, much though it might seem like that to those of us who make our living in some realm related to software).

Of course, Dale already knew this, but what he said had struck him during the SBOM Challenge was that at least SBOMs are making progress. On the other hand, VEX hasn’t gotten anywhere today – either among developers or end users (I’ll admit I’m reading a little into what Dale said, but I don’t think I’m changing his meaning). VEX isn’t being widely used anywhere (while SBOMs are being heavily used in at least one industry that I know of: the German auto OEMs, who are receiving and acting on SBOMs provided by big component manufacturers like Bosch. But that use case is entirely based on open source license management concerns, not cybersecurity. Moreover, I think the methods used to facilitate this exchange might be challenged by the FTC in the US, as a possible violation of US antitrust law. So this use case probably couldn’t be replicated here).

In fact, it became clear during the Challenge that the different parties – i.e., Idaho National Laboratory, which organized the Challenge, and the five or so participating vendors – had very different ideas about what constitutes a VEX, and there wasn’t any clear documentation on which they could all rely. If that’s the situation (which it is), then VEX won’t be going anywhere until this problem is solved.

The worst part of this situation is that the lack of understanding of VEX doesn’t just inhibit distribution and use of VEX documents. IMHO, this lack of understanding is most likely one of the two main reasons why SBOMs themselves aren’t being distributed or used by non-developer organizations (I discussed this situation in this post and in the document linked in the post. Note that I think there’s a third problem holding back SBOMs – the lack of easy-to-use commercially-supported consumption tooling. However, that problem is itself mostly caused by the other two problems: VEXual ambiguity and the “naming problem” – so they need to be addressed first).

There was another statement I heard last week that further indicated that the lack of VEX clarity is holding back SBOMs themselves. In one of the CISA SBOM workgroup meetings, someone who works for one of the largest software suppliers in the world, and who I believe is leading that supplier’s SBOM efforts, hinted that his organization was debating whether it could release SBOMs “without VEX”. (Of course, like some other major suppliers, that organization is willing to provide SBOMs for some products to customers who ask for them, but it isn’t “distributing” them, and certainly not doing so regularly – which is required for SBOMs to really be used, IMO.)

My guess is that a lot of software suppliers are debating the same question, and they’re all coming up with the same answer: probably over 95% of component vulnerabilities, identified by parsing an SBOM and looking up the components in a vulnerability database like the NVD, aren’t exploitable in the product itself. This means that, if a supplier releases SBOMs without providing customers with information (in machine-readable form, of course) that separates exploitable from non-exploitable component vulnerabilities, they will end up with a) a support meltdown, as customers jam the help lines and email by the thousands to demand the status of this or that non-exploitable component vulnerability, and b) a lot of unhappy customers, who don’t appreciate being left to spend hours chasing down false-positive vulnerabilities, only to be told later that they needn’t have bothered.
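
That machine-readable information is exactly what a VEX statement is supposed to carry. As a rough sketch in CycloneDX form (field names reflect my reading of the CycloneDX VEX documentation, and the CVE and BOM reference are invented), a single statement might look like this:

```python
import json

# One hypothetical VEX statement: the CVE applies to a component of the
# product, but the product itself is not affected. This is the signal
# that lets a tool discard the false positive without a support call.
vex_statement = {
    "id": "CVE-2017-12345",                    # invented for illustration
    "source": {"name": "NVD", "url": "https://nvd.nist.gov/"},
    "analysis": {
        "state": "not_affected",
        "justification": "code_not_reachable",
        "detail": "Fixed via a backported patch; see release notes.",
    },
    "affects": [{"ref": "product-ABC-sbom-ref"}],  # hypothetical BOM ref
}
print(json.dumps(vex_statement, indent=2))
```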

As I’ve said before, “No VEXes, no SBOMs (at least, SBOMs distributed to end users).” We have to figure out a) what the problem or problems are that need to be addressed, regarding false positive component vulnerability notifications, then b) how they can be addressed (it’s fairly clear by now that simply drawing up specs for VEX documents and congratulating ourselves on the great progress we’re making isn’t the answer to our problems).

In the near future, I plan to put up a high-level post outlining my ideas on this topic, although I’ve stated most of them in some form or other in past posts. A detailed discussion of this topic needs a book.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Sunday, April 2, 2023

NERC’s new supply chain cyber risk management guidelines

I’ve been part of the NERC[i] Supply Chain Working Group (SCWG) since it started up about five years ago. In 2019, the group developed about seven short guidelines on supply chain cybersecurity risk management; these are all being updated now (plus, Tobias Whitney of Fortress Information Security is leading development of a new document on Procurement Sourcing, which looks to be quite interesting). I led the teams that developed two of these guidelines, as well as the teams that updated both of them last year.

Both guidelines have recently received final approval from the NERC Reliability and Security Technical Committee and have been posted on NERC’s website. The documents are Supply Chain Cybersecurity Risk Management Lifecycle and Vendor Risk Management Lifecycle. Leading the groups that developed and revised both of these was a great experience; I think both documents are worth reading by anybody involved in supply chain cybersecurity for critical infrastructure. If that fits your job description, you may want to review both of these. A few points about them:

First, don’t be fooled by the fact that they’re NERC documents. There is almost nothing in them that applies only to the electric power industry. Since NERC is entirely focused on operations, both documents are appropriate for what I call “OT-focused” industries: gas pipelines, oil refineries, power generation and transmission, pulp and paper mills, manufacturing of all types, etc. In all these industries, Job Number One is protecting the availability of the process by which the industry makes its money.

“IT-focused” industries are those for which protection of the confidentiality and integrity of data is the most important consideration, such as banking, insurance, consulting, most government agencies, etc. While there are many supply chain cybersecurity considerations that apply to both groups (e.g. they both need to ensure the integrity and availability of their network infrastructure devices), there are other considerations that mostly apply to one or the other (e.g. the vendor’s protection of customer data is a concern mainly for IT-focused industries, since often OT-focused industries will not provide any operational data at all to their vendors).

Also, neither of these documents provides guidance on compliance with the NERC CIP standards, including NERC CIP-013, the standard for supply chain cybersecurity risk management. That being said, CIP-013 R1 requires the NERC entity to develop a good supply chain cybersecurity risk management plan for their OT systems, and both of these documents point to elements that might be included in such a plan.

Last, I want to point out that there are a few pages of boilerplate NERC language in both documents, which you might or might not care to read (the Preface and Preamble sections at the beginning of both documents, and the Metrics section at the end).

I hope you enjoy these documents, and I’d love to hear any comments you have on either one.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] North American Electric Reliability Corporation, the organization that develops and audits the NERC Reliability and Security standards, including the NERC CIP (Critical Infrastructure Protection) standards. NERC is the Electric Reliability Organization chosen by FERC, the Federal Energy Regulatory Commission, in accordance with Section 215 of the Federal Power Act (added by the Energy Policy Act of 2005). FERC provides the regulatory “muscle” behind the NERC standards.