Thursday, October 2, 2025

NERC’s BCSI webinar was what I feared it would be

Note from Tom:

I have moved to Substack as my primary blogging platform. Please read the post here and join me as either a free or paid subscriber.

 

Tuesday, September 30, 2025

Is fully automated vulnerability management even possible?

Note from Tom:

I have moved to Substack as my primary blog platform. If you want to see all my new posts, as well as my 1200+ legacy posts dating from 2013, please support me by becoming a paid subscriber to my Substack blog. The cost is $30 a year. Thanks!

 

A few years ago, I came to realize two things:

a.      There is a huge number of identified CVEs (now over 300,000); and

b.      Some organizations have literally millions of intelligent devices that utilize software and firmware that can harbor vulnerabilities.

Thus, the only way for all but the smallest end user organizations to manage software vulnerabilities in all those devices is to have a fully automated process. Below are the high-level tasks[i] that I believe constitute vulnerability management:

One-time tasks (repeated periodically):

1.      Develop a CVE[ii] risk score based on information about the CVE, including a) presence or absence of the CVE in CISA’s KEV Catalog, b) the CVSS Base score[iii], c) the current EPSS score[iv], d) values of components of the CVSS vector string, etc. The organization needs to decide how each of these scores – along with any other information it deems important, such as a measure of a particular asset’s criticality to the organization’s mission – is incorporated into the CVE risk score (see the sketch after this list).

2.      Identify a score that is the “do not fix” threshold. The organization will normally not apply a patch for a vulnerability whose risk score is below that threshold, although there will always be cases in which the vulnerability needs to be patched regardless of its score.

3.      Set up a database in which patches received from software or firmware providers are cross-referenced with the products reported to be vulnerable to the CVE(s) each patch addresses. For example, if the record for CVE-2025-12345 lists product A version 2.0 and product B version 3.7 as vulnerable to the CVE, the user should be able to search the database for either product/version and learn that it is vulnerable to that CVE. Of course, this capability is built into most asset and vulnerability management tools.
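To make tasks 1 and 2 a little more concrete, here is a minimal sketch (in Python) of what a composite risk score and “do not fix” threshold might look like. The weights, the KEV “floor”, and the threshold value are purely illustrative assumptions, not recommendations; every organization will need to choose its own formula.

```python
from dataclasses import dataclass

@dataclass
class CveInfo:
    cve_id: str
    cvss_base: float          # 0.0-10.0, from the CVE record
    epss: float               # 0.0-1.0, latest daily score from FIRST.org
    in_kev: bool              # is the CVE in CISA's KEV Catalog?
    asset_criticality: float = 1.0   # optional org-specific multiplier

# Illustrative parameters only -- every organization must pick its own.
CVSS_WEIGHT = 0.4
EPSS_WEIGHT = 0.6
KEV_FLOOR = 9.0               # treat any KEV-listed CVE as near-critical
DO_NOT_FIX_THRESHOLD = 4.0    # task 2: the "do not fix" threshold

def risk_score(cve: CveInfo) -> float:
    """Blend severity (CVSS) and exploitation likelihood (EPSS) into one 0-10 score."""
    score = CVSS_WEIGHT * cve.cvss_base + EPSS_WEIGHT * cve.epss * 10.0
    score *= cve.asset_criticality
    if cve.in_kev:            # known-exploited vulnerabilities override the blend
        score = max(score, KEV_FLOOR)
    return min(score, 10.0)

def worth_patching(cve: CveInfo) -> bool:
    return risk_score(cve) >= DO_NOT_FIX_THRESHOLD
```

Even this toy version shows where the judgment calls hide: someone has to decide the weights, the KEV override, and the threshold, and those decisions embody the organization’s risk appetite.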

Ongoing tasks (repeated daily, if possible):

1.      To the extent possible, identify the product name and version number of every software and firmware product in use by the organization.

2.      To the extent possible, identify a machine-readable software identifier – usually CPE or PURL – for each product/version in the inventory.

3.      Search the National Vulnerability Database (NVD) and other vulnerability databases for vulnerabilities that apply to any of these product/versions (see the sketch after this list).

4.      Assign a risk score to each vulnerability identified, based on the previously agreed-upon formula.

5.      Discard any CVE whose risk score is below the “do not fix” threshold.

6.      Match each patch received from a software producer with the product/version(s) in the inventory to which that patch applies.

7.      Prioritize application of applicable patches according to the amount of risk mitigated by each patch, as measured by its risk score.

8.      Apply patches in order of priority, if possible.

9.      If a patch cannot be applied when it normally would be, determine what alternate mitigation(s) for the vulnerability or vulnerabilities addressed in the patch might be implemented. Leave those mitigations in place until the patch can be applied.
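As an illustration of ongoing tasks 3 through 5, here is a minimal sketch that queries the NVD’s public CVE API for one CPE name. The endpoint and the cpeName parameter come from the NVD API 2.0 documentation; the response-parsing details are my assumptions about the current format, and real code would need an API key, paging, and error handling.

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cves_for_cpe(cpe_name: str) -> list[str]:
    """Return the IDs of all CVEs whose records list this exact CPE name.

    Note the fragility discussed later in this post: if cpe_name doesn't
    exactly match the CPE name the NVD created for the product, the result
    is an empty list -- indistinguishable from "no known vulnerabilities".
    """
    resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"]
            for item in resp.json().get("vulnerabilities", [])]

# Hypothetical inventory entry, for illustration only:
for cve_id in cves_for_cpe("cpe:2.3:a:openbsd:openssh:9.6:*:*:*:*:*:*:*"):
    print(cve_id)      # each of these would then be scored and prioritized
```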

Until fairly recently (probably last year), I thought that developing a tool to perform all these steps automatically – i.e., by just setting some parameters and hitting “Start” – shouldn’t be hard. I was quite surprised when I realized there are no such tools, although there are lots of excellent tools that automate parts of this process (for example, the open source Dependency-Track is used over 25 million times a day to look up open source components listed in software bills of materials (SBOMs) in the Sonatype OSS Index vulnerability database). I was even more surprised to learn that no one seems to be working feverishly to be the first to produce such a tool.

At first, I attributed this lack of activity to security tool vendors not being aware of how important vulnerability management is. However, after at least a decade of major attacks that were enabled by unpatched software vulnerabilities - especially ransomware attacks - it became clear that there must be some other reason for this lack of tools.

Now the reason is clear as day to me: There are major obstacles, especially regarding data quality, that inhibit almost every step of the vulnerability management process. Of course, data quality is a problem in lots of disciplines; for example, economic data are often flawed and are never completely reliable (I used to work for an econometric forecasting firm where the unofficial motto was “We show our forecast values to six significant digits just to show we have a sense of humor”).

However, the data quality problem in vulnerability management is of such a magnitude that in many cases, trying to completely automate vulnerability management would most likely leave an organization thinking they were much better protected than they really were, due to all the false negative vulnerability information they would receive. Let’s look at four of the above vulnerability management tasks, and consider just one of the obstacles each task might face:

One-time tasks:

Task 1: All four of the data sources I listed in this task (with the possible exception of presence in the KEV Catalog) have been the subject of many articles and blog posts arguing that they are worthless for determining the true risk posed by a software vulnerability. That doesn’t mean CVSS, EPSS, and KEV are in fact worthless; it’s certain that they have value if they’re utilized properly. However, an automated tool is unlikely to appreciate the nuance of “proper” utilization, which makes these scores a shaky foundation for measuring risk in a fully automated tool.

But what other risk data are publicly available and easy to access? Not much, so even though lots of security professionals understand that for example CVSS scores don’t give a true indication of risk, they stick with them as their measure of risk because nothing better is readily available. It’s like the man who walks out of his house at night and sees his neighbor on his hands and knees under the streetlight, searching for something:

Man: “What are you looking for?”

Neighbor: “My car keys.”

Man: “Where did you have them last?”

Neighbor (gesturing toward dark front lawn): “Over there.”

Man: “So, why are you looking for them here?”

Neighbor: “The light’s better here.”

 

Task 2: There is no good way to calculate the optimal “do not fix” threshold, since there’s no way to assign measurable values to the costs and benefits of using any particular value as the threshold, let alone comparing multiple values.

I suspect that most organizations say (in effect), “Our budget for patching vulnerabilities is $X. If we try to patch every vulnerability we receive (equivalent to setting the threshold at zero) for all of the software products we utilize, we will spend some huge multiple of that. On the other hand, if we set the threshold at the maximum value (meaning we won’t apply any patch we receive), we will greatly increase our likelihood of being hacked. What’s the threshold value that will allow us to stay within our budget, yet still apply what an auditor will believe is a reasonable percentage of the patches we receive?”

This reasoning makes sense in the world in which we live, although it doesn’t make sense in an ideal world in which cybersecurity measures need to be as comprehensive as possible, cost be damned. But what organization on the planet – outside of, perhaps, the military or a nuclear safety agency – could possibly even consider following the latter course?

Ongoing tasks:

Task 2: I have written a lot of blog posts on the two most widely used machine-readable software identifiers: CPE and PURL. While PURL is a very accurate identifier, it currently can only identify open source software distributed through package managers – which is not how most government and private organizations obtain the software they use. CPE, on the other hand, is a very unreliable identifier, due in part to reasons identified on pages 4-6 of this white paper by the OWASP SBOM Forum; however, it does identify commercial software.

Thus, for commercial software, there is no reliable machine-readable identifier (although there is a proposal on the table to extend PURL to cover commercial as well as open source software). This means that looking up a commercial product in the NVD, or another database based on the NVD, is unlikely to display a CVE if the CPE included in the CVE record doesn’t exactly match the one being searched for – even if both CPEs refer to exactly the same product. In other words, searching for vulnerabilities in the NVD yields an unacceptable number of false negative findings.
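The difference between the two identifiers is easy to see in code. A PURL is derived mechanically from the package manager coordinates the user already has, so two parties will always construct the same string; a CPE name has to be guessed, because its vendor and product fields were chosen by someone else. A minimal sketch (packageurl-python is the PURL reference implementation; the CPE strings below are invented for illustration):

```python
# pip install packageurl-python
from packageurl import PackageURL

# PURL: built from facts the user already knows, so it is deterministic.
purl = PackageURL(type="pypi", name="requests", version="2.31.0")
print(purl.to_string())        # pkg:pypi/requests@2.31.0

# CPE: the vendor and product fields must be guessed. All of these are
# plausible names for the same hypothetical commercial product; at most
# one will match the CPE name in the NVD record.
guesses = [
    "cpe:2.3:a:examplesoft:widget_pro:4.2:*:*:*:*:*:*:*",
    "cpe:2.3:a:example_software:widget_pro:4.2:*:*:*:*:*:*:*",
    "cpe:2.3:a:examplesoft_inc:widgetpro:4.2:*:*:*:*:*:*:*",
]
```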

Task 3: Besides the problem with CPE just mentioned, there is an even greater source of unreliability in vulnerability databases based on the CPE identifier (primarily the NVD, of course). The additional unreliability arises as follows (a toy demonstration follows this list):

a.      As stated above, to locate a software product in the NVD, a fully automated process requires that the CPE name entered exactly match the CPE name of that product in the NVD.

b.      As I’ve just pointed out, CPE is an inherently unreliable identifier. This is primarily because of two fields in the CPE spec: “product” and “vendor”. Both products and vendors go by many names at different times and in different contexts. A friend of mine once asked people who worked for Microsoft the name of the company they worked for; she received over 20 different answers.

c.      The only way for a fully automated process to find a product in the NVD is to create a CPE name that follows the CPE specification, then search on that name. CPE names are originally created by the NVD, which employs contractors to perform this task (the NVD is part of NIST, which is part of the Department of Commerce). When a contractor creates a CPE name (which currently happens for only about half of new CVE records), they have to make choices like whether the vendor name should be “Microsoft”, “Microsoft Europe”, “Microsoft, Inc.”, “Microsoft Inc.”, etc. Each of these options produces a different CPE name from all the others. The NVD has no official guide to making these choices, so the contractor just takes their best guess regarding which to use in a particular CPE name.

d.      Thus, when an automated tool searches for a Microsoft product in the NVD or one of the other databases based on the NVD, it needs to guess which choice the contractor made for the vendor name. Of course, if there are one hundred options for Microsoft’s name (and I’m sure there are many thousands, when you consider companies Microsoft acquired, software versions in other languages, name changes for marketing purposes, etc.), the tool will have only a 1-in-100 likelihood of making the right choice.

e.      Even worse, when the searcher’s CPE name for the product doesn’t match the one the NVD contractor created (perhaps years ago), the searcher will receive the message, “There are 0 matching records”. This is the same message the searcher would receive if they had guessed the correct CPE name, but no vulnerabilities had been reported for that product. Of course, most people will take the more favorable interpretation and assume that no vulnerabilities have been reported in the product. Because of this, they may not apply a patch issued for the product, since they think it’s not needed. This is probably the biggest danger posed by false negative vulnerability findings.
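Here is the toy demonstration promised above. Everything in it is invented; the point is only that an exact-match lookup returns the identical answer for “wrong CPE name” and “genuinely no reported vulnerabilities”:

```python
# A stand-in for a CPE-keyed vulnerability database like the NVD.
nvd = {
    "cpe:2.3:a:microsoft:word:16.0:*:*:*:*:*:*:*": ["CVE-2025-11111"],  # invented
}

def lookup(cpe_name: str) -> list[str]:
    return nvd.get(cpe_name, [])   # exact string match, as in the real NVD search

# The searcher guesses a slightly different vendor string:
print(lookup("cpe:2.3:a:microsoft_inc:word:16.0:*:*:*:*:*:*:*"))    # [] -- wrong name
# A product with genuinely no reported CVEs returns exactly the same thing:
print(lookup("cpe:2.3:a:examplesoft:clean_app:1.0:*:*:*:*:*:*:*"))  # [] -- truly clean
# An automated tool cannot tell these two empty results apart: a false negative.
```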

What’s the answer to the question in the title of this post? Just based on the obstacles I’ve pointed out in this post, the answer is clearly “No”. There are a lot more obstacles to discuss in future posts. 

 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com or comment on this blog’s Substack community chat.

I’m now in the training business! See this post for more information.

[i] There could be a lot more steps in this list, depending on what a vulnerability management program applies to.

[ii] There are other types of vulnerability identifiers than CVE, including OSV, GitHub Security Advisories (GHSA), CISA’s ICS Advisories (ICSA), etc. However, there are far fewer of each of these other types than there are CVEs; plus, the other types are often directly mapped to a CVE, meaning that a comprehensive CVE management program is likely to cover them anyway.

[iii] The CVSS score is composed of three Metric Groups: Base, Temporal and Environmental. The CVSS scores published in the NVD just include the Base group. Temporal group metrics change over time, while Environmental metrics tailor the score to a specific user's context. If an organization wishes to use the latter two metric groups, they will normally have to create and include them in the CVSS scores themselves.

[iv] Since the EPSS scores for all 300,000+ CVE records are updated daily by FIRST.org, it is always a good idea to use the most recent score available.
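For what it’s worth, pulling the current day’s score is a one-call affair against FIRST.org’s public EPSS API. A minimal sketch, assuming the response format documented at the time of writing:

```python
import requests

def latest_epss(cve_id: str) -> float | None:
    """Fetch the most recent EPSS score for one CVE from FIRST.org."""
    resp = requests.get("https://api.first.org/data/v1/epss",
                        params={"cve": cve_id}, timeout=30)
    resp.raise_for_status()
    rows = resp.json().get("data", [])
    return float(rows[0]["epss"]) if rows else None

print(latest_epss("CVE-2021-44228"))   # Log4Shell; score near the top of the scale
```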

Friday, September 26, 2025

Upcoming webinars on NERC CIP and Vulnerability Management

Since my new Substack blog is off to a good start and most of my readers have switched to reading posts there, starting on October 15 I will no longer put up new posts on this site (Blogspot). If you are a subscriber to this site, you have been automatically provided a free subscription on Substack; you should be receiving an email from both sites every time I put up a new post. As a way to show your support for me, and also to gain access to the 1200+ legacy posts (written between 2013 and this past August) on the Substack site, please consider upgrading your free subscription to a paid one for $30 per year, or a founding subscription for $100 for the first year.

If you aren't now receiving the posts by email from Substack, that means you're not a subscriber there. Please go to the link above and sign up as a free, paid or founding subscriber.


I have recently signed up as a Speaker for the fast-growing compliancewebinars.com site; I will be delivering live and recorded webinars on NERC CIP and on vulnerability management (and perhaps other topics in the future). Here is how it works:

1.     I have initially committed to delivering ten different webinar topics, which are listed below, with a link for more information and to sign up. A webinar is tentatively scheduled for each of those topics. All webinars are 90 minutes long, including 15 minutes of Q&A.

2.     At all the links below, you can enroll in the webinar described at the link, delivered at the scheduled time. You can sign up as an individual or as a group (multiple people from one organization).

3.     The members of a group (usually a company) will be able to attend a webinar from different connections (i.e., they don’t all have to be gathered around one screen).

4.     Each webinar listed below is currently scheduled, but it will only be presented if there are three or more attendees signed up. Note that a group counts as one attendee. So, if you have a group of four or more that wants to attend a webinar, it would be smart to have two members of the group sign up as individuals. That way, the webinar will definitely be held.

5.     Of course, any money you have paid will be refunded if the webinar isn’t held; you will also be offered a voucher good for any other webinar (presented by any Speaker, not just me) over the next year.

6.     You can also sign up for a link to the recording of a webinar. The link will be for you only, but you can view the recording as many times as you want. Another option is to sign up for a “Combo” of both the live webinar and the recording.

7.     When a group signs up for a recording, all members of the group can view it through a link with a password. The password will expire in six months.

I hope to see you at one of the webinars! Please email me if you have any questions.

 

Webinars on NERC CIP:

Introduction to NERC CIP

History and Future of NERC CIP

NERC CIP-013, the Supply Chain Cyber Risk Management Standard

NERC CIP and the Cloud

NERC CIP-011, BCSI and the Information Protection Program

Introduction to the North American Electric Reliability Corporation (NERC)

The six NERC Regional Entities (REs)

Introduction to the Federal Energy Regulatory Commission (FERC)

 

Webinars on Vulnerability Management:

What is CVE?

What is the National Vulnerability Database (NVD)?

 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com or comment on this blog’s Substack community chat.

Tuesday, September 23, 2025

FERC orders major changes in CIP-013

 

Note from Tom:

I have moved to Substack as my primary blog platform. If you want to see all my new posts, as well as my 1200+ legacy posts dating from 2013, please support me by becoming a paid subscriber to my Substack blog. The cost is $30 a year. Thanks!

 

Last Thursday, FERC released two new orders, related to CIP-013 and CIP-003 respectively. I’ll discuss the CIP-003 order in a future post. Before I discuss the CIP-013 order, which is officially a Final Rule, here’s some important background information:

1.      FERC ordered NERC to develop a “supply chain cyber security risk management standard” in July 2016. They gave NERC only one year to develop and fully approve the new standard. Since just approval of a new CIP standard usually requires a full year (including four ballots by the NERC Ballot Body, plus a comment period of more than a month between each pair of ballots), you can see this is lightning fast by NERC standards. FERC did this because they could see that supply chain security attacks – of which there had been very few at that time – would soon rapidly increase, both in numbers and magnitude.

2.      FERC turned out to be right about the attacks (think SolarWinds, Kaseya, XZ Utils, and more). However, to meet FERC’s unrealistic one-year goal, the NERC Standards Drafting Team (SDT) had to make CIP-013-1 as bland as possible, so they could obtain the necessary expedited approval by the NERC Ballot Body.

3.      CIP-013-1 became enforceable on October 1, 2020. FERC auditors, who reported in 2023 on CIP-013 audits they had led, stated (on page 17), “While the requirement language obligates entities to develop a supply chain risk management plan with various characteristics, it does not mandate that the plan apply to contracts that were in effect at the effective date of the Reliability Standard.” Since the most important OT contracts are almost always multi-year, this meant that some of the most important supply chain risks were originally deemed by many NERC entities to be out of scope for CIP-013-1 compliance, even though there was nothing in the wording of CIP-013-1 Requirement R1 suggesting that risks from existing suppliers should (or even could) be ignored.

4.      FERC’s 2023 report continued, “Additionally, the standard does not require the plan to incorporate responses to the risks identified by the entity in the plan except in limited circumstances (Requirement Part 1.4.2[i]).” This was the most mystifying omission in CIP-013-1: it requires the entity to “identify and assess” supply chain risks, but not to do anything about the risks they’ve identified. It’s as if you saw a runaway truck barreling toward you in the street but decided not to move out of the way, since the law doesn’t require you to. I wrote about this issue in several posts in 2019 and 2020, culminating in this one.

FERC’s NOPR

In September 2024, FERC issued a Notice of Proposed Rulemaking (NOPR) that laid out their concerns about CIP-013-1 and CIP-013-2 (version 2 added EACMS and PACS to the scope of CIP-013-1 but left the rest of the standard unchanged). I wrote this post about the NOPR. Since it’s a long post (surprise, surprise!), I recently boldfaced three paragraphs that I consider especially important. However, the whole post is still quite relevant, if you have time to read it.

In the NOPR, FERC identified two problems they were considering ordering NERC to correct. The last sentence of paragraph 2 on page 3 describes these problems as, “(A) sufficiency of responsible entities’ SCRM[ii] plans related to the (1) identification of, (2) assessment of, and (3) response to supply chain risks, and (B) applicability of SCRM Reliability Standards to PCAs[iii].” Indeed, the Final Rule released last week requires NERC to act in both of those areas.

What actions did FERC require NERC to take? To take item (B) first, there weren’t any objections (in the filings received by FERC in response to the NOPR) to including PCAs in the scope of CIP-013-2. This was probably because

       i.          Since a PCA is always installed on the same routable protocol network as a component of a BES Cyber System (BCS), common sense seems to dictate that it should receive the same level of protection as the BCS; and

      ii.          Often, the same type of asset - such as an electronic relay - might be a BCS component in one case, but not in another. The NERC entity is unlikely to use a different supplier for relays in the first case than in the second, so adding PCAs to the CIP-013 scope will not usually add much to an entity’s compliance workload.

Regarding item (A), FERC lists three NERC actions that they considered ordering, although in fact they only ordered two of them in their Final Rule last week:

1. To address the issue of NERC entities not identifying risks due to suppliers whose contracts predated the effective date of CIP-013-1 (October 1, 2020), FERC considered (and asked for comments on) the idea of setting a maximum time between risk assessments for a vendor – for example, a NERC entity would need to repeat a risk assessment for each vendor every three years. Thus, if a vendor’s contract had started on, say, September 1, 2023, the entity would need to conduct a new risk assessment of that vendor by September 1, 2026.

This idea drew criticism from some commenters, but FERC decided to go ahead with it anyway. However, they changed it in one important way: They will let the Standards Drafting Team leave it to the individual NERC entity to decide what interval between risk assessments is appropriate. FERC also said the SDT could allow the entity to identify more than one interval, based on criteria they identify (for example, risks for software products are likely to change more rapidly than risks for devices. This means the mandatory assessment interval for a software product might be shorter than the interval for a device).

2. In their NOPR, FERC suggested they might require NERC entities to verify cybersecurity information that a supplier provides, for example, in response to a questionnaire from the entity. Since this would be almost impossible for a NERC entity to do without having to pay big bucks to get a major auditing firm to audit the supplier, I wasn’t surprised that the comments FERC received on this issue were mostly negative. I was also pleased that FERC realized this was a bad idea and dropped it.

3. The third and final question on which FERC’s NOPR solicited comments was “whether and how a uniform documentation process could be developed to ensure entities can properly track identified risks and mitigate those risks according to the entity’s specific risk assessment.”[iv]

CIP-013-2 Requirement Part R1.1 states that the Responsible Entity must “identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services…” Notice that the word “mitigate” was left out here, even though it should have been included (the Purpose of CIP-013, listed in item 3 on the first page of the standard, is “To mitigate cyber security risks to the reliable operation of the Bulk Electric System (BES) by implementing security controls for supply chain risk management of BES Cyber Systems”).

I’ve heard that more than a few NERC entities, whether or not they noticed that “mitigate” was missing from Part R1.1, acted as if it were. That is, they diligently sent out security questionnaires to suppliers, but then either a) paid little or no attention to the responses, or b) evaluated the responses and noted which were unsatisfactory, yet never contacted the supplier to find out when and how they would mitigate that risk.[v]

As you may have guessed, FERC’s Final Rule mandates that the revised version of CIP-013 (which will be CIP-013-3, of course) require NERC entities to identify, assess and mitigate vendor risks. However, FERC worded this a little differently. They stated (in paragraph 53 on page 33), “…we adopt the NOPR proposal and direct NERC to develop and submit for Commission approval new or modified Reliability Standards that require responsible entities to establish a process to document, track, and respond to all identified supply chain risks.”

In other words, instead of mandating that CIP-013-3 require NERC entities to “identify, assess and mitigate” supply chain risks to BCS, EACMS, PACS and (soon) PCAs, FERC wants entities to “document, track and respond” to those risks. In practice, these amount to the same thing. Nobody can document and track a risk without also identifying and assessing it. Moreover, responding to a risk is the same as mitigating it, since responding by doing nothing at all about a risk will no longer be an option in CIP-013 compliance.

Risk identification

As I’ve already implied, I consider CIP-013-1 and CIP-013-2 to have been largely a failure, at least relative to what they might have been if FERC had given NERC more time to develop and approve the standard. Fortunately, FERC is giving NERC 18 months to develop and approve CIP-013-3; I consider that an improvement, although 24 months would have been even better. This is because CIP-013 was the first of what I call risk-based CIP standards (NERC calls these objectives-based standards, which in my opinion amounts to almost the same thing). Since the drafting team in 2016 didn’t have the time to consider what should be in a risk-based standard, the new drafting team will need to do that.

CIP-013 was risk based because FERC’s Order 829 of July 2016 specifically called for a “supply chain risk management” standard (my emphasis). FERC contrasted this with a “one size fits all” standard - i.e., a prescriptive standard that doesn’t take account of risk at all. Of course, in 2016 and even today, most of the CIP requirements are prescriptive, not risk based. But I want to point out that there doesn’t seem to be any real debate in NERC CIP circles anymore about which type of requirement is better. Cybersecurity is a risk management discipline, so cybersecurity standards need to be risk based. In fact, since CIP version 5 came into effect in 2016, literally every new CIP standard and requirement has been risk based.

The biggest problem with the first two versions of CIP-013 is they require the NERC entity to “identify” supply chain cybersecurity risks, but they provide no guidance on how to do that. This means that a NERC entity could literally comply with CIP-013 R1.1 – which requires the NERC entity to create a supply chain cybersecurity risk management plan – by writing “Supply Chain Cybersecurity Risk Management Plan” at the top of a sheet of paper, then writing below that, “We couldn’t identify any supply chain security risks at all; therefore, we have not taken any further steps required by this standard.” It would be very hard for the auditors to issue a “potential non-compliance” (PNC) finding for this.[vi]

The comments cited in FERC’s order last week make it clear that NERC entities want guidance on how to “identify” supply chain security risks. Since such guidance was missing in CIP-013-1 and CIP-013-2, I’m sure the legal staff at utilities told the CIP compliance people not to identify any risks beyond those behind the six mandatory controls listed in CIP-013 Requirement 1 Part 1.2. Of course, it’s a cardinal rule of regulatory legal practice not to expose the company to compliance risk beyond what is necessary.

I’ve been talking with CIP compliance people since 2008, when FERC approved CIP version 1, and I can attest that most of them would rather comply with challenging CIP requirements than with watered-down CIP requirements or no CIP requirements at all. However, lawyers must protect the company (or the cooperative or the municipal utilities department); that requires reducing the compliance footprint as much as possible, which in turn precludes going beyond the strict wording of the requirement. By the same token, NERC CIP auditors can’t audit a requirement that just says the entity must “identify risks” without giving any guidance on how that can be done. In other words, CIP-013 R1 was foreordained to fail.

The new drafting team can break this cycle by adding some “meat” to the risk identification bones of the current CIP-013-2 R1.1. What the team shouldn’t do is choose an existing risk management framework like NIST 800-161r1, “Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations” and require NERC entities to “comply” with it. Risk management frameworks are meant to cover the waterfront, as one glance at 800-161 will reveal. Only the largest organizations could dedicate the resources required to address all the provisions in that framework.

Instead, I think the new SDT should make it clear that the Responsible Entity needs to identify (say) ten important supply chain cybersecurity risks and just focus on those. In fact, they might suggest that the entity choose ten important risks from the North American Transmission Forum’s (NATF) Supply Chain Security Criteria, which is my favorite “risk registry” for OT[vii] supply chain risks to the electric power industry.

While I used to think differently, I now realize it’s impossible to even itemize all the most important supply chain security risks, let alone mitigate a good portion of them. If you don’t currently maintain and utilize a supply chain security risk registry, it’s better to start somewhere than to wait until your organization has the resources and knowledge to develop a comprehensive risk management program, which may never happen.

Fortunately, since it will be a couple of years before taking this approach is mandatory, you have some time to try this out. You also have an incentive, since if you work for a NERC entity with a high or medium impact BES environment, you’re supposed to review and update your CIP-013 supply chain cybersecurity risk management plan every 15 months. Instead of keeping your plan as is, why don’t you find 5-10 risks that you can identify, assess and mitigate? At least, if you make a mistake, you won’t be facing a PNC (potential non-compliance) finding.

I’ll be glad to discuss this with you. Please drop me an email.

 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com or comment on this blog’s Substack community chat. If you would like to join my CIP Cloud Risks Working Group described in the italicized paragraphs at the end of this post, please email me.


[i] There is no Requirement Part 1.4.2 in CIP-013-1, but there is a Requirement Part 1.2.4. This might have been what FERC was referring to.

[ii] Supply chain risk management.

[iii] PCA stands for “Protected Cyber Asset”. These are Cyber Assets that don’t meet the definition of BES Cyber Asset, but which are installed on the same routable network as a BCA, or any component of a BES Cyber System. If compromised, they could be used to launch an attack on the BCS components on the same network.

[iv] Quotation from paragraph 49 on page 30 of FERC’s Final Rule.

[v] There’s a third possibility here: A NERC entity accomplished a) and b), meaning they got the supplier to confirm they had a security deficiency and promise to fix it. However, they never c) contacted the supplier again – more than once if necessary - to make sure they followed through on their promise. It’s important that a NERC entity accomplish all these steps for every supplier risk that was identified through a questionnaire, audit, news story, etc.

[vi] I’m sure this scenario hasn’t happened and won’t happen. However, I hear there are many NERC entities whose plan consists entirely of putting in place the six controls listed in CIP-013 Requirement 1 Parts R1.2.1 to R1.2.6. Since those six controls are mandatory, many NERC entities decided they were the only risks they needed to identify in their supply chain cyber risk management plans, when in fact that wasn’t why they were included in the standard. Those six controls were called out at random places in FERC’s 2016 order; the drafting team just gathered them into one Requirement Part. They were never meant to constitute the entire set of supply chain cybersecurity risks that NERC entities need to include in their plans.

[vii] For CIP-013 compliance, it’s important to identify OT supply chain risks. What I call “IT risks” (like those found in the NIST CSF and NIST 800-161) focus on protecting confidentiality of data. While that’s very important for banking systems, it’s not important for power industry OT systems, where availability and integrity are much more important.

Friday, September 19, 2025

What could possibly go wrong?

 

Note from Tom:

I have moved to Substack as my primary blog platform. If you want to see all my new posts, as well as my 1200+ legacy posts starting in 2013, please support me by becoming a paid subscriber to my Substack blog. The cost is $30 a year. Thanks!

The “Links” section of Dale Peterson’s weekly newsletter today contained this bullet point: “MITRE’s Project Homeland is trying to map US critical infrastructure.” Even though mapping critical infrastructure is a worthwhile goal that could bring lots of benefits, I must admit that when I saw this, a bunch of red flags immediately appeared in my field of vision. After all, wouldn’t a map of US critical infrastructure be an early Christmas present for Vladimir Putin, Xi Jinping, and Kim Jong-Un?

I started to read the article, expecting to be quickly reassured that the leaders of this project, the MITRE Corporation (whom I praised in my post just yesterday – for something completely different, of course), have security considerations firmly in mind and are going out of their way to protect this treasure trove of critical infrastructure data. I was reassured when I read the second and third paragraphs:

“As MITRE’s senior principal scientist, Philp has spent four years working to understand how America’s critical infrastructure systems are interconnected and where they’re most vulnerable.

“We’re more at risk today than we were in 2001,” said Philp, who has spent much of his career working on infrastructure vulnerability assessments. “The question is, with less money, how do we reduce the greatest amount of risk?””

However, I was soon disappointed. Here are some further quotes, in the order in which they appear in the article (my comments are in italics).

What emerged was something unprecedented: a spatial knowledge graph that could power dynamic visualizations showing exactly where critical infrastructure exists, how it’s all connected, and where those connections create the greatest vulnerabilities. (my emphasis)

*  *  *

“The sheer number of infrastructure points and the intricate web of connections among them were staggering…The graph revealed not only the complexity but also enabled staff to see each entity, such as a hospital, in isolation related to its dependency on water and power.”

When you’re talking about power connections, you need to be quite clear about what you mean. You could say that, within each Interconnect in North America (the four are the Eastern Interconnect, the Western Interconnect, ERCOT – which covers a large part of Texas – and Quebec), every power source, no matter how mighty, is “connected” to every residence, no matter how humble.

Of course, if you include each of those connections in your map, or even just the major ones, the map will be close to black with power connections. However, if you ask the really important question, “How many hospitals will lose power – or at least have to go on backup generation – if there’s a total outage at Grand Coulee Dam (the largest power source in North America)?”, the answer should usually be “None”.

This is because each Interconnect has lots of redundancy built into it. It’s the job of the ISOs/RTOs and the Reliability Coordinators to make sure that, at literally every second of the day and night, there are backup power sources (and preferably backups of backups) ready to cover for every possible contingency – such as a power plant unexpectedly going down at that moment. Utilities are closely monitored for how good a job they do of keeping the lights on.

On the other hand, there’s certainly some combination of power sources whose loss would bring down a substantial number of, say, hospitals in one of the Interconnects. If you’re trying to cause such an event, MITRE’s map would probably be very helpful.

*  *  *

The map and graph together shed light on not just infrastructure networks but also human networks such as the highly skilled workers who maintain the infrastructure. The graph can reveal who works with whom, while the map shows where they work and can even track their location in real time.

*  *  *

The team gathered detailed data about critical infrastructure and then used graph data science tools in ArcGIS Knowledge to analyze dependencies, revealing the web of vulnerabilities from the national scale down to individual city blocks. In Fort Lauderdale, for example, the system could show how a flood affecting one neighborhood’s electrical substation might upset water treatment systems, hospitals, and emergency services across the region.

Of course, the effects of a flood in a substation would be similar to those of a cyber or physical attack on the substation. The most chilling example of the latter is the Metcalf attack.

My guess is that, if someone writing the article had asked MITRE what risks the map itself might pose, they would have been assured that the risks were very low, since each of these assets is very well protected against both cyber and physical attacks. Moreover, the map doesn’t reveal IP addresses, firewall types, or any other information that could be used to launch an attack on one or more assets.

That is most likely true, but it completely misses the main point: The map itself, if it fell into the wrong hands, might be a great tool for plotting a massive physical or cyber attack on the grid. For example, you might use the data from the map to answer the question, “Which generating facilities and substations would we need to take out, to bring down most of the hospitals in City X?”[i] I’m sure there’s not enough data to get an exact answer to this question, but at least the map will put you on the road to having that answer.

Were there any statements in the article that warned of the dangers of gathering so much critical infrastructure information in one huge map? Not even one. The closest thing to a warning that I found was this: “MITRE needs cutting-edge technology from trusted partners—like Esri—that are committed to protecting sensitive customer data.” This isn’t a warning about the map at all, but just a pledge to protect sensitive data of the users of Esri’s software.

I’m not saying that MITRE should abandon this project, since the map will be incredibly useful in the case of physical disasters like hurricanes. But they obviously need to start thinking about how they’ll protect access to the map itself, not just “sensitive customer data”. This isn’t a map of risks; rather, the map itself is the risk.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com or comment on this blog’s Substack community chat.


[i] Why would someone want to execute such an attack? Certainly, a terrorist might want to. But what’s often overlooked is the opportunity to make money in financial markets by short selling, for example, healthcare stocks or municipal bonds before launching an attack like that.

Thursday, September 18, 2025

Thanks, but no thanks, CISA

Note from Tom:

I have moved to Substack as my primary blog platform. If you want to see all my new posts, as well as my 1200+ legacy posts starting in 2013, please support me by becoming a paid subscriber to my Substack blog. The cost is $30 a year. Thanks!

Last week, my friend Patrick Garrity of VulnCheck – the most respected vulnerability researcher cum skateboarder in the world – posted on LinkedIn about a paper CISA had just put out titled “CISA Strategic Focus: CVE Quality for a Cyber Secure Future”. The paper describes what CISA would like to do to improve the CVE Program. For a short 101-level overview of that program, go here.

Since the CVE Program isn’t run by CISA, you may wonder why CISA is concerned about improving it. The answer is that CISA fully funds the program, which is operated by the MITRE Corporation, a nonprofit Federally Funded Research and Development Center, under the direction of the Board of the nonprofit CVE.org. MITRE’s contract costs, as I’ve heard from different sources, somewhere between $44 and $57 million per year.

On April 15, the international vulnerability management community was shaken by a letter sent by MITRE to the members of the CVE.org board; the letter indicated their contract wasn’t going to be renewed by CISA and the program would have to shut down the next day. The letter caused a veritable firestorm of concern and criticism, which immediately bore fruit. By the next day, MITRE announced that everything was hunky-dory again, since the contract had been extended after all.

But everything really wasn’t hunky-dory. The fact that the contract had almost been cancelled, and that CISA was (and still is) planning to cut a lot more people, led me and many others to conclude that it was close to certain the contract won’t be renewed next March (don’t ask me why the renewal was in April this year but will be in March next year; such questions are above my pay grade).

Fortunately, literally at that moment a white knight appeared on the horizon, in the form of a new international nonprofit organization called the CVE Foundation. The Foundation’s Board is composed entirely of longtime members of the CVE.org Board, including Lisa Olsen of Microsoft (an important contributor to the CVE Program, who is also now Executive Director of the Foundation), my friend Pete Allor of Red Hat, and Dave Waltermire of NIST. I was quite impressed when their lineup was finally announced, a few weeks after April 16, the day the Foundation was officially launched.

The Foundation’s board members have all been in the thick of discussions about what’s needed in the CVE Program during the 25 years that CVE records have been reported and disseminated. In fact, one of those board members has been involved with CVE since 1999 (that was the year CVE records started to be disseminated. That year, around 350 CVEs were identified. This year, probably around 45,000 new CVEs will be identified – and those are still just the tip of the iceberg). While the board members haven’t put out a plan for the changes they want to make, I know they’re already working hard on them.

However, there’s something else that the Foundation’s board members have been working on: fundraising. They have been approaching private organizations and government agencies worldwide and are getting a great response. They already have a lot of funds committed; they’re sure they will have more than enough funds available when it comes time to buy out MITRE’s contract next March.

Which brings me back to CISA. They are now making a big effort to make amends for their mistake in April and are campaigning hard to keep the CVE Program under their control. Their document describes a lot of nice things they pledge to maintain or put in place. Here are three of them.

1. Good governance

Without naming the CVE Foundation, CISA’s document attacks the Foundation’s proposal to take over funding and running the CVE Program – in partnership with MITRE. They call this “privatization” and imply that governance would suffer because the Foundation won’t be able to ensure “conflict-free and vendor neutral stewardship, broad multi-sector engagement, transparent processes, and accountable leadership.” Consider the chaotic history of CISA since the new administration came in: the many threats to close the agency entirely, the complete elimination of entire programs (and their staffs) without any attempt to demonstrate why this was necessary, and most importantly the outright hostility exhibited to longtime employees before they were terminated (as if they were doing something wrong just by being employed by CISA). Against that background, this assertion seems a little out of place.

2. “Public good”

The second section of CISA’s document includes these two sentences: “Privatizing the CVE Program would dilute its value as a public good. The incentive structure in the software industry creates tension for private industry, who often face a difficult choice: promote transparency to downstream users through vulnerability disclosure or minimize the disclosure of vulnerabilities to avoid potential economic or reputational harm.”

Essentially, this is saying that accepting money from software companies – along with government agencies from all over the world, nonprofit foundations, and many other types of organizations – will inevitably corrupt the CVE Foundation, since software companies face the “difficult choice” of whether to disclose vulnerabilities.

I don’t deny that software companies face that choice, but I can attest that at least the larger software companies (who produce a huge percentage of all commercially available software products) have almost all made the choice for the side of Virtue, since they’re the biggest advocates for (and funders of) software security. In fact, I’m sure that over 95% of CVE records are generated by either

1.      A CNA that works for the software company (or open source community like the Linux Foundation or GitHub) that developed or supports the software (e.g., Microsoft, Oracle, HPE, Red Hat, Cisco, Siemens, Schneider Electric, etc.), or

2.      A CNA that the developer approached to create the record (usually because the developer is in the CNA’s “scope”).

In fact, on the second page, CISA writes, “Many in the community have requested that CISA consider alternative funding sources. As CISA evaluates potential mechanisms for diversified funding, we will update the community.” Of course, given the extreme pressure on the entire federal government to cut costs as much as possible, it’s quite understandable that CISA would want to look for alternative funding sources. Setting aside the question of whether they would be allowed to accept funding from outside the government (see below), it’s worth noting that one of the most likely prospects to help CISA out is…you guessed it: large software companies.

3. Dump MITRE?

Patrick Garrity pointed out in his LinkedIn post that CISA’s document never mentions MITRE. Patrick speculates this means CISA is considering not renewing MITRE’s contract next March, even though they clearly want to keep the CVE program going. Unfortunately, CISA is deluded if they think they can keep the program going by themselves, let alone improve it. MITRE staffs the whole CVE Program now (along with many volunteers, most notably the 470+ CVE Numbering Authorities). They have been running the program since 1999, when two MITRE researchers came up with the CVE idea and described it at a conference.

The MITRE team reports to the Board of Trustees of the nonprofit CVE.org; that board includes representatives from government (including CISA and the National Vulnerability Database - NVD), as well as private industry (as mentioned earlier, the entire board of the CVE Foundation consists of current members of the CVE.org board). While there are certainly things MITRE has not done well in their many years running the CVE Program, it would be hard to find anyone knowledgeable about the situation who says MITRE’s work hasn’t overall been good, if not excellent.

Of course, I’m sure CISA management thinks they can do better than MITRE at running the program. If they drop the MITRE contract, they will presumably have a lot of money available to lavish on their own people. One of those people was Edward Coristine. He was listed as a Senior Advisor to CISA in February, having been installed by the “Department of Government Efficiency” or DOGE (Coristine had a famous nickname that I can’t repeat here, since this is a family blog).

Mr. Coristine had success in the cybersecurity field while still in high school (which wasn’t long ago, since he was 19 when he was at CISA; he’s either 19 or 20 today). He must be quite good at whatever he does, since his company, DiamondCDN, was complimented by a customer called EGoodly. They posted on Telegram, “We extend our gratitude to our valued partners DiamondCDN for generously providing us with their amazing DDoS protection and caching systems, which allow us to securely host and safeguard our website…”

What kind of company, pray tell, is (or was) EGoodly? They were described by Reuters (in the article linked above) as “a ring of cybercriminals”. Perhaps I’m old-fashioned, but it doesn’t seem to me that someone who has done work for cybercriminals should be installed as a senior advisor to CISA (with access to their most sensitive systems, of course). At the very least, one would expect that CISA’s (and DHS’s) management team would have requested a background check first – and if it was refused, they would have refused to give Mr. Coristine access to any system, except perhaps the cafeteria menu system. But it seems there was no background check.

Of course, I’m sure that CISA management last February and March was under tremendous pressure to do whatever DOGE told them to do. Even if DOGE demanded system access for Vladimir Putin, it would probably have been granted. I guess we can at least be happy that didn’t happen.

Nevertheless, I think this incident alone should disqualify CISA from taking over operation of the CVE Program from MITRE next year. The CVE Foundation is much more qualified, experienced, and connected than whoever happens to be in charge of CISA this month. They will be able to raise much more money than CISA could raise on their own – even if CISA were allowed to raise money from private sector organizations (of course, they’re not. That’s known as bribery). And of course, whatever money CISA has today may very well be gone tomorrow. That’s how things happen in Washington nowadays.

Most importantly, the CVE Foundation will build on MITRE’s vast experience, starting with their “invention” of CVE. I know for a fact they won’t let MITRE just continue to do the same old same old, but I also know for a fact that the MITRE staff members I know are quite motivated to make improvements (and they’re continually making them now); they also don’t want the same old same old. Next year, working with the CVE Foundation, they’ll continue to make improvements, and even pick up the pace. The CVE Foundation is making big plans.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com or comment on this blog’s Substack community chat.