Tuesday, December 7, 2021

Comments by Kevin Perry and Dale Peterson on my last post


Kevin Perry and Dale Peterson both made good comments on my last post, and both raised important issues. Many of you have already read that post, but you may want to revisit it to see their comments (and my replies to Dale's).

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA's Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Friday, December 3, 2021

The latest ransomware attack on critical infrastructure

I was surprised to read this story from ZDNet this morning, describing yet another devastating cyberattack on a critical infrastructure organization, this time an electric utility. Of course, even though the utility, Delta-Montrose Electric Association (DMEA) in Colorado, never used the word “ransomware” in their announcement of the attack, everyone interviewed for the article seemed to think it was a ransomware attack.

But even if it wasn't, what interests me most is the fact that, by the utility's own reckoning, 90% of its internal systems (which I interpret to mean its IT network) were down. Yet the utility says their electric operations weren't affected at all. This strongly suggests that the utility followed a cardinal principle for critical infrastructure: complete separation of the IT and OT networks, so there is no direct logical path by which an infected IT system can infect the OT network.

I know of two other devastating ransomware attacks on critical infrastructure in the US. In both, the IT network was pretty much destroyed by the ransomware, while the OT network wasn't touched by it. Yet in both cases, the OT network had to be completely shut down in the wake of the attack, along with the IT network. How could that happen, if the networks really were separated?

The more recent of these two attacks was, of course, Colonial Pipeline. In that attack, after most or all of the IT network was brought down, the OT network (and therefore the pipelines themselves) was brought down as well. Colonial said at the time that they did this out of the usual "abundance of caution". However, a WaPo editorial pointed out that, with Colonial's billing system down (the system sat on the IT network, a normal practice even in critical infrastructure), Colonial couldn't invoice for gas deliveries.

Even more importantly (and this is a fact I learned from somebody who commented on one of my posts on Colonial – my longtime friend Unknown), since Colonial is a common carrier and doesn't own the gas it delivers, they would literally have been on the hook for the entire cost of all the gas they delivered but didn't invoice, if they'd continued to run their pipeline network. So the OT network had to come down as well, even though it wasn't directly impacted by the ransomware.

The previous attack was in 2018, when a very large US electric utility was hit with a devastating ransomware attack. As with Colonial and DMEA, the IT network was completely down, but the OT network hadn't been directly affected by the ransomware. The IT department decided they had to wipe over 10,000 systems on the IT network and rebuild them from backups. According to two independent sources, the original plan was to leave the two grid control centers (part of the OT network) running during the approximately 24 hours this would take.

However, the utility then decided that if they left the OT network running, they would run the risk that even a single system in the control center might have been infected, and then might re-infect the IT network as soon as the latter was brought back up – meaning they’d have to repeat the entire process of wiping and rebuilding. So they decided they had no choice but to wipe and rebuild the control center systems (about 2,000), including their VOIP phone system.

The result was that for 24 hours, the power grid in a multi-state area was run by operators using cell phones. It was pure luck that a serious incident didn't occur during this time, because power system events usually happen too quickly for humans to react properly; the event would probably have been over long before the control center staff would have been able to diagnose the problem and the solution through phone calls.[i]

In both of these previous attacks, the OT network was logically separated from the IT network, but from a larger point of view it wasn't. In Colonial's case, the problem was that a system that operations depended on (the billing system) sat on the IT network. How could the OT shutdown have been prevented? Clearly, the billing system should have been either on the OT network or (since that might itself have caused problems) on a network segment of its own. The ransomware would never have reached it, it would presumably have continued to operate after the attack, and operations wouldn't have had to be shut down.

And what could the utility in the second attack have done differently, to prevent having to shut down their control centers? It seems to me that the root cause of that shutdown was that the utility didn't trust its own controls, the very controls put in place to prevent exactly the sort of cross-network traffic they were worried might get through.

In case you’re wondering, these were high impact Control Centers under NERC CIP-002 R1. The CIP requirements that govern separation of networks, CIP-005 R1 and R2, should have been sufficient to prevent spread of malware between the networks. However, it’s also very easy to violate those requirements, and my guess is that somebody in IT didn’t want to take the chance that someone had slipped up in maintaining the required controls, no matter how unlikely it was that they had.

I guess the moral of this second story is that, if you're ever in doubt about whether your IT and OT networks are really separated, you should take further steps to remove those doubts. That way, if an incident like this happens to your organization (utility, pipeline, oil refinery, etc.), you'll be able to leave the OT network running without suffering a nervous breakdown.
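One low-cost way to build that confidence is to regularly verify, from a host on the IT network, that nothing on the OT network can be reached. Here is a minimal sketch in Python; the OT addresses and ports are hypothetical placeholders, a real check would cover every OT subnet, and of course any probing that touches OT should be coordinated with the OT staff first:

```python
# Minimal sketch: from a host on the IT network, verify that no test
# connection into the OT network succeeds. The addresses and ports are
# hypothetical placeholders; coordinate any such probing with OT staff.

import socket

OT_HOSTS = ["10.20.0.5", "10.20.0.6"]   # hypothetical OT addresses
OT_PORTS = [502, 20000, 44818]          # Modbus, DNP3, EtherNet/IP

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

leaks = [(h, p) for h in OT_HOSTS for p in OT_PORTS if is_reachable(h, p)]

if leaks:
    print("WARNING: OT endpoints reachable from the IT network:", leaks)
else:
    print("No test connection from IT to OT succeeded.")
```

A clean run doesn't prove the networks are separated (a blocked port says nothing about every possible path), but a failed run proves they aren't, and that's exactly the kind of doubt you want surfaced before an incident rather than during one.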

Thus, the fact that DMEA was able to continue delivering electric power to their customers (although with delayed billing) shows they not only had the required separation between their IT and OT networks, but they also

a)      Didn't have any system on the IT network that OT operations depended on, as Colonial did, and

b)     Had enough controls in place that they didn’t doubt that the two networks were really logically separated.

Good for them!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] This incident was reported in the press at the time, but the utility's announcement said that operations weren't "affected". They weren't affected in the narrow sense that there was no outage. However, the loss of visibility in their control center was itself an "effect" on the grid, and the utility should have reported it to DOE as such.

Tuesday, November 23, 2021

Patrick Miller gets SBOMs (almost) right


Noted power industry cybersecurity consultant Patrick Miller recently put up a blog post consisting of a recorded interview, with transcript, discussing software bills of materials. He did a good job of articulating the most important argument for having SBOMs: they will let software users learn about dangers lurking in the components of the software they run, instead of being totally blind to those dangers.

However, Patrick gets off track when he discusses how the users will actually learn about those dangers, in this passage:

PATRICK:

Effectively what happens is (the users) take (the SBOM) from the manufacturer, they use a tool to compare the two, and they know what's in there. They can say, 'Yes, all the things we expect are in there and nothing else is in there. They didn't insert garlic into my soup and now I'm going to get sick,' for example.

GAIL:

As part of the critical infrastructure, I ask for the SBOM, the manufacturer produces an SBOM, I run my tool against what they've given me, and it comes back a little different. What happens then?

PATRICK:

Well, that's when you have a risk. You have to have the conversation with the vendor to find out: did you actually get what you expected?

In some cases, there are things like file integrity and certificates that can go a long way to helping this. In some cases, it may just be that there's a mismatch in the tools. But that's an interesting conversation for you to have with your vendor and should be the first thing you do if things don't match up. You need to talk to your vendor right away, because things should line up.

I think Patrick is describing a sequence like this:

1.      The user receives software (either new or an update to software they’re already using); the supplier provides an SBOM with it.

2.      As a check on the supplier, the user uses a binary analysis tool to create an SBOM from the delivered software binaries (this isn’t an easy thing to do for various reasons, but it’s possible for someone with the right software development experience).

3.      The user compares the SBOM they received from the supplier to the one they created and looks for discrepancies (a minimal sketch of such a comparison follows this list).

4.      If they find a discrepancy, the user talks with the supplier to find out whether it was due to an innocent mistake, or whether it was a deliberate attempt to hide something – perhaps there's a vulnerable component in the software that they don't want customers to know about.
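Here is that sketch of step 3, for two CycloneDX JSON SBOMs. The file names are hypothetical placeholders, and real-world comparison is much fuzzier than this (two tools rarely emit identical name and version strings for the same component), but at its core the comparison is just a set difference over component identifiers:

```python
# Minimal sketch of step 3: diff the component lists of two CycloneDX
# JSON SBOMs. File names are hypothetical; real comparisons need fuzzy
# matching, since two tools rarely agree exactly on names and versions.

import json

def component_ids(path: str) -> set:
    """Return a set of 'name@version' strings for a CycloneDX SBOM."""
    with open(path) as f:
        bom = json.load(f)
    return {
        f"{c.get('name')}@{c.get('version', '?')}"
        for c in bom.get("components", [])
    }

supplier = component_ids("supplier_sbom.json")    # from the build process
derived = component_ids("binary_analysis.json")   # from a binary analysis tool

print("In the supplier SBOM but not found in the binaries:")
print(sorted(supplier - derived))
print("Found in the binaries but missing from the supplier SBOM:")
print(sorted(derived - supplier))
```

As the next few paragraphs explain, the two lists will essentially never match exactly, so a nonempty difference is the normal case, not evidence of bad faith.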

However, this sequence is very unlikely to play out as described, for several reasons. First, SBOMs are almost all generated by automated tooling on the supplier side, usually as part of the software build process – and I'm talking about machine-readable SBOMs, which are the only ones that scale. There are single software products, including one that's installed today on millions of corporate desktops across the US, that have upwards of 5,000 components; the average product contains over 100 components. It's kind of hard to create those SBOMs by hand. While it would certainly be possible to edit the JSON or XML that constitutes the SBOM after it's been produced, it wouldn't be easy.

More importantly, it's just about 100% certain that there will be a mismatch between an SBOM produced from binaries (as in step 2 above) and one created from the final software build, without the supplier having to obfuscate anything. One reason is that there are almost always other files or libraries that aren't part of the software as built, but that are essential for it to run. Binary analysis will capture these files along with those of the "software itself", while an SBOM created from the build process won't include them. Unfortunately, this also means the SBOM your supplier sends you will almost always leave out these additional files, so you won't learn about vulnerabilities in them. This is an ongoing problem with SBOMs, although it's certainly not a fatal one.

A bigger reason is the one I alluded to in step 2 above: creating SBOMs using a binary analysis tool is always going to be much harder than creating one through the build process, since the output from the tool will inevitably miss a lot of components and misidentify others. That’s why there is a lot of judgment required to clean up the output from the tool; and even after applying that judgment, the resulting SBOM will virtually never match an SBOM produced from the final build.

Binary analysis is required in order to learn anything at all about the components of legacy software, for which no SBOM was produced at build time (i.e. just about all legacy software today). It’s like a colonoscopy: It has to be done, but it certainly isn’t a lot of fun.

There's also the matter of effort: if the supplier decides to alter one SBOM (perhaps by renaming a few vulnerable components with the names of non-vulnerable ones), they will have to replicate that work in every future SBOM for the same product. A new SBOM needs to be generated with every change to the software, which means with every new version and every patch. The devious supplier would have to make the same change in all of those new SBOMs.

But the biggest reason why Patrick’s scenario is highly unlikely is this: Why should the supplier go to a lot of trouble to obfuscate vulnerable components, when virtually any SBOM you look at has vulnerable components (i.e. components for which one or more open CVEs are listed in the NVD) – and it’s certain that new vulnerabilities will be identified in those components in the future?
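Anyone can check this for themselves, since looking up a known component in the NVD is easy in principle. Here is a minimal sketch against the NVD's public REST API (version 2.0); the CPE name is just an example, unauthenticated requests are heavily rate-limited, and the genuinely hard part (mapping an SBOM component name to the right CPE at all) is glossed over entirely:

```python
# Minimal sketch: list open CVEs for one component via the NVD REST API.
# The CPE name is an example placeholder; mapping an SBOM component to
# the correct CPE is the hard part, and this sketch ignores it.

import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
cpe = "cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*"  # example component

resp = requests.get(NVD_URL, params={"cpeName": cpe}, timeout=30)
resp.raise_for_status()

for vuln in resp.json().get("vulnerabilities", []):
    print(vuln["cve"]["id"])
```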

The problem isn’t so much that there are vulnerable components. The real question is how the supplier deals with component vulnerabilities, whenever they appear.

A 2017 study by Veracode stated, "A mere 52 percent of companies reported they provide security fixes to components when new security vulnerabilities are discovered." I would hope that number is higher nowadays, but one thing is certain: it will be higher once most software suppliers are distributing SBOMs to their customers. Components in the software will have fewer vulnerabilities, and suppliers will patch those vulnerabilities more quickly than they otherwise would have (if they would have patched them at all).

Is this because the suppliers have all just experienced a Road to Damascus moment, and decided to completely change their former ways? No, it’s because it’s human nature to pay closer attention to doing your work correctly when somebody is looking over your shoulder. That’s why the SBOM effort by NTIA (and now CISA) is called the Software Component Transparency Initiative. It’s all about transparency.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA's Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, November 22, 2021

Another good SBOM webinar


Cybeats is going to unleash the fourth in their great series of webinars on software bills of materials on November 30, from 1 to 2 PM Eastern Time. Here's their description of this one:

Software Bills of Materials (SBOMs) are beginning to arrive at critical infrastructure operators' doors. The promises of much more rapid responses to cybersecurity vulnerabilities and other tangible benefits are waiting in the wings to be proven or disproven. What will the impact of this new form of visibility and information sharing actually be?

Tune in on November 30th to learn more about how SBOMs may well prove critical to securing our critical infrastructures, in our fourth episode of The State of Cybersecurity Industry: SBOMs' impact on critical infrastructures.

The guests are:

·        Dr. Allan Friedman of CISA, leader of what was previously called the Software Component Transparency Initiative when he was at NTIA. The initiative continues, but its new contours will be discussed in December.

·        Ginger Wright of INL, my co-leader of the SBOM Energy Proof of Concept.

·        Tim Roxey, who has appeared in these posts a number of times in the past, and always has quite interesting things to say.

·        Chuck Brooks, whom I met only recently, but who seems to have a good instinct for where the needle is moving in cybersecurity.

The event link is here. You don’t have to register, but they encourage you to; the button for that is on the same page.

See you then!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA's Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Friday, November 19, 2021

Where are we going? How will we get there?


When I’m looking for guidance on a decision, I often turn to the great 19th century scholar Charles Dodgson, who wrote on mathematical logic. His two greatest treatises on that subject were written under the pen name Lewis Carroll: Alice in Wonderland and Through the Looking Glass.

Near the beginning of the first treatise, after Alice has fallen down the long rabbit hole and emerged in Wonderland, she has no idea where she is and has the following exchange with the Cheshire Cat:

Alice: ‘Would you tell me, please, which way I ought to go from here?’
The Cheshire Cat: ‘That depends a good deal on where you want to get to.’
Alice: ‘I don't much care where.’
The Cheshire Cat: ‘Then it doesn't much matter which way you go.’
Alice: ‘...So long as I get somewhere.’
The Cheshire Cat: ‘Oh, you're sure to do that, if only you walk long enough.’

What has been known until now as the Software Component Transparency Initiative of the National Telecommunications and Information Administration (part of the US Department of Commerce) currently finds itself in somewhat the same position as Alice. The leader of the Initiative, Dr. Allan Friedman, moved a few months ago from NTIA to CISA (which is of course part of the Department of Homeland Security).

The Initiative is a “multistakeholder process” – a special type of “organization” that the NTIA has deployed in many situations (there is currently a large multistakeholder process going on for 5G – much larger than the one for SBOMs). The idea is to have participants in an industry get together to agree on rules that apply to a new technology, without even mentioning the dreaded word “regulation”. However, CISA does things differently (although they aren’t interested in becoming a regulator any more than NTIA is, as their Director Jen Easterly made clear just last week), so this process can’t continue. And one can argue that the multistakeholder process has now outlived its usefulness, anyway.

There is agreement among the people who have been participating in the Initiative that we would like to continue in some form. It is to discuss what that form will be, as well as to provide general instruction on what SBOMs are and how they can be used, that Allan has scheduled the first (hopefully annual) CISA "SBOM-a-rama" for December 15 and 16, from 12 to 3 PM ET on both days. Here's how the two days will break down:

1.      Allan describes the first day as follows: "The first session will focus on education, bringing the broader security and software community up to speed with the current understanding of technology and practices, and offer the opportunity for some questions and answers for those relatively new to the issue and technology."

2.      Here's his description of the second day: "The second day will focus on identifying the needs of the broader community around SBOM, and areas of further work deemed necessary for progress. This could include specific technical issues and solutions, operational considerations, or shared resources to support the easier and cheaper generation and consumption of SBOM and related data." This is where I expect the two questions in the title of this post to be asked. As long as there's agreement on at least an answer to the first question, I'll be happy. Discussion beyond that will be exploratory, but it will continue in future meetings, however those are structured.

Who's eligible to attend? The requirements are quite rigorous, I'm afraid:

1.      You must have a working command of the English language.

2.      You must have an interest in SBOMs and how they can help you secure your organization, even if you know very little about them.

3.      You don't need to have software development experience. If that were a requirement, I couldn't attend either.

I’ll publish the meeting information when it’s available.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Wednesday, November 17, 2021

Cloud providers are wasting their time pursuing NERC CIP

This week, my friend Maggy Powell of AWS put up a post on LinkedIn that provided a link to their most recent document regarding NERC CIP, described by Maggy as the “AWS User Guide to Support Compliance with NERC CIP Standards”. She further states that “The User Guide describes key concepts for customers considering CIP regulated workloads in the cloud.”

Dale Peterson asked me for my comments on the document. Before I downloaded it, I pointed Dale to this post from last year, where I tried to summarize the problem preventing NERC entities from deploying Medium or High impact BES Cyber Systems in the cloud (they're free to deploy Low impact BCS in the cloud now). Then I reviewed (skimmed, I'll admit) the AWS document to see if it said anything that would change the situation enough to make it at least possible to put Medium or High BCS in the cloud.

It didn't. Like the document and presentation that Microsoft Azure prepared for the NERC CIPC (remember the CIPC?) in around 2016, AWS seems to think that all that's needed is to convince NERC and the utilities that AWS has good security. That has nothing to do with the real problem, as my previous post explains. There's literally nothing that AWS, Microsoft, or anyone else – other than NERC, the Regions, the NERC entities, and FERC – can do to change the situation, absent a wholesale revision of the CIP standards. I replied to Dale:

I skimmed through the AWS document, but it was unfortunately as I expected. It tells you everything you need to know about AWS security, except the one thing that matters for CIP: How AWS could possibly produce the evidence required for the utility to prove compliance with about 25 of the CIP requirements, if they put BCS in the cloud.


And the answer to that question remains what I wrote last fall: There's no way any cloud provider could do that, without breaking their business model.


NERC CIP won't permit BCS in the cloud until it's completely rewritten as a risk-based compliance regime (which involves revising the NERC Rules of Procedure as well). What's also required is for the focus on devices to go away and be replaced by a focus on systems. This is exactly what the CIP Modifications SDT proposed in 2018 (a year or so after Maggy left as chairperson), and it got shot down by the big utilities, because they didn't want to have to make big changes to their procedures, etc.


That's the barrier. Until that's overcome, BCS will never be in the cloud, period. I don't see any movement toward this currently, but I'd be glad to help out the insurrectionists if they materialize.

I’ll close by paraphrasing the ending to my post linked above:

Of course, changing CIP will require a much more fundamental revision of the CIP standards than even CIP version 5 was. Doing what I’m suggesting will require widespread support among NERC entities, and I see no sign of that now. Does that mean BCS will never be allowed in the cloud?

I actually believe it will happen, although I won’t say when, because I don’t know (it definitely won’t be soon). I think the advantages the cloud can provide for NERC entities are so great that they will ultimately outweigh the general resistance to change. But the NERC entities themselves need to be able to change. Until that happens, there will be no BCS in the cloud, period.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA's Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, November 14, 2021

It seems DoD is willing to admit it made a mistake. Next, the TSA?

Last week, Mariam Baksh of NextGov wrote quite an interesting article about how the Defense Department is rethinking its Cybersecurity Maturity Model Certification (CMMC) program for certifying the cybersecurity of defense contractors through third-party audits. Of course, rethinking a program isn’t too unusual in the government (or in private industry), but doing so a year after the program was launched is quite unusual.

However, I’m glad they did this, since I now realize – after not having paid much attention to the program before, to be honest – that CMMC would have been a disaster if it had really been implemented as written. I’m glad to see that DoD is going to actually – get this – consult with the contractors being regulated as they revise the program.

This is in contrast with the TSA, which gave the pipeline industry only three days to comment on the cybersecurity order they were developing, and enjoined anyone who had seen the order from revealing anything about what's in it. I can't particularly blame them for that last part, though, since what's in the order is pretty embarrassing.

The big problem: The TSA order requires a bunch of mitigations that are impossible to achieve (“Prevent users and devices from accessing malicious websites…”? Piece of cake! All you have to do is identify all of the “malicious websites” in the world, update the list minute-by-minute, and seamlessly block every URL. What could be simpler?). The only mitigation it doesn’t require is what would have actually prevented the Colonial Pipeline attack. It seems that wasn’t even considered (aka “The light is better here”).

Fortunately, I’m certain the order will never be implemented, since some consultation with the pipeline industry would have shown TSA that full compliance with the order would probably be beyond the means of any pipeline company, period; and in the end, no regulation that literally can’t be complied with will be allowed to stand. Which brings me back to the CMMC. That’s based on NIST 800-171. This document has many more requirements than the TSA order, although they’re much more – how can I say it? – sensible than the TSA requirements. However, NIST 800-171 shares with the TSA order the fact that it lists mitigations, not risks.

It also shares with the TSA order the fact that it doesn't address the most important risk in the domain being addressed. NIST 800-171, which DoD applies to its contractors as a supply chain cybersecurity standard, omits any mention of software supply chain cyber risks, which are without doubt the most important supply chain risks today (my guess is 800-171 would look very different had it been written after SolarWinds and EO 14028).

Cybersecurity is inherently a risk management process, requiring three steps of the organization:

1.      Identification of the high-level risks to be mitigated – i.e. the cybersecurity "domains" being addressed;

2.      Identification of low-level risks included in each domain, that are applicable to the organization and the environment in which it operates; and

3.      Identification of appropriate mitigations for those risks – meaning appropriate for the organization and the environment in which it operates (a trivial sketch of the resulting structure follows this list).
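In data terms, the output of these three steps is nothing more exotic than a nested mapping from domains to applicable risks to chosen mitigations. A trivial sketch in Python, where every entry is an illustrative placeholder rather than a recommendation:

```python
# Trivial sketch of the three steps' output: domain -> applicable risks
# -> mitigations chosen for this organization. Entries are placeholders.

risk_register = {
    "Software vulnerabilities": {                      # step 1: domain
        "Unpatched component in a vendor product": [   # step 2: risk
            "Require SBOMs in new contracts",          # step 3: mitigations
            "Track component CVEs against the NVD",
        ],
    },
    "Unauthorized access": {
        "Shared accounts on engineering workstations": [
            "Per-user accounts with multi-factor authentication",
        ],
    },
}

for domain, risks in risk_register.items():
    for risk, mitigations in risks.items():
        print(f"{domain} / {risk}: {len(mitigations)} mitigation(s)")
```

The point of the sketch is simply that steps 2 and 3 are the organization's to fill in; a regulation that prescribes the whole structure in advance is claiming knowledge it can't have.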

Any cybersecurity standard needs either to require that the entities being regulated take these steps, or – if whoever drafts the regulations doesn't trust those entities – to take them on its own and simply require the entities to implement mitigations for the risks the regulator has identified (i.e. what I and others call the prescriptive approach). The latter is the approach that both the TSA pipeline order and CMMC/NIST 800-171 take. It's also the approach taken by most of the NERC CIP requirements written as part of CIP v5 (e.g. CIP-007 R2 and CIP-010 R1). Fortunately, literally all of the CIP requirements and standards written after CIP v5 are risk-based, since the industry seems to have finally learned its lesson about prescriptive cybersecurity requirements.

The prescriptive approach would work great if

A.     Whoever wrote the requirements had perfect knowledge of all current and future risks in the domain being regulated – e.g. pipeline or Bulk Electric System operations;

B.     Those persons also had perfect knowledge of all current and future mitigations for those risks, and could choose the best ones;

C.      The entities being regulated are similar enough that the risks and mitigations appropriate for one will be substantially the same as for another (of course, we all know how true that is in the power industry, where there's very little difference between, say, ConEd and a co-op in the middle of Nebraska); and finally

D.     The requirements are written so that, taken as a whole, they won’t pose an undue burden on an organization of any size, given that I know of no organization that has an unlimited budget for cybersecurity mitigation.

Needless to say, no person or single organization meets the first two criteria, and very few if any groups of regulated entities meet the third. As for the fourth, it would also take a group of people with godlike powers of perception to draft such requirements. The problem is that people who aren't gods will inevitably err on the side of over-regulation. They'll write a requirement for everything they can think of that could be important, with no consideration of whether having to meet every requirement might literally bankrupt most of the organizations that have to comply.

Then how should a cybersecurity regulation be written? I’m glad you asked that. Looking at the three steps listed above, I believe the regulation itself should accomplish the first step – that is, the regulation should identify the high-level risks to be mitigated. This at least gives the organizations being regulated a place to start, as opposed to telling them to identify risks starting with a blank piece of paper.

Then the organization being regulated should take the second two steps on their own, although with oversight (and advice) from the regulator:

2.      Identify low-level risks included in each domain, applicable to the organization and the environment in which it operates; and

3.      Identify appropriate mitigations for those risks – meaning appropriate for the organization and the environment in which it operates.

Are there any cybersecurity requirements or standards that require these three steps – and nothing more? I’m sure there are a few, but the closest requirement that I know of is NERC CIP-010 R4, the requirement for cybersecurity of “Transient Cyber Assets” (e.g. laptops) and “Removable Media” (e.g. USB drives) used temporarily at power facilities like substations. This requirement doesn’t actually mention risk at all, but it requires a plan that includes ten sections describing mitigations for specific risks like “Introduction of malicious code”, “Software vulnerabilities” and “Unauthorized use”. These are high-level security domains, for each of which the utility has to develop appropriate mitigations.

The requirement even suggests high-level mitigations in each domain that the utility might decide to implement, and these suggestions almost always include the option of "Other methods…" of the utility's own choosing. However, if the utility decides to implement another method, they need to convince an auditor that the method they chose does as good a job of mitigating the risk in question as the examples listed in the requirement.

How about NERC CIP-013? That’s definitely a risk management standard, but it doesn’t list high-level risks. It just tells the utilities to identify supply chain cybersecurity risks on their own, without stating that they should consider domains like software security, manufacturing security, software vulnerability management, etc. Therefore, in my opinion, it doesn’t make the cut. CIP-013 also doesn’t require mitigations at all – although that was clearly due to a simple oversight by the drafting team.  

So I’m glad to see that DoD is going to revise the CMMC program, since I simply don’t see how it can be fully implemented as written. And I’m especially glad to see that they’re going to get input from the contractors who will be regulated (some of them, anyway), rather than trying once again to require them to address every cybersecurity requirement that they could think of, with no regard for what’s the best way for contractors to mitigate the most cybersecurity risk possible on a non-infinite budget.

Of course, I can’t blame the folks at DoD for thinking that other organizations have infinite budgets. If DoD says something is needed and asks for it passionately, they’ll get it. And if they keep asking for more and more and that results in higher costs, they’ll get the funds needed to meet those higher costs, too. Plus, if a bunch of nosy reporters ask about the reasons for those higher costs, the answer will be – of course – classified. Just look at the F-35.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA's Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.