Sunday, November 14, 2021

It seems DoD is willing to admit it made a mistake. Next, the TSA?

Last week, Mariam Baksh of NextGov wrote quite an interesting article about how the Defense Department is rethinking its Cybersecurity Maturity Model Certification (CMMC) program for certifying the cybersecurity of defense contractors through third-party audits. Of course, rethinking a program isn’t too unusual in the government (or in private industry), but doing so a year after the program was launched is quite unusual.

However, I’m glad they did, since I now realize – after not having paid much attention to the program before, to be honest – that CMMC would have been a disaster if it had been implemented as written. I’m especially pleased to see that DoD is going to actually – get this – consult with the contractors being regulated as it revises the program.

This is in contrast with the TSA, which gave the pipeline industry only three days to comment on the cybersecurity order it was developing, and enjoined anyone who had seen the order from revealing anything about what’s in it. Although I can’t particularly blame them for that last part, since what’s in the order is pretty embarrassing.

The big problem: The TSA order requires a bunch of mitigations that are impossible to achieve (“Prevent users and devices from accessing malicious websites…”? Piece of cake! All you have to do is identify all of the “malicious websites” in the world, update the list minute-by-minute, and seamlessly block every URL. What could be simpler?). The only mitigation it doesn’t require is the one that would actually have prevented the Colonial Pipeline attack. It seems that wasn’t even considered (aka “The light is better here”).
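To see why “prevent” is such an impossible standard, consider what blocking access to malicious websites boils down to in practice: checking every outbound request against some list of known-bad destinations. Here is a minimal sketch in Python, using a made-up blocklist and made-up domain names, of what that check looks like. The check itself is trivial; the impossible part is the list.

```python
from urllib.parse import urlparse

# A hypothetical blocklist of known-bad domains. In practice such a list comes
# from threat-intelligence feeds and is always incomplete and out of date.
KNOWN_MALICIOUS_DOMAINS = {"evil-example.com", "phishing-example.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host appears on the blocklist."""
    host = urlparse(url).hostname or ""
    return host in KNOWN_MALICIOUS_DOMAINS

print(is_blocked("https://evil-example.com/login"))         # True: it's on the list
print(is_blocked("https://brand-new-malware.example"))      # False: registered an hour ago,
                                                            # so no feed lists it yet
```

A regulation can require that this check happen everywhere; it can’t conjure up the complete, real-time list the check depends on. That’s the difference between “prevent” and “mitigate the risk of.”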

Fortunately, I’m certain the order will never be implemented, since some consultation with the pipeline industry would have shown TSA that full compliance with the order would probably be beyond the means of any pipeline company, period; and in the end, no regulation that literally can’t be complied with will be allowed to stand. Which brings me back to the CMMC. That’s based on NIST 800-171. This document has many more requirements than the TSA order, although they’re much more – how can I say it? – sensible than the TSA requirements. However, NIST 800-171 shares with the TSA order the fact that it lists mitigations, not risks.

It also shares with the TSA order the fact that it doesn’t address the most important risk in its domain. NIST 800-171, which is a supply chain cybersecurity risk management standard, omits any mention of software supply chain cyber risks, which are without doubt the most important supply chain risks today (my guess is 800-171 would look very different if it had been written after SolarWinds and EO 14028).

Cybersecurity is inherently a risk management process, one that requires the organization to take three steps (a rough sketch of what they might look like in practice follows the list):

1. Identification of the high-level risks to be mitigated – i.e. the cybersecurity “domains” being addressed;

2. Identification of the low-level risks included in each domain that are applicable to the organization and the environment in which it operates; and

3. Identification of appropriate mitigations for those risks – meaning mitigations that are appropriate for the organization and the environment in which it operates.
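To make these steps a little more concrete, here is a minimal sketch of how an organization might record the output of each one, written in Python. The domain, risk and mitigation names are invented for illustration and aren’t drawn from any standard; the point is only the shape of the process.

```python
from dataclasses import dataclass, field

# Step 1: the high-level risk domains. These names are invented for illustration.
DOMAINS = ["Remote access", "Patch management", "Supply chain"]

@dataclass
class Risk:
    """Step 2: a low-level risk within a domain, scoped to this organization."""
    domain: str
    description: str
    applicable: bool                                  # does it apply to our environment?
    mitigations: list = field(default_factory=list)   # Step 3: the mitigations we choose

# Step 2: identify the low-level risks in each domain that apply to us
risks = [
    Risk("Remote access", "VPN accounts without multi-factor authentication", True),
    Risk("Supply chain", "Compromise of a supplier's software build process", True),
    Risk("Patch management", "Legacy mainframe patching (we run no mainframes)", False),
]

# Step 3: choose mitigations appropriate to this organization and its environment
for risk in (r for r in risks if r.applicable):
    if risk.domain == "Remote access":
        risk.mitigations.append("Require MFA on all remote access accounts")
    elif risk.domain == "Supply chain":
        risk.mitigations.append("Obtain attestations about suppliers' build environments")

for r in risks:
    chosen = "; ".join(r.mitigations) if r.mitigations else "not applicable"
    print(f"{r.domain}: {r.description} -> {chosen}")
```

Note that only the first step (the list of domains) could plausibly be handed to every organization by a regulator; the contents of the second and third steps depend entirely on the organization and its environment.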

Any cybersecurity standard needs either to require that the entities being regulated take these steps, or – if whoever drafts the regulations doesn’t trust those entities – take them on its own and simply require the entities to implement mitigations for the risks the regulator has identified (i.e. what I and others call the prescriptive approach). The latter is the approach that both the TSA pipeline order and CMMC/NIST 800-171 take. It’s also the approach taken by most of the NERC CIP requirements written as part of CIP v5 (e.g. CIP-007 R2 and CIP-010 R1). Fortunately, literally all of the CIP requirements and standards written after CIP v5 are risk-based, since the industry seems to have finally learned its lesson about prescriptive cybersecurity requirements.

The prescriptive approach would work great if:

A. Whoever wrote the requirements had perfect knowledge of all current and future risks in the domain being regulated – e.g. pipeline or Bulk Electric System operations;

B. Those persons also had perfect knowledge of all current and future mitigations for those risks, and could choose the best ones;

C. The entities being regulated were similar enough that the risks and mitigations appropriate for one would be substantially the same as for another (of course, we know well that’s true in the power industry, where there’s very little difference between, say, ConEd and a coop in the middle of Nebraska); and finally

D. The requirements were written so that, taken as a whole, they wouldn’t pose an undue burden on an organization of any size, given that I know of no organization with an unlimited budget for cybersecurity mitigation.

Needless to say, no person and no single organization meets the first or second criterion, and very few if any groups of regulated entities meet the third. As for the fourth, it would also take a group of people with godlike powers of perception to draft such requirements. The problem is that people who aren’t gods will inevitably err on the side of over-regulation: they’ll list a requirement for everything they can think of that could be important, with no consideration of whether having to meet every requirement might literally bankrupt most of the organizations that have to comply.

Then how should a cybersecurity regulation be written? I’m glad you asked that. Looking at the three steps listed above, I believe the regulation itself should accomplish the first step – that is, the regulation should identify the high-level risks to be mitigated. This at least gives the organizations being regulated a place to start, as opposed to telling them to identify risks starting with a blank piece of paper.

Then the organizations being regulated should take the remaining two steps on their own, although with oversight (and advice) from the regulator:

2. Identify the low-level risks included in each domain that are applicable to the organization and the environment in which it operates; and

3. Identify appropriate mitigations for those risks – meaning mitigations that are appropriate for the organization and the environment in which it operates.

Are there any cybersecurity requirements or standards that require these three steps – and nothing more? I’m sure there are a few, but the closest requirement that I know of is NERC CIP-010 R4, the requirement for cybersecurity of “Transient Cyber Assets” (e.g. laptops) and “Removable Media” (e.g. USB drives) used temporarily at power facilities like substations. This requirement doesn’t actually mention risk at all, but it requires a plan that includes ten sections describing mitigations for specific risks like “Introduction of malicious code”, “Software vulnerabilities” and “Unauthorized use”. These are high-level security domains, for each of which the utility has to develop appropriate mitigations.

The requirement even suggests high-level mitigations in each domain that the utility might decide to implement, and these suggestions almost always include the option of “Other methods…” of the utility’s own choosing. However, if the utility decides to implement another method, it needs to convince an auditor that the method it chose does as good a job of mitigating the risk in question as the examples listed in the requirement.
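Here is a rough sketch, again in Python, of the structure that paragraph describes: a plan organized by risk domain, where each domain lists suggested mitigations plus an “other method” option that has to carry its own justification for the auditor. The domain names are the three examples mentioned above; the mitigation options and justifications are invented for illustration, and none of this is the actual text of CIP-010 R4.

```python
# A plan organized by risk domain. Each section records the suggested mitigations,
# the one the utility chose, and (if it chose something not on the list) the
# justification it would offer an auditor. All specifics here are invented.
plan = {
    "Introduction of malicious code": {
        "suggested_mitigations": ["Antivirus scan before connection", "Application allowlisting"],
        "chosen": "Application allowlisting",
        "other_method_justification": None,   # a listed option, so no justification needed
    },
    "Software vulnerabilities": {
        "suggested_mitigations": ["Patch before use", "Review of installed software"],
        "chosen": "Dedicated hardened laptops that never leave the substation fleet",
        "other_method_justification": "Explain why this mitigates the risk at least as "
                                      "well as the listed options.",
    },
    "Unauthorized use": {
        "suggested_mitigations": ["Restrict physical access", "Sign-out procedures"],
        "chosen": "Restrict physical access",
        "other_method_justification": None,
    },
}

# An "other method" is allowed, but only with a justification the auditor can evaluate.
for domain, section in plan.items():
    if section["chosen"] not in section["suggested_mitigations"]:
        assert section["other_method_justification"], f"{domain}: other method needs a justification"
    print(f"{domain}: {section['chosen']}")
```

The shape is the point: the regulator supplies the domains and some candidate mitigations, the utility supplies the choices, and the auditor judges whether any “other method” really mitigates the risk as well.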

How about NERC CIP-013? That’s definitely a risk management standard, but it doesn’t list high-level risks. It just tells the utilities to identify supply chain cybersecurity risks on their own, without stating that they should consider domains like software security, manufacturing security, software vulnerability management, etc. Therefore, in my opinion, it doesn’t make the cut. CIP-013 also doesn’t require mitigations at all – although that was clearly due to a simple oversight by the drafting team.  

So I’m glad to see that DoD is going to revise the CMMC program, since I simply don’t see how it could be fully implemented as written. And I’m especially glad to see that DoD is going to get input from the contractors who will be regulated (some of them, anyway), rather than once again trying to require them to address every cybersecurity requirement the department could think of, with no regard for the best way for contractors to mitigate the most cybersecurity risk possible on a non-infinite budget.

Of course, I can’t blame the folks at DoD for thinking that other organizations have infinite budgets. If DoD says something is needed and asks for it passionately, they’ll get it. And if they keep asking for more and more and that results in higher costs, they’ll get the funds needed to meet those higher costs, too. Plus, if a bunch of nosy reporters ask about the reasons for those higher costs, the answer will be – of course – classified. Just look at the F-35.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. Nor are they shared by CISA’s Software Component Transparency Initiative, for which I volunteer as co-leader of the Energy SBOM Proof of Concept. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 
