Tuesday, July 25, 2023

Is it time to abandon VEX?

Monday’s meeting of the CISA VEX working group led off with a rather disturbing exchange: the two leaders of the group – in obviously coordinated comments – both wondered whether VEX even belongs in the CISA SBOM effort. They asked whether it might be “better” to host it somewhere else. Where? They didn’t suggest any alternative venue. Fortunately, one of the most active participants in the group strongly objected to this idea, and the leaders said they would be “happy” to continue the meetings (as in “I’ll be happy to take this medicine that I hate taking”).

But they also made it clear that CISA management is now asking questions about what they’re getting from their 19-month (so far) commitment to the VEX working group (the group started in 2020 under the NTIA and was the only NTIA working group that continued under CISA, although its exact status under CISA has never been clear). To be honest, if I were CISA, I would ask the same questions. Here’s a little history to explain what I mean:

1.      When the NTIA Software Component Transparency Initiative ended at the end of 2021, the VEX meetings continued under CISA in January 2022 without any interruption.

2.      Under the NTIA, the group only produced a single one-page Introduction to VEX (which I drafted). It was a decent effort, but it was out of date by 2022 (e.g., it assumed there was just one VEX format: CSAF); plus, there’s only so much one can say in one page.

3.      Under CISA, the group immediately produced two well-written and important documents on VEX in the spring of 2022: VEX Use Cases and VEX Status Justifications.

4.      Last summer, the group entered a period of wandering in the wilderness, which this year produced two documents that add just about nothing to our understanding of VEX: VEX Minimum Requirements and the close-to-being-finalized When to issue VEX information. Both of these are classic examples of a “document for the sake of having a document”. While neither one contains untruths or misrepresentations (except for the title “Minimum Requirements”, since nothing in that document is “required” by anything other than the document itself), there is nothing I can point to in either document that wasn’t already obvious or wasn’t stated better in one of the 2022 documents.

Given the above, I can certainly understand why CISA is concerned that the investment they’ve made in the VEX working group (primarily in the form of employee time) during at least the past 12 months has been unproductive. So, why do I think this group shouldn’t be discontinued?

That’s simple: The VEX effort started because several huge software and intelligent device suppliers had realized, early in the NTIA SBOM effort, that distributing SBOMs to their customers regularly would inevitably lead to their help desks being overwhelmed with calls from customers about “false positive” component vulnerabilities. This is because more than 90% (some say more than 95%) of component vulnerabilities – which a customer can identify by running an SBOM through a tool like Dependency-Track – are not exploitable in the product, even though they are exploitable in the component when it is considered to be a standalone product. Not only would these calls overwhelm their help desks, but they would be a source of endless (and unnecessary) frustration, both to customers and to help desk personnel.

It comes down to this: Software users won’t be interested in receiving SBOMs if they know that a) over 90% of the component vulnerabilities they learn about will be false positives, yet b) they won’t know how to distinguish these from the 10% or fewer that are real; they will just want to learn about the real ones. And suppliers won’t be interested in providing SBOMs to their customers unless there’s some way they can let them know, in an automation-friendly fashion, which component vulnerabilities are exploitable (along with information on downloading a patch for each, or an upgrade path to a patched version) and which aren’t.

A recent meeting of the Healthcare SBOM Proof of Concept (which started under the NTIA but is now sponsored by Health-ISAC and the Healthcare Sector Coordinating Council) produced these two relevant statements, one from a large healthcare delivery organization (HDO) – aka hospital chain – and one from a very large medical device maker (MDM):

1.      The HDO said they had recently looked up vulnerabilities for the software components identified in the SBOM for a medical device they use and found 2,000 vulnerabilities in that one device! Of course, at most 200 of these are exploitable, and the actual number of exploitable vulnerabilities is probably less than 100. But knowing this is cold comfort if the user organization has no way of knowing which of the 2,000 vulnerabilities are exploitable and which aren’t. They made it clear: “We need VEX!”

2.      The MDM said they won’t issue SBOMs until they can also issue VEXes – due no doubt to the fear of being overwhelmed with false positive help desk calls. More importantly, they need to issue VEXes with the knowledge that the HDO will use them in an automated fashion to “winnow down” the list of component vulnerabilities to only those that are exploitable. Obviously, if a user organization identifies 2,000 component vulnerabilities in a single device, they will need a tool that will ingest VEXes (as well as SBOMs) and do the winnowing for them (see the sketch after this list). Currently, no such tools exist, at least with commercial support, which is required by most non-developer organizations.
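To make that winnowing concrete, here is a minimal sketch of what such a tool might do, assuming the supplier publishes VEX in the CycloneDX VEX JSON format and the scanner’s findings are available as a simple list of CVE IDs. The file names and the findings format are hypothetical, and a real tool would also need to handle CSAF VEX, match product identifiers, and re-run the analysis daily:

```python
import json

# Hypothetical inputs: component vulnerabilities found by running an SBOM
# through a scanner, plus the supplier's CycloneDX VEX document.
with open("scanner_findings.json") as f:
    findings = json.load(f)  # e.g. ["CVE-2023-1111", "CVE-2023-2222", ...]
with open("supplier_vex.json") as f:
    vex = json.load(f)

# CycloneDX VEX records an impact-analysis state per vulnerability;
# "not_affected" and "false_positive" mean it is not exploitable in this product.
NOT_EXPLOITABLE = {"not_affected", "false_positive"}

vex_state = {
    vuln.get("id"): vuln.get("analysis", {}).get("state")
    for vuln in vex.get("vulnerabilities", [])
}

# Winnow: keep a finding unless the supplier has declared it non-exploitable.
# Findings with no VEX statement at all stay on the list, which is the
# conservative behavior a user organization would want.
exploitable = [cve for cve in findings if vex_state.get(cve) not in NOT_EXPLOITABLE]

print(f"{len(findings)} component vulnerabilities found; "
      f"{len(exploitable)} remain after applying the VEX")
```

In the HDO’s example above, this is the step that would cut the list of 2,000 vulnerabilities down to the couple hundred (at most) that actually need attention.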

There you have it: Suppliers won’t be interested in distributing SBOMs to their customers, and software users (meaning organizations whose primary function is not developing software) won’t be interested in using SBOMs to learn about software component vulnerabilities, unless VEXes are made available and can be used effectively, in an automated fashion.

Absent VEX, SBOMs will go nowhere. This is why we need VEX.

Tom's note 7/31: Allan Friedman announced in the VEX working group meeting today that CISA has no intention of abandoning VEX, so that's good news. 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Thursday, July 20, 2023

The IoT device cybersecurity program is here (warts and all)!

Yesterday, the White House announced the long-awaited cybersecurity labeling program for IoT devices, called the U.S. Cyber Trust Mark. I had written about this in my post in May, as well as in previous posts. The program was mandated by Executive Order 14028, issued in May 2021. It is a voluntary program, which is expected to be “up and running” in 2024.

There were several surprises in the announcement that went beyond what had been made public before:

1.      The agency that will implement the program will be the Federal Communications Commission (FCC), not the Federal Trade Commission (FTC), as I (along with others) had expected. I hope this has been thought through, since the FCC’s experience with consumer products has mostly to do with technical communications standards, not cybersecurity. The FTC, one of whose roles is enforcing the commitments consumer products companies make regarding the privacy of personal information they hold, has already done a lot of work in the cybersecurity area.

2.      “NIST will also immediately undertake an effort to define cybersecurity requirements for consumer-grade routers…” Of course, routers are an IoT product, and there has been a lot of concern (mostly justified) about their security in the last couple of years. It’s certainly a good idea to require a higher standard of cybersecurity for routers. NIST is obligated to draft requirements for routers by the end of 2023.

3.      The US Dept. of Energy announced a project to work with the National Labs to develop cyber labeling requirements for smart meters and power inverters. Both of these are devices that are installed in (or on) homes, although they have lots of applications in industrial and commercial facilities (in more industrial-strength incarnations, to be sure).

However, there were some negative surprises as well. The biggest was that the labeling program, after initially being discussed as a way to communicate to consumers the degree to which an IoT product met certain criteria for cybersecurity, has now become a way for “…Americans to confidently identify which internet and Bluetooth-connected devices are cybersecure”, according to Anne Neuberger, the deputy national security adviser for cyber and emerging technology at the National Security Council.

Note that the label was originally not intended to provide an up-or-down judgment on a product, but instead just to point out areas of strength and weakness, allowing the consumer to determine for themselves whether it’s safe to buy. However, now it seems the White House has come up with a Roman emperor-style thumbs-up/thumbs-down label, based on some extraordinary insights it has into what constitutes a cybersecure product. I certainly hope they’ll share those insights with the rest of us, especially the many manufacturers who think they’re doing a good job on security, only to get a thumbs down when the labels are awarded.

More importantly, the document supposedly defers to NIST’s judgment on what should be in a cybersecurity framework for IoT. Even though it’s not mentioned in the announcement, NIST came out last year with what seems to me to be an excellent IoT cyber framework, NIST IR 8425. I wrote about it in a blog post for my French client Red Alert Labs in November; RAL works with manufacturers to secure and certify IoT devices.

However, like all NIST frameworks, exactly which provisions an organization complies with and how they comply with them are up to the organization, based on their assessment of their own risk environment. This is not compatible with an airy statement that a product is “cybersecure” or not.

I’m going to choose to believe that Ms. Neuberger’s statement was a bit of hyperbole inserted in her address by an over-zealous intern, who seems to think that the device labeling program is a game-changing innovation, when in fact it’s an initial step on a long journey towards an unattainable goal that might be called “complete cybersecurity of IoT devices”. But if it’s not, and the FCC decides to go through with the binary label idea, I’m sure that will fail. What manufacturer is going to meekly tuck their tail between their legs and walk away after being told they’re not being awarded the label because they didn’t meet some undefined criteria, while their competitor down the street received the label? They’re going to raise h___ and rightly so, in my opinion.

However, while I found Ms. Neuberger’s statement amusing, another statement (which appeared in the NextGov/FCW article on the announcement without attribution, although I heard it from another source as well) chilled my blood: “The administration is also working with the Department of Justice to develop liability protocols for manufacturers working with the Cyber Trust labeling program.”

Yes, boys and girls, this is our good friend the Liability Monster, once more rearing its head after making an initial appearance in the National Cybersecurity Strategy last March. I thought I might have succeeded in driving a wooden stake through its heart in my 4 or 5 posts on the subject, but I must have grabbed a plastic stake instead.

It seems that some people in the White House have come to believe that our nation’s cybersecurity problems are far too pressing for us to deal with them through regulation (as everybody knows, regulations take a lonnng time to develop and implement) or through the court system (after all, you never can be sure how a judge or jury will rule on liability for a breach; it’s much better to ensure the outcome before the trial starts).

Instead, we need to place our finger on the scales of justice from the beginning and ensure that the parties we all know are responsible for cyber breaches – the developers of the software or manufacturers of the devices – are assumed to be liable right from the start. This will substantially reduce the workload of some very busy people in the White House and assure that they might even be home for dinner every now and then. What higher goal can there be than that?

Just like the thumbs-up/thumbs-down idea, the idea that the Department of Justice is even going to consider amending the centuries-old principle that liability is determined by a judge and/or jury in a court of law, rather than by someone in the White House, will fail. I just wish people who should know better didn’t waste so much time on these pursuits. The IoT device labeling program needs to be treated seriously, since it has an important role to play in national security.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Saturday, July 15, 2023

Back on the road to Damascus

On almost the last day of 2021, I wrote a post in which I stated my newfound belief (after a “road to Damascus” experience, although unlike St. Paul I didn’t hear the voice of God speaking to me) that suppliers should be responsible for performing the analysis of SBOMs and VEX documents, in order to produce a continually updated (at least daily) list of exploitable component vulnerabilities in a particular product/version. Even though suppliers may choose to work with third-party service providers to perform this analysis (since the service providers can amortize the cost of their services across a large user base), it should remain the suppliers’ responsibility, and they should pay the service provider.

I’ve repeated that belief at various times since then, but I’ll admit that I’ve often forgotten about it, and spoken as if this analysis is really the responsibility of software end users. While that may be true in the short run, I anticipate that in maybe 5-10 years, the party universally believed to be responsible for analysis of SBOMs will be the supplier.

While there are several compelling reasons why suppliers should bear this responsibility, here’s the most compelling: 

In order for end users to be able to make use of SBOM and VEX data to manage component vulnerabilities in the software products they utilize, somebody's tool needs to ingest an SBOM, look up component vulnerabilities in the NVD or another vulnerability database, and ingest VEX information to learn how the supplier views the status of each of those component vulnerabilities. It makes no sense to force thousands, tens of thousands or even millions of customers to perform exactly the same set of steps that the supplier could perform on their own, and just distribute the results to their customers.
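As a rough sketch of the first two of those steps, the fragment below reads component CPE identifiers out of a CycloneDX JSON SBOM and queries the NVD’s public API for the CVEs that apply to each one. The SBOM file name is hypothetical, and a production tool would also need an NVD API key, rate limiting, purl-to-CPE matching for components that lack a CPE, and real error handling:

```python
import json
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Hypothetical input: a CycloneDX JSON SBOM for one product version.
with open("product_sbom.json") as f:
    sbom = json.load(f)

component_vulns = {}
for comp in sbom.get("components", []):
    cpe = comp.get("cpe")  # not every component record carries a CPE
    if not cpe:
        continue
    # Ask the NVD which CVEs apply to this component's CPE.
    resp = requests.get(NVD_API, params={"cpeName": cpe}, timeout=30)
    resp.raise_for_status()
    cves = [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]
    component_vulns[f"{comp.get('name')} {comp.get('version', '')}"] = cves

for name, cves in component_vulns.items():
    print(f"{name}: {len(cves)} CVEs")
```

The remaining step, applying the supplier’s VEX statements to strike the non-exploitable findings, is just as mechanical. The point is that every one of these steps is identical for every customer, which is exactly why it makes sense for the supplier to run them once and share the results.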

For example, suppose a software product has 10,000 users, all of whom are concerned about managing vulnerabilities due to components in the product. Let’s say there are low cost, easy-to-use, commercially supported tools available that will perform the required analysis, so the cost of tooling is not an important factor here (hey, a guy can fantasize, can’t he?). And assume the users have all been utilizing these tools for a long time, so they don’t need to “learn on the job” while performing this analysis.

Now, let’s suppose that performing the required analysis across the useful life of the software requires five hours of time for a single version of the product. If all 10,000 users do this, the total cost to them will be 50,000 hours. Ideally, if they all work with the same information from the supplier (i.e. both the SBOM and the VEX documents), they will all end up with exactly the same results for this product: a list of exploitable component vulnerabilities in the product and version, which is updated daily to reflect new vulnerabilities found in a major vulnerability database and new VEX documents received from the supplier.

Now, let’s suppose the supplier performs this same analysis themselves, using the same tool as their customers do (in fact, in my opinion, the supplier would be negligent if they weren’t performing this analysis themselves, at least daily and perhaps more often). They will also spend five hours on this and achieve the same results as each of their 10,000 customers.

Here’s the hard question: Which is more, 50,000 hours or five hours? You’re correct! 50,000 hours is more than five hours.

Now, here is the even harder question: How could it ever make sense for a supplier to require each of its customers to perform an analysis that the supplier could perform just as cheaply on its own, while simply distributing the results to their customers (probably in a customer portal, so they don’t have to push any documents out)?

And here’s the answer to that even harder question: It probably never makes sense for a supplier not to do this analysis themselves and make the results available to their customers. So why is everybody in the (now-expired) NTIA initiative and the current CISA SBOM initiative talking as if the supplier’s only responsibility is to toss the SBOM over the wall to their customer and let the customer figure out what to do from there?

Beats me.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Tuesday, July 11, 2023

Excuse me if I don’t get excited about the FDA’s new SBOM “regulation”

When the FDA finally received authority to regulate cybersecurity in medical devices (granted in the omnibus spending bill at the end of 2022, although it had been spelled out in an earlier targeted bill called the PATCH Act), the phrase that seemed to be on everybody’s lips was “game changer”. This was because one of the likely consequences of the FDA’s new authority (it wasn’t directly mandated by the bill itself) is that they will require medical device makers (MDMs, in industry parlance) to provide a software bill of materials (SBOM) with their “pre-market submissions”.

The latter refers to a package of documentation that – if the MDM has done their homework correctly – will assure the FDA that the device the MDM seeks permission to market to hospitals or other end users is both safe (which has always been a criterion for approval) and cybersecure (which is a new criterion, due to passage of the Omnibus Bill). This provision (for a cybersecurity review, not just an SBOM) came into effect at the end of March (I believe), but the FDA said they won’t enforce it until October 1 – although they’re requiring submission of an SBOM now, and they’ll have a discussion with the MDM about any shortcomings they find.

What will actually be required come October 1? The MDM will be required to submit a single SBOM for their device. It will be scrutinized as part of the review of the entire submission, although no criteria have yet been stated for what will be considered an acceptable SBOM. Most importantly, the SBOM will never be shown to any person or organization outside of the FDA, including any customers or potential customers of the device.

Folks, this is the big “first SBOM regulation” that everybody is so excited about! Of course, it’s hard to see how anybody would get excited about just that. The only reason people can even use the phrase “game changer” is that the FDA hasn’t released any guidelines for what should be in an SBOM, how often it should be released, who should receive it, etc. A lot of people, most of whom have an economic stake in MDMs being forced to utilize (insert name of startup services vendor or startup cybersecurity tool vendor here) to help them produce and distribute SBOMs, have worked mightily at convincing themselves that this one small step will inevitably lead within months (or at least before their seed funding runs out) to industries of all stripes facing onerous regulations that will cause them to start banging on their door, open checkbooks in hand, begging to be allowed to buy their products or services. A true “game changer”.

I don’t deny that these people have achieved a lot of success – in convincing themselves of this quite dubious proposition. However, I have no idea what game they’re talking about changing, unless it’s TiddlyWinks. I’ve been working in the cyber regulation field – specifically in NERC CIP, which is no game at all – for a long time, and I’ve noticed one funny thing about regulation: The organizations that are being regulated and face onerous fines for violations don’t take kindly to being told to comply with a list of “requirements” that are poorly worded, based on ambiguous terms, and built on assumptions that appear to be taken from The Chronicles of Narnia.[i] They tend to push back and demand clarification or wholesale rewriting of any requirement that’s ambiguous or misconceived. And if the agency that imposed the requirements pushes forward and implements the objectionable regulations, any penalties it levies are likely to be immediately reversed by highly skeptical judges, who will issue strongly worded opinions suggesting that perhaps whoever drafted those regulations should consider a career change to shoe sales.

In other words, if any game at all is going to be changed on October 1, whatever additional “requirements” are imposed by the FDA’s guidelines to be issued in September will need to be clear, practical, and based on an understanding of what is in fact possible as of October 1, 2023. They cannot be based on what some person wishes were in place, with little concern for whether that’s actually the case.

Moreover, the fact that the September guidelines won’t be requirements raises the question whether they will have any impact at all. However, I don’t deny that an agency with as much power over MDMs as the FDA has will probably meet with a remarkable “compliance” rate with these “guidelines”. After all, the executives of MDMs are (presumably) well paid to develop a keen understanding regarding on which side their bread is buttered.

What about other industries? Will they immediately start to require SBOMs from their suppliers? Perhaps. As long as there’s a federal agency that has the authority today to impose mandatory cybersecurity (not just safety) requirements on vendors to the industry. And how many industries are blessed (?) with such an agency today? As I discussed in this blog post, the only industries that I know of in which a federal agency very likely has that power now are nuclear power and the military. For any other industry, “changing the game” will first require putting in place an agency that will be staffed by experts (both in cybersecurity in general and in the specific circumstances of the industry in question), that will be granted all the authority required to enforce whatever regulations they determine to be necessary, and will act without any hint of partisan game-playing, as well as…hello, did somebody just cut the connection?

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Note that I’m not implying here that the NERC CIP requirements are overly ambiguous, unrealistic, or anything like that. The fact that the CIP standards, like all NERC reliability standards, are drafted over a period of literally years by teams composed of subject matter experts from the utilities and other entities being regulated, that they’re submitted to a series of votes – almost always at least four – by all NERC members (which can include the general public and other non-participants in the industry), and that they’re reviewed scrupulously by the Federal Energy Regulatory Commission (FERC) before they’re approved and implemented, means they will never be poorly thought out, whatever other problems they may have.

Friday, July 7, 2023

Will NERC CIP Medium and High impact systems ever be allowed in the cloud?

To my many readers who don’t know much if anything about NERC CIP, I’ll explain the title of this post: A large number of important systems that operate the North American power grid are indirectly forbidden from being located in the cloud, because of the nature of the current NERC CIP requirements – which no cloud provider could ever comply with. 

It seems that every few years, there’s a lot of talk about moving Medium and High impact BES Cyber Systems (BCS) into the cloud. The topic keeps coming up because doing so would undoubtedly be much more efficient and cost-effective, and probably more secure, not less. The problem, however, is that any utility that did this wouldn’t be able to prove compliance with a huge percentage of the CIP requirements.

I have believed for a long time that Medium and High impact BCS will never be allowed in the cloud until the whole CIP compliance regime, which is now based on compliance for individual cyber assets (both physical and virtual), changes to being based on compliance for systems (i.e., the BCS, not the individual BES Cyber Asset or BCA, needs to be the foundation of CIP compliance).

Ironically, the CIP Modifications drafting team outlined almost exactly this approach in 2018 in one or two webinars and started to work on redrafting the CIP requirements as needed to implement it. However, it seems that effort got set aside, perhaps because a lot of NERC entities have a substantial investment in compliance with the CIP standards as they are – which they’re reluctant to throw away (I might well agree with them, were I in their shoes). The team then turned to the more conventional approach to virtualization (i.e., basing it on cyber assets, both virtual and physical), which it continues to pursue today. See this post for a description of the unhappy 2018 experience.

There is a proposal being circulated now that would essentially create a “parallel CIP” for Medium and High impact BCS that are in the cloud (Low impact BCS can be there now without a problem, as far as I know). Of course, compliance will be much easier for the NERC entities that pursue that approach, since their evidence in all cases will probably be the cloud service provider’s (CSP’s) FedRAMP certification. The inevitable result of implementing this will be that all Medium and High impact Control Centers (and perhaps even parts of substations) will be moved to the cloud, if it’s at all possible for the utilities to do that. Obviously, that might make the grid more vulnerable to cyberattack, not less.

Two of my posts from 2021 flesh this out: one from August and one from November.

I don’t see any way to change this situation, other than starting to contemplate wholesale changes to NERC CIP. But I currently doubt there’s the will to do that.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.