Monday, January 1, 2024

NERC CIP: The two faces of BCSI in the cloud


It’s well known in the NERC CIP community that as of today, January 1, BES Cyber System Information (BCSI) is finally “legal” in the cloud. However, it isn’t as well known (but it will be soon, I’m sure) that a huge glitch has appeared that may well take away – at least for now – the majority of the benefits the NERC community was expecting to derive from this development. And it will surprise nobody who has been working in the NERC CIP area for a long time that this isn’t due to some big, fundamental problem like the cloud suddenly being found to be insecure. As usual, the problem has little to do with security and everything to do with the wording of one of the requirements, which as usual has produced consequences completely unintended by the members of the drafting team that wrote the requirement.

I’ve only seen one mention of this problem in writing – that’s in this post I wrote in early December (the author told me he stands by what he said in that post). Since I managed to bury the discussion of this problem far down in the post, I’ll elaborate on it now: The problem with BCSI not being allowed in the cloud before today was never due to a question of where NERC entities can store BCSI. If that were the only problem, few NERC entities would have cared that BCSI wasn’t allowed in the cloud. After all, there are lots of secure – and very inexpensive – ways to store BCSI on premises.

The real problem has always been that the effective “ban” on BCSI (which was itself caused by three words in previous versions of CIP-004: “physical storage locations”) turned out also to be an effective “ban” on software-as-a-service (SaaS) in the cloud – and that problem is getting worse all the time. There are lots of cloud applications that NERC entities with medium or high impact BES Cyber Systems would like to utilize, because they are more functional and/or less expensive than their on-premises incarnations. These cloud applications have been considered off limits until today.

Even worse, many applications are now not available at all (or won’t be soon) anywhere but the cloud. More and more notices are going out from software suppliers that they will:

1.      Freeze development of their on-premises version at the current version and support all future functionality upgrades only in the cloud; or

2.      Offer certain features only in the cloud from now on; or (worst of all)

3.      Phase out their on-premises version altogether over the next year, and only offer the software in the cloud after that.

The reason why use of SaaS was verboten in medium and high impact NERC CIP environments wasn’t that SaaS apps somehow met the definition of BES Cyber System (BCS) or Electronic Access Control or Monitoring System (EACMS); they can’t meet those definitions, because if they did, they would need to be installed within a Physical Security Perimeter and an Electronic Security Perimeter – which is currently impossible for cloud-based systems.

Making BCS and EACMS “legal” will require a fundamental rewrite of many or most of the current CIP requirements (fortunately, that process finally took a baby step forward in December, but it will probably be at least three years before those rewritten requirements come into effect). But all along, it has been thought/hoped that the BCSI and SaaS problems would be completely solved by the fairly simple revisions found in CIP-011-3 R1 and R2, as well as the inclusion of R6 in CIP-004-7.

The only reason why use of SaaS in medium and high impact CIP environments hasn’t been allowed until today is simply that a SaaS app that needed to use BCSI couldn’t have access to it. In other words, the real reason why FERC’s 2022 approval of CIP-004-7 and CIP-011-3 was significant wasn’t that BCSI itself would be “allowed” in the cloud, but that apps that needed to use BCSI would finally be able to operate in the cloud.

However, I heard in late November that some CIP auditors were becoming concerned with the wording of the new CIP-004-7 R6, and specifically the wording of these two sentences in the first paragraph of R6:

To be considered access to BCSI in the context of this requirement, an individual has both the ability to obtain and use BCSI. Provisioned access is to be considered the result of the specific actions taken to provide an individual(s) the means to access BCSI (e.g., may include physical keys or access cards, user accounts and associated rights and privileges, encryption keys).

The three Parts of R6 – R6.1, R6.2 and R6.3 – describe actions that a NERC entity now needs to take with respect to any individual who has “provisioned access” to BCSI. “Provisioned access” means the individual can obtain and use BCSI; with encrypted data, that usually means the individual would have to have access to the encryption keys (without the keys, the individual could obtain the BCSI but not use it). Such individuals need to:

1.      Per R6.1, be specifically authorized by the NERC entity to have provisioned access to the entity’s BCSI; and

2.      Per R6.2, be subject to verification every 15 months that they have an authorization record and that they still need provisioned access to BCSI to do their job; and

3.      Per R6.3, have their “ability to use provisioned access to BCSI” removed by the end of the next calendar day after a “termination action” (which could be just the individual’s resignation or an internal transfer; it doesn’t have to mean the person was fired).

It was always understood that, starting today, if a CSP employee who manages a SaaS app were to need provisioned access to BCSI, they would be subject to these three requirement parts. In turn, this would mean the CSP would have to furnish evidence to the NERC entity that these actions were taken in every instance. It was acknowledged that at least some CSPs would refuse to even consider doing this. But since it seemed unlikely that CSP employees would ever need provisioned access to BCSI used by a SaaS app, NERC entities and SaaS providers were not worried about R6.

However (cue the ominous music), it seems somebody within the NERC ERO realized that a SaaS app won’t be able to operate on BCSI if it is encrypted, since the app itself can’t decrypt the data. Thus, some person who works for the CSP will need access to the encryption keys for the BCSI for the short time that it takes to decrypt the BCSI and feed it into the app. During this short time, that individual could conceivably copy the BCSI to a thumb drive or some other device for nefarious purposes. Therefore, that person will be subject to CIP-004-7 R6.1, R6.2 and R6.3; the CSP will need to provide the NERC entity evidence that all three of these Requirement Parts were complied with in all instances.
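To make the issue concrete, here is a minimal sketch in Python (assuming the widely used cryptography library; the key handling and the sample data are purely hypothetical) of why encrypted BCSI is useless to the SaaS app, or to anyone else, without the key:

```python
# Minimal sketch: why encrypted BCSI is unusable without the key.
# Assumes the 'cryptography' library; all names and data are hypothetical.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # held by the NERC entity or its KMS
nonce = os.urandom(12)

bcsi = b"ESP firewall ruleset for Substation 12"  # hypothetical BCSI
ciphertext = AESGCM(key).encrypt(nonce, bcsi, None)

# The CSP can store the ciphertext indefinitely; without the key it's noise.
# But the moment a CSP employee is handed the key to feed the SaaS app...
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == bcsi
# ...that employee can both "obtain and use" the BCSI, which is exactly
# what CIP-004-7 R6 treats as provisioned access.
```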

R6.2 and R6.3 probably don’t pose a big problem for most CSPs, since the CSP likely already has pre-formatted reports showing their policy regarding key access, as well as evidence showing that it has been followed in all instances where BCSI was involved. However, R6.1 is a different story, because it requires the NERC entity to authorize certain individuals by name (not just by role) in order for them to be granted provisioned access to BCSI. At least some CSPs may have a problem with this.

If you haven’t been steeped in NERC CIP for a long time and/or if you have experience complying with any other cybersecurity regulations, you might wonder why this is a problem at all. For one thing, it’s very unlikely that a CSP employee would have any interest in obtaining BCSI; it can’t be sold in some online marketplace and it’s also not at all likely that a CSP employee would be able to utilize the BCSI for any attack on critical infrastructure, no matter how much malice they harbored in their evil heart.

And even if you stipulate that this is really a threat, why should this pose a compliance problem? Couldn’t the NERC entity just give the CSP permission to allow a certain small set of employees (rather than specifically named individuals, as the strict language of the requirement demands) access to the encryption keys for BCSI, only for the few and short occasions when one of them needs to decrypt BCSI to feed it into the SaaS app?

Most importantly, are these two considerations by themselves so serious that NERC entities with high and/or medium impact BES Cyber Systems must continue to be prevented from utilizing SaaS apps that require access to BCSI? The answer to this question seems clear: No. Yet, I’ve heard this is exactly what SaaS providers and NERC entities have been told. Of course, if the CSP were willing to promise to provide the documentation necessary for the entity to comply with R6.1, R6.2 and R6.3, there would be a workaround for this problem. But I’ve heard that so far, no CSPs have agreed to do this; at the moment, this is a “showstopper” problem.

I want to point out that, since it will be many months before audits start for CIP-004-7 and CIP-011-3, it is possible that some workaround will be found for this problem that the NERC auditors will agree to – although my guess is at least some NERC entities with high or medium impact BCS won’t even try to use SaaS until they’re comfortable that such a workaround is available. That will probably require someone at NERC being willing to bend the rules in some way – although that will never be acknowledged to be the case, of course.

“Surely,” you might wonder, “…a way can be found to fix this problem temporarily, at least until a Standards Drafting Team can fix it for good.” I’ve wondered the same thing. Now we all need to wonder out loud. The message needs to be heard.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.

Wednesday, December 27, 2023

Is this the best approach to device cybersecurity regulation?


For a long time, I have been pooh-poohing the idea that the best way to promote good cybersecurity practices (including use of software bills of materials, but in no way limited to that) is to regulate those practices into existence, if they’re not already being followed. In other words, trying to beat somebody over the head until they finally start doing what you want them to do really doesn’t help the cause at all. It’s only when existing practices threaten to get out of hand and cause damage that there is a reason to regulate a new industry (e.g., AI).

However, when I – and probably most of you – think of regulation, we think of regulation with a negative function: deterring certain behavior with the threat of penalties. But there’s another type of regulation that’s followed much less often, despite (or perhaps because of) the fact that it has a positive function: incentivizing certain positive behaviors, rather than disincentivizing negative ones. In other words, it’s the carrot approach to regulation, not the stick approach.

When I think of positive approaches, I usually think of my elementary school classes, where I would get to wear a gold star on my forehead for turning in my homework on time or getting all the answers correct on a spelling test. I usually don’t think of a positive approach as something that will work with adults; most of us (perhaps based on our perception of ourselves) think that the only way to motivate adults to do the right thing is to make sure it’s too painful/expensive/humiliating/etc. to do the wrong thing.

However, in cases where negative regulation isn’t likely to work, such as encouraging practices that aren’t being followed much today, perhaps positive regulation is really the best way to accomplish the goal.

This is probably why I’ve come to like the idea of a device cybersecurity labeling program, which I at first thought was a strange way to encourage good cybersecurity practices. Most importantly, the US is going to test that idea starting in 2025, when the Federal Communications Commission’s (FCC) “Cyber Trust Mark” program will come into effect. Here is a brief history of how this program came to be:

Executive Order (EO) 14028, “Improving the Nation’s Cybersecurity”, was issued in May, 2021. Section 4(t) on page 19 reads:

Within 270 days of the date of this order, the Secretary of Commerce acting through the Director of NIST, in coordination with the Chair of the Federal Trade Commission (FTC) and representatives of other agencies as the Director of NIST deems appropriate, shall identify IoT cybersecurity criteria for a consumer labeling program, and shall consider whether such a consumer labeling program may be operated in conjunction with or modeled after any similar existing government programs consistent with applicable law. The criteria shall reflect increasingly comprehensive levels of testing and assessment that a product may have undergone and shall use or be compatible with existing labeling schemes that manufacturers use to inform consumers about the security of their products. The Director of NIST shall examine all relevant information, labeling, and incentive programs and employ best practices. This review shall focus on ease of use for consumers and a determination of what measures can be taken to maximize manufacturer participation.

Of course, this paragraph doesn’t explain where the idea of “device labeling” came from, but it’s an idea for regulation of devices that has been around for a long time. In the US, the EnergyStar program (which is ongoing) has been quite successful in encouraging consumers to use more energy efficient appliances. And in other countries, there have been device cybersecurity labeling programs in Finland, Germany and Singapore that are considered successes, although they all had fairly modest goals. More importantly, in Europe the idea of testing devices (along with assessing the manufacturer’s security practices) and issuing a certification of the device (i.e., a “label”) is much more prevalent; there are organizations (“testing labs”) that provide this testing and issue labels, including a French client of mine, Red Alert Labs.

Note that the EO didn’t mandate any particular program for labeling, since the drafters of this section had enough wisdom not to think they could figure everything out on their own. Instead, they ordered NIST to do two things:

1.      Design “criteria” for the label – i.e., the cybersecurity framework against which it would be tested. Of course, NIST has been developing cybersecurity frameworks for many years and in many different areas of security. Asking them to do this was like asking a fish to swim. They came out with what I believe is an excellent framework, NISTIR 8425.

2.      “…consider whether such a consumer labeling program may be operated in conjunction with or modeled after any similar existing government programs consistent with applicable law.” Note that the EO didn’t order NIST to develop the labeling program, but just to consider whether there are any existing programs it can be modeled after. However, this didn’t prevent NIST from trying their hand at designing the program. The result proved that...well, NIST is very good at designing cybersecurity frameworks, and perhaps they should stick to that calling. Sometime in early 2022, they seem to have abandoned the idea of designing the program, and decided just to concentrate on the framework.

When NIST introduced NISTIR 8425 in the fall of 2022, they didn’t even mention a labeling program with it; they simply presented it as a framework for securing IoT devices. While they said it was for “consumer” devices – which is of course what the EO had ordered they address – they made it clear in the NISTIR (see the blog post linked above) that this could apply equally well to business devices; in other words, “Don’t look for us to come out with a separate framework for business devices. This one is good for both.” This doesn’t mean there won’t be more advanced frameworks in the future for particular types of business devices, etc. But it does mean that NISTIR 8425 can form a good foundation for both consumer and business devices, which I agree with.

From the fall of 2022 until the spring of this year, there was little hard news about the device labeling program. There were one or two White House meetings on securing consumer devices, which included an announcement that a labeling program would be announced by the summer of 2023. Sure enough, this July, the announcement came out.

As was inevitable (given that NIST’s suggestions for the labeling program itself weren’t what was needed), the announcement didn’t provide details on the program. But it did announce the agency that would run the program, which was a surprise to me and others. Since the EO had mentioned the Federal Trade Commission and since the FTC has a lot of experience regulating privacy practices in consumer devices, it seemed very likely the FTC would be chosen to run this program.

However, it turned out to be the Federal Communications Commission (FCC) that was anointed to run the labeling program. While this was a head-scratcher for me, I kept an open mind and waited to see what the FCC would come up with. They came out with a Notice of Proposed Rulemaking in August. While I didn’t consider this to be a work of unsurpassed brilliance, I appreciated – once again – that it showed humility and made clear that the FCC wants to learn from IoT manufacturers and users on the best way to put together a device labeling program. The FCC scheduled a comment period, which expired several months ago. It’s currently expected that they’ll mull over the comments they received, and come out with a new document (perhaps another NOPR?) in January 2024.

While nothing in the NOPR should be considered as the final word on the labeling program, here are some of the most important points it made:

1.      The Introduction to the NOPR includes this sentence: “We anticipate that devices or products bearing the Commission’s cybersecurity label will be valued by consumers, particularly by those who may otherwise have difficulty determining whether a product they are thinking of buying meets basic security standards.” This is a great statement of the “carrot” approach to regulation: “Mr/Ms device manufacturer, you’re free to ignore the cybersecurity labeling program altogether if you don’t see any value in it. However, if you do see value in it and you decide to get the label, we expect you will have greater success selling to consumers than will your competitors who don’t have the label. The choice is yours.” My former economics professor Milton Friedman couldn’t have said this any better.

2.      I want to point out that this statement, and indeed the whole tone of the NOPR, is in marked contrast to a statement of Anne Neuberger of the National Security Council at the July announcement. She said that the labeling program would become a way for “…Americans to confidently identify which internet and Bluetooth-connected devices are cybersecure”. Unfortunately, Ms. Neuberger doesn’t seem to have gotten the memo that said a labeling program needs to provide a positive incentive to follow the path of (cyber) virtue, not require public shaming (perhaps the stocks?) for those who don’t want to obtain the label, whatever their reasons. She also didn’t see the memo that said there’s no way the federal government, the Vatican, the NSC, or any other entity can determine whether a product is “cybersecure”. Every product is cybersecure until it gets hacked, at which point it isn’t. The best that mere mortals can say on this issue is that the device manufacturer has practices in place that should normally be conducive to good security, and we hope they’ll keep them up.

Besides doing some things right, the NOPR makes (or fails to make) some statements that concern me, although it’s possible they will change in the document likely to be released in January:

1.      Like the criteria in almost every NIST framework, the criteria in NISTIR 8425 are risk-based (although NIST’s term for that is “outcomes-based”). That is, the criteria set out a risk management goal that the organization being assessed (which in most cases is a federal agency) needs to aim for, not a set of rigidly defined actions that the organization must take or be put to death. However, the FCC doesn’t seem to get this idea, and instead proposes that 8425 be “supplemented” with a set of measurable criteria, since, of course, those are the only criteria that can be audited in an up-or-down manner. It’s the old “Give me something I can measure. I don’t care whether it’s relevant or not” attitude. Fortunately, I think this objection isn’t likely to be sustained.

2.      One of the points I objected to in NIST’s 2021 “discussion draft” on the device labeling program was that, after saying they wanted the criteria for the program to be risk-based, they then contradicted that by saying they wanted a binary (i.e. up-or-down) label. I pointed out in my 2021 post on NIST’s document that proposing a binary label with risk-based criteria is inherently contradictory. Either the risk-based criteria need to go (which would be terrible) or the binary label needs to go. NIST also considered the option of an “informative” label – which doesn’t provide a judgment on the product but points out what can be learned from the assessment (e.g., the manufacturer implements good access controls, but isn’t so great at securing communications with other devices). However, they rejected that idea in 2021.

3.     The problem with the binary label is that it contains no useful information, other than that somebody somehow summarized the manufacturer’s response to the 12 or so questions in NISTIR 8425, and then decided that the manufacturer was good/no good. Why not let the consumer see how the manufacturer answered the questions and then make up their own mind on where the risks lie with this product? Why try to make a decision like that for the consumer? It's like telling them that they really need to buy a blue car. The FCC doesn’t seem to be aware of this contradiction, but my guess is it will become quite obvious later that a binary label doesn’t make any sense, when the criteria for the label are risk-based, as in NISTIR 8425.

4.      The FCC left out what I think is the most important criterion that device manufacturers should follow. Of course, it isn’t surprising that they left it out, since I’m the only person that I know of who is advocating for this. As described in this post, I believe that very few intelligent device manufacturers are reporting vulnerabilities in their devices to CVE.org (which is how they get into the NVD), as at least the larger software suppliers routinely do. The fact is that, since almost all vulnerabilities in software products are only known because the supplier itself reported them, a lot of devices will appear to have an “all clear” (i.e., no vulnerabilities) from the NVD, even though they may in fact have many thousands of vulnerabilities.

5.      The NOPR talks about having an IoT device registry that will list, among other things, all outstanding unpatched vulnerabilities found in a device. The FCC obviously thinks this will ensure the manufacturers have better security, when in fact it will only ensure one thing: that the manufacturers won’t report vulnerabilities at all, which seems to be the rule nowadays. It’s a waste of time to require manufacturers to clean up their vulnerabilities, when the requirement itself ensures they won’t report any vulnerabilities that need to be cleaned up in the first place. This isn’t an easy problem to address, but it needs to be addressed before there’s a big focus on fixing vulnerabilities, whether it’s in the FCC’s device labeling program, the EU’s Cyber Resilience Act, or any other regulation.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.


Monday, December 18, 2023

This is really sad news


In the immediate aftermath of the SolarWinds attacks being announced in December 2020, I wrote a post based on a New York Times article I’d just read (although the day I wrote the post, January 6, 2021, turned out to be in the news for another reason). The article intimated that V. Putin & Co. had pulled off another audacious supply chain attack; it was supposedly achieved by compromising a software development product called TeamCity, that is sold by the company JetBrains. That company was founded by three Russian software developers in Prague; however, it still has operations in Russia.

In my post, I unfortunately stated that it was possible that the SolarWinds attack had been perpetrated by the Russians, working through a compromised copy of TeamCity in use by SolarWinds (which is a JetBrains customer, along with many other software developers such as Siemens, Google, Hewlett-Packard and Citibank). That hadn’t been explicitly stated in the NYT article, and I was remiss for not reading it carefully enough.

Two weeks later (and a few days after I’d put up another post that made the same suggestion), I received a politely worded email from a person in Moscow who represents JetBrains. They pointed out that there was no evidence that TeamCity had been the launch point for the SolarWinds attack and asked that I apologize. Of course, I apologized in my post.

However, yesterday, almost three years after that exchange, I was very disappointed to learn that what I mistakenly stated three years ago has now come to pass: TeamCity instances have been compromised recently, most likely in order to launch supply chain attacks on the customers of JetBrains’ software developer customers (which would presumably follow something like the model of the SolarWinds attacks). The Russian Foreign Intelligence Service (SVR) is now exploiting a critical vulnerability that JetBrains has issued a patch for, which – of course – hasn’t been universally applied. More than 100 devices running TeamCity have been compromised, although so far the attackers haven’t launched any supply chain attacks. And just for good measure, it seems the North Koreans are attacking the same vulnerability.

In neither of these two incidents did JetBrains do anything wrong, other than perhaps the fact that the founders of the company didn’t carefully choose the country they would be born in. Let that be a lesson to us all!

Quite sad.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.


Saturday, December 16, 2023

Where’s Henry Ford, now that we need him most?


The OWASP SBOM Forum was formed in early 2022 for the purpose of exploring the reasons why software bills of materials (SBOMs) are not being used in any significant way by public and private organizations whose primary activity is not software development (they’re being heavily used by software companies, as well as by the software development arms of other companies; moreover, the rate at which they’re being adopted by those organizations is accelerating).

Early on, we identified the two most important reasons for this situation: the naming problem and uncertainty around VEX. More than a year ago, we put out a paper that set out our position on the biggest component (no pun intended) of the naming problem: how to move the NVD beyond the many issues that make it difficult to find a CPE name for a software component listed in an SBOM. We continue to discuss this issue.

This fall, we decided to tackle the issue with VEX:

1.      The concept of VEX (which stands for Vulnerability Exploitability Exchange) was developed by the NTIA Software Component Transparency Initiative in 2020 to address a specific use case. That use case was described this way in the only document on VEX published by the NTIA: “The primary use cases for VEX are to provide users (e.g., operators, developers, and services providers) additional information on whether a product is impacted by a specific vulnerability in an included component and, if affected, whether there are actions recommended to remediate. In many cases, a vulnerability in an upstream component will not be “exploitable” in the final product for various reasons (e.g., the affected code is not loaded by the compiler, or some inline protections exist elsewhere in the software).”

2.      After the SBOM effort moved under CISA in 2022, many more use cases for VEX were entertained by the VEX working group. In fact, the only definition published by the CISA VEX group (in this document on page 2) is “A VEX document is a binding of product information, vulnerability information, and the relevant status details relating them.” In other words, any document that ties vulnerabilities to products in any way is a VEX. Of course, this includes any vulnerability notification ever issued, machine readable or not, as well as any vulnerability notification that will ever be issued.

Leaving aside the question of what value there is in having such a broad definition, what we noticed was that this makes it impossible to have a consumer tool that can even parse a VEX document, let alone utilize the information from the document to advance an organization’s software vulnerability management efforts. The developer would need to address a huge (approaching astronomical) number of use cases, variations on those use cases, and combinations of all the use cases and variations; no developer is interested in devoting the rest of their life to developing a single product, no matter how needed. To quote Frederick the Great, “He who protects everything protects nothing.”

Once you state the problem this way, the solution becomes clear: Focus on just one use case and develop a specification that can be used to design proof of concept tooling for both production and consumption of documents based on that use case. Then build and test the tools, to make sure the production tooling can build documents that will always be readable by the consumption tooling, and that the consumption tooling will be able to read any document produced by the production tooling.

Even though there are a few tools that purport to produce or consume “VEX documents” today, there are certainly none that meet the above test. Once the initial use case has been addressed in this way, it will be possible to repeat that effort for other use cases. This follows the sage advice embodied in the answer to the question of how to eat an elephant: One bite at a time.

For the first VEX use case we’ll address, we chose the one we think is by far the most important: notifying software users that certain component vulnerabilities they may have discovered by running the supplier’s most recent SBOM through a tool like Dependency Track or Daggerboard are not exploitable in the product, even though they’re exploitable in the component considered as a standalone product. This means they pose no danger to the user organization. The fact that there is currently no completely automated way to provide this information to users is inhibiting many suppliers from producing SBOMs at all.

If it were sufficient just to put the VEX information in a PDF and email it to their customers, there would be no need for a supplier to provide the information in a machine readable document. However, given that trying to manage component vulnerabilities “by hand” – e.g., by using spreadsheets – would quickly turn into a nightmare, and that SBOMs themselves are machine readable, the VEX must be machine readable as well.

However, that’s hardly all that’s required. The information from the VEX document must be utilized as part of an automated toolchain that a) starts with ingesting an SBOM and looking up component vulnerabilities like Dependency Track does, b) continuously updates the exploitability status of the component vulnerabilities as new VEX documents are received, and c) on request from the user, provides a recently (at least daily) updated list of the exploitable component vulnerabilities in a format needed to ingest the list into a vulnerability or patch management tool. 
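To illustrate the core of step b), here is a minimal sketch in Python of the status-tracking logic such a toolchain would need. The field names, statuses and component/CVE pairings are my own illustrative assumptions, not any published VEX schema:

```python
# Minimal sketch of step b): maintain exploitability status for component
# vulnerabilities as VEX documents arrive. Field names are hypothetical,
# not any published VEX schema; the component/CVE pairs are illustrative.
from typing import Dict, Tuple

# (component purl, CVE id) -> status; seeded from the SBOM lookup in step a)
status: Dict[Tuple[str, str], str] = {
    ("pkg:npm/lodash@4.17.20", "CVE-2020-8203"): "under_investigation",
    ("pkg:pypi/requests@2.25.0", "CVE-2023-32681"): "under_investigation",
}

def apply_vex(vex_statements: list) -> None:
    """Overwrite the stored status with the latest VEX statement for each pair."""
    for stmt in vex_statements:
        status[(stmt["component"], stmt["cve"])] = stmt["status"]

def exploitable_report() -> list:
    """Step c): the daily list fed into a vulnerability/patch management tool."""
    return [{"component": comp, "cve": cve}
            for (comp, cve), s in status.items() if s == "affected"]

apply_vex([
    {"component": "pkg:npm/lodash@4.17.20", "cve": "CVE-2020-8203",
     "status": "not_affected"},
    {"component": "pkg:pypi/requests@2.25.0", "cve": "CVE-2023-32681",
     "status": "affected"},
])
print(exploitable_report())  # only the exploitable vulnerability remains
```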

Currently no such tools or toolchains exist, and it will be years before they’ll exist in a form that can be utilized by an end user organization. This is probably due to the need to work around the naming problem and some other issues (suppliers producing SBOMs work around these issues every day. They have to). However, we’re sure that third party service providers will become interested in operating one of these tools as a service to end users, who will just download the data provided in step c) above and not have to worry about steps a) and b) at all.

These service providers will need to benefit from full automation of the above three steps; otherwise, they might not be profitable. This is why having a standard, constrained VEX format is so important; only if VEXes are produced in that exact format, and the consumption tool can be built to expect that format and no other, will it be possible to have true automation.

But standardizing the format alone isn’t enough. The processes followed by the supplier need to be strictly constrained as well. For example, many or most of the components found in an SBOM produced by fully automatic means will probably not have a CPE or purl identifier, yet at the same time, no major vulnerability database is based on any other type of software identifier. This means the supplier should include a CPE or purl for every component listed in the SBOM.

However, that is easier said than done, since in many cases it will not be possible to locate a CPE for a proprietary component[i]. What should the supplier do when they can’t find a CPE for a proprietary component? Should they leave the component out of the SBOM altogether? Should they leave it in, but enter “not available” as the identifier? Should they simply enter whatever identifier they have for the component, in the hope that a user who’s really concerned about looking up vulnerabilities for every component will at least have something to search on?

All three of those answers would probably be acceptable, at least for certain users. There are other questions like this as well. However, we need to answer each of those questions in some way, if we want to make production and consumption of VEX documents a completely automated process.
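For open source components, at least, creating a purl is usually mechanical (see the footnote below). Here is a minimal sketch using the packageurl-python library; the component shown is hypothetical:

```python
# Sketch: constructing a purl for an open source component, using the
# packageurl-python library. The component shown is hypothetical.
from packageurl import PackageURL

purl = PackageURL(type="npm", name="lodash", version="4.17.21")
print(purl.to_string())  # pkg:npm/lodash@4.17.21

# A proprietary component has no package registry to anchor a purl type to,
# and often no CPE either; that is exactly the "what should the supplier
# do?" question raised above.
```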

This is where Henry Ford comes in. In 1908, after having produced fancier (and non-standardized) automobiles for five years, he dropped all previous models and introduced the Model T, a “simple, inexpensive car most people could own.” Initially, the Model T was available in multiple colors. However, in 1914 he introduced a huge gain in efficiency by going to a moving production line. One tradeoff for that was he needed to limit the color options to one (to avoid having to clean the line between producing models with different colors).

In his autobiography published in 1922, he quoted himself as having said in an internal meeting, “Any customer can have a car painted any color that he wants so long as it is black.” There’s no way to prove he actually said that, but the lesson is clear: Automating a process often requires making a seemingly arbitrary choice among equally valid options. You need to stop deliberating and simply Choose One.

That’s how our group can achieve its goal of designing tooling for a completely automated process of delivering and utilizing both SBOMs and VEX documents. While there are many options available for both types of documents (in terms of use cases and variants on those use cases), we need to Choose One and move forward.

It's time to bring SBOM production and usage into the Model T era!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.


[i] Because purl doesn’t require a central database, the supplier should usually be able to create a purl for an open source component.

Thursday, December 14, 2023

NERC takes the first step toward the cloud


Yesterday, the NERC Standards Committee approved a Standards Authorization Request which is meant to lead to a complete “normalization” of cloud use by entities subject to the NERC CIP standards. Even though BES Cyber System Information (BCSI) in the cloud will be “legal” on January 1, 2024, deploying medium or high impact BES Cyber Systems or Electronic Access Control or Monitoring Systems (EACMS) in the cloud is currently all but impossible, if an entity wants to maintain compliance with all the NERC CIP requirements. The new SAR is intended to lead to a new standard or standards (although more than just new standards may be required), which will remove these final barriers.

You can find the SAR on pages 56-61 of this agenda package from the meeting. The SAR doesn’t seem to have changed much from when I reviewed it in this post, so I won’t go over the same ground again. It now looks like the earliest the complete fix for the cloud will be in place is early 2028, and more likely later than that; if you think it should come much more quickly, you should make your opinion known through your company, your Region, etc.

I certainly think these changes are long overdue, and the idea that it will take four or more years to implement them – God willing, and the creek don’t rise – is to me quite disappointing.

But much more disappointing is that a current problem that everyone thought would be solved on January 1 is in fact not solved at all. I’m referring to the fact that use of SaaS (software as a service) – which is currently off limits for entities with medium and/or high impact BES Cyber Systems, but was supposed to become “legal” when BCSI in the cloud is allowed on January 1 – is now as far away as ever from being approved (I explained the reasons for that sad situation in this post).

I had hoped that the new SAR would include a directive for the standards drafting team to take up this new problem as well, hopefully before their other business. However, the new SAR is silent on the problem. This means there will need to be a new SAR and a new SDT dedicated to just this problem. At face value, it seems to me that it should be fixable with just a few word tweaks, but I’m told that it’s much harder than that. Regardless, it needs to get done, starting with a new SAR.

Since NERC has faced other situations in which it had to deliver a narrowly circumscribed solution to a difficult CIP problem very quickly, I don’t think it’s expecting too much to suggest the revised standard (and really just one requirement) be delivered within one year. At least one new standard was put in place in only three months, although since that standard had to be redrafted soon afterwards, I don’t recommend pushing it that fast. But 6-9 months to have the new standard approved by FERC and ready to implement (no implementation period will be needed) doesn’t strike me as very hard at all.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.


Monday, December 11, 2023

Few intelligent device manufacturers are reporting vulnerabilities. At all. Of course, that’s one way to have a clean record…



On behalf of my French client Red Alert Labs, I have been following developments in the US IoT device cybersecurity labeling program. This program was ordered (in very general terms) by Executive Order 14028 of May 2021. As required by the EO, NIST developed a good set of guidelines for IoT device security in 2022: NISTIR 8425. In July, the White House announced that the Federal Communications Commission (FCC) will oversee this program and that it will be based on NISTIR 8425. It will come into effect at the end of 2024, if then.

The FCC issued a Notice of Proposed Rulemaking (NPRM) for the program in August 2023. This was a preliminary document. There have been two rounds of comments on it, and the FCC will probably re-issue the NPRM, with changes based on those comments, in early 2024. The program will provide a certification (label) for consumer IoT devices, which will be based on an assessment conducted by an approved “CyberLAB”. While this is a voluntary program and therefore isn’t a “regulation”, the FCC makes clear that they expect the certification to be considered a valuable aid to marketing an IoT product, in that it will allow a consumer, who may not be a world-class cybersecurity expert, to trust a device they are considering purchasing. In many respects, this program is modeled on the Department of Energy’s successful EnergyStar program (which is still ongoing).

Not too surprisingly, one of the main concerns listed in the NPRM is vulnerabilities that are identified in an IoT device once it is in use. The NPRM wonders what measures the FCC should take to ensure those vulnerabilities (which are inevitable of course, no matter how diligent the manufacturer was in patching vulnerabilities before the device shipped) are dealt with in a timely fashion.

Of course, this is a legitimate concern. However, in the past six months or so, I’ve come to realize that getting a device manufacturer to patch vulnerabilities in their products isn’t the main problem (although the timeliness – or lack thereof – with which those patches are applied is a big problem. I will deal with that in another post soon).

The main problem is that device manufacturers (including manufacturers of consumer IoT devices, Industrial IoT devices, medical devices, etc.) aren’t doing a good job of reporting vulnerabilities in the first place. Since the great majority of vulnerabilities (or at least CVEs) are reported by the supplier of the software or the device that’s affected by the vulnerability, this means that, if the supplier isn’t reporting any vulnerabilities, the device will usually appear to be vulnerability-free in the National Vulnerability Database (NVD). That is, a search for the device will yield the message “There are 0 matching records”. This could either mean that a) the device has never had a vulnerability, or b) it’s loaded with vulnerabilities, but they’ve never been reported. You take your choice.

Unfortunately, the latter interpretation is more frequently correct than the former. In fact, Tom Pace of NetRise reported in a presentation described in this post that he examined one device that had no reported vulnerabilities (the manufacturer had never reported a vulnerability for any of their devices). However, he found that it had over 40,000 vulnerabilities, not 0. Might this have been a rounding error?... I didn’t think so, either.

From what I can see, very few devices - even medical devices - have any reported vulnerabilities in the NVD (although I would like to hear from anyone who has looked at this question). Even worse, since device manufacturers usually don’t issue patches between full device updates, and it seems even some major medical device manufacturers only update the software and firmware in their devices once a year, this means that many if not most vulnerabilities will remain in the device for months before they’re fixed. Meanwhile, the users won’t be notified of the vulnerability, so they can’t at least put in place an alternate mitigation. Needless to say, this isn’t a good situation.

While I mostly blame the manufacturers for this situation, I will say that I’ve never seen any guidelines (from any organization, public or private) on how manufacturers should report device vulnerabilities. I have always assumed, and I believe it’s also been the assumption of people involved with the NTIA and CISA SBOM efforts, that the developers of each software or firmware product (component) installed in a device will report vulnerabilities for their product.

This means that a user who knows what software and firmware components are installed in a device they utilize should be able to find each component in the NVD and learn about any vulnerabilities in the component using a tool like Dependency Track or Daggerboard. The assumption at that point is that the device will be affected by all the component vulnerabilities, minus any vulnerabilities for which the status is later described as “not affected” in a VEX document.

But there are several problems with this assumption:

1.      Few if any intelligent device manufacturers (including medical device manufacturers or MDMs) now regularly provide SBOMs listing the components in the device to their customers. MDMs are required to provide an SBOM with the pre-market information package they must provide to the FDA, but that will never be made available outside of the FDA. The MDM faces no requirement to provide SBOMs to the hospitals that buy their devices (although this is a recommended practice). Of course, without an SBOM, a device user can’t learn what software and firmware components are present in their device. That’s why the FDA announced six or seven years ago that they were going to start requiring SBOMs for devices (although they didn’t list a date).

2.      Even if the user has an up-to-date SBOM for their device, they won’t find any vulnerabilities for the components in the NVD, unless the suppliers of the components regularly report their vulnerabilities to the NVD. For most software and firmware components, this is unlikely to be the case, especially considering that about 90% of software components are open source and many open source projects don’t report vulnerabilities at all.

3.      Even if the user had an SBOM and was able to find component vulnerabilities in the NVD, they still wouldn’t know which of these vulnerabilities are exploitable in the device without regular VEX documents. And I know of no device manufacturer that has issued VEXes in anything but a test case.

Does solving these three problems require that end users hound device manufacturers mercilessly, demanding that the manufacturers a) distribute SBOMs regularly to their device customers; b) turn around and hound their component suppliers to report all vulnerabilities to CVE.org (which then furnishes those reports to the NVD); and finally c) produce a new or updated VEX document as soon as they learn that any component vulnerability is not exploitable in their product?

While in theory this might be the best long-term solution, the fact is that the manufacturers have been regularly requested to do these things for at least a few years. Even if they suddenly started doing these things tomorrow, the users wouldn’t be able to use the information to learn about risks they face, since there are no complete consumer component vulnerability management tools that do this (and if you wonder what a complete consumer component vulnerability management tool does, see this document-in-progress from the OWASP SBOM Forum).

Fortunately, there is a way to avoid all these problems: The device manufacturer should take responsibility for performing these steps itself and report all exploitable vulnerabilities in the device to CVE.org. Specifically, the supplier needs to:

1.      Create an SBOM for their device, which lists every software and firmware product installed in the device. The SBOM needs to be updated whenever there is any change in the software, including major and minor version releases as well as patched versions; for the moment, the manufacturer doesn’t need to release these SBOMs to anyone outside of their organization.

2.      Using an appropriate tool, look up each component in the NVD and record any vulnerabilities listed for the component (a minimal sketch of this lookup appears after this list).

3.      If there are no vulnerabilities listed for a component, contact the supplier of the component and ask if they report vulnerabilities at all now; if not, strongly urge them to do so.

4.      Produce a list of all the component vulnerabilities in the current version of the product. These are the “third party” vulnerabilities.

5.      Identify any vulnerabilities (through scanning or otherwise) in the software that the manufacturer itself has written and installed in the device. These are the “first party” vulnerabilities.

6.      Develop a list that includes all the third party and first party vulnerabilities in the current version of the device. This is the complete set of device vulnerabilities.

7.      Report all these vulnerabilities to CVE.org (working through a CVE Numbering Authority or CNA, which will often be another developer. A complete list of CNAs is available at CVE.org).

8.      Develop and regularly update VEX documents that list the exploitability status of every vulnerability in the current version of the device. Initially, the status of a vulnerability will usually be “under investigation”, but whenever the manufacturer determines that one of the vulnerabilities either affects or doesn’t affect the device itself, they should change the status to “affected” or “not affected” respectively.[i] A new or revised VEX document should be provided to customers whenever the status of one or more vulnerabilities in the current product/version (or in a previous version that is still in use by some customers) has changed.
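As an illustration of steps 2 and 4, here is a minimal Python sketch of a component lookup against the NVD’s public CVE API. The CPE name shown is hypothetical, and a production tool would also need an API key, paging, rate limiting and retries:

```python
# Minimal sketch of steps 2 and 4: look up one component's vulnerabilities
# via the NVD CVE API 2.0. The CPE name is hypothetical; a real tool would
# need an API key, paging, rate limiting and retry handling.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def component_cves(cpe_name: str) -> list:
    """Return the CVE IDs the NVD lists for one component CPE."""
    resp = requests.get(NVD_URL, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

# Hypothetical firmware component installed in the device:
print(component_cves("cpe:2.3:a:busybox:busybox:1.33.0:*:*:*:*:*:*:*"))
```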

The end result of the above steps is that the manufacturer will have a continuously maintained list of all exploitable and non-exploitable vulnerabilities found in the current version of one of their devices. Of course, the manufacturer should already be maintaining a list like this for every version of their product that is in use by customers, since there is no excuse for a device manufacturer or a software developer not knowing what exploitable vulnerabilities are currently present in one of their products.

However, beyond tracking component vulnerabilities for their own product security management purposes, the manufacturer should a) report every exploitable device vulnerability to CVE.org and b) make the list of device vulnerabilities and their status designations (i.e. a VEX document) available to their customers, updating the status designations at least once a month.

There is one caveat to this process: The manufacturer should usually not report an exploitable vulnerability to the NVD or to customers until the vulnerability has been patched and a new version number has been assigned to the patched version. For example, if a vulnerability was found in version 2.4.0 of a device and subsequently patched, the patched version might be named 2.4.1. The report to CVE.org, and the list of exploitable product vulnerabilities that is provided to customers, would describe v2.4.0 as “affected” by the vulnerability but v2.4.1 as “fixed”. Moreover, both the CVE report and the list would include a link to download the patch, so that any user who has not yet applied it can do so.
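Expressed as status records, that example might look something like the sketch below; the field names are generic illustrations rather than any one published VEX format, and the device name, CVE number and URL are made up:

```python
# Sketch of the v2.4.0 / v2.4.1 example as VEX-style status records.
# Field names are generic; the device, CVE number and URL are made up.
import json

statements = [
    {"product": "ExampleDevice v2.4.0", "cve": "CVE-2023-99999",
     "status": "affected",
     "remediation": "upgrade to v2.4.1 (patch: https://example.com/patches/2.4.1)"},
    {"product": "ExampleDevice v2.4.1", "cve": "CVE-2023-99999",
     "status": "fixed"},
]
print(json.dumps(statements, indent=2))
```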

There is also a shortcut that could drastically reduce the number of vulnerabilities that a device manufacturer needs to report to CVE.org. The manufacturer doesn’t need to report non-exploitable vulnerabilities, since they don’t affect the security of the device. Many software security professionals estimate that over 90% of component vulnerabilities aren’t exploitable in a device, meaning that up to 90% of component vulnerabilities don’t need to be reported.

On the other hand, all vulnerabilities, both exploitable and non-exploitable (including patched or “fixed” vulnerabilities) should be reported to customers. Because the reports to CVE.org will require a lot more work than will the VEX documents for customers, the manufacturer should be able to save a lot of time by taking this “shortcut”.

There is one “hole” in the above set of steps, which I have deliberately left for last: How does the manufacturer report vulnerabilities found in the device as a whole? It’s not hard to report a vulnerability in a single software or firmware product that’s installed in the device, but the device itself isn’t a software or firmware product. Other than the software and firmware, what in the device would be subject to vulnerabilities? The sheet metal or plastic that encloses the software and firmware?

This is where a little imagination is called for. Even though each vulnerability in the device is found in a particular software or firmware product installed in the device, the device itself performs one or more functions which can’t be attributed entirely to one of those software or firmware products. The device vulnerabilities need to be reported as applicable to the device itself, since otherwise users will never be able to learn about them.

And here’s where the big problem comes in: It seems few device manufacturers report vulnerabilities under the name of the device. For example, I talked to a product security manager with one of the largest medical device manufacturers last year, who told me that they had never reported a vulnerability for any of their devices (and I’m sure they have hundreds of families of devices and many thousands of individual device models). What this means is that no hospital or individual (whether a current or a prospective user of a device made by this huge manufacturer) will be able to learn about vulnerabilities in this manufacturer’s devices – since the manufacturer isn’t regularly distributing SBOMs to customers.

Fortunately, there are some device manufacturers that seem to be very concerned about reporting vulnerabilities. For example, searching the NVD for the Cisco ASA firewall appliance (one of many families of devices that Cisco makes) returns 29 pages of results; each of the CPE names returned refers to a particular Cisco ASA device.

So Cisco seems to have the right idea, when it comes to reporting device vulnerabilities. Now the other device manufacturers need to get the same religion. As the Good Book says, “Go thou and do likewise.”

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here.


[i] There is also a status called “fixed”, meaning the vulnerability was present in a previous version of the device, but has been patched (or otherwise mitigated) in the current version.

Friday, December 8, 2023

500 million!

Note from Tom 1/27/2025: I haven't received an update of these numbers from Steve Springett since I wrote this post in December 2023. Given Dependency Track's growth rate at the time, it's very likely that it's being used 25-30 million times a day to look up an open source dependency in OSS Index. But even 17 million per day is an amazing number.

At the end of January 2023, I was quite pleased when Steve Springett announced at a meeting of the SBOM Forum that Dependency Track, the open source SBOM analysis tool that he pioneered more than ten years ago (when there was almost no discussion of BOMs, except Bills of Material in manufacturing), had reached 300 million monthly uses; that is, DT was being used 10 million times a day to look up vulnerabilities for software components listed in an SBOM.

This showed quite impressive growth, since in April 2022, DT was being used 200 million times a month (itself not a shabby number, of course). BTW, Steve also leads the CycloneDX (CDX) project. CDX gets heavy usage, but since that doesn’t get tracked like DT usage, I don’t think Steve has that estimate. I do know that he says over 100,000 organizations use CDX.

In today’s OWASP SBOM Forum meeting (we added a prefix to our name recently!), Steve mentioned Dependency Track in a different context, and I remembered that I hadn’t had an update on DT usage since January – so I asked him what it was. He obviously hadn’t thought about it too much, but then he remembered that usage is now around 500 million a month (i.e., almost 17 million lookups a day); he wasn’t even quite sure how much of an increase that was (I, on the other hand, would have been shouting it from the virtual rooftops).

Note: That’s 66% growth in 10 months. The growth rate from April 2022 through the end of January 2023, a total of 9 months, was 50%. So not only is DT growing rapidly, but that growth is accelerating. As you probably know, it’s rare for any process to accelerate as it matures. The only other such process I know of is the expansion of the universe, which cosmologists have been baffled to report is now expanding at an accelerating rate. That will ultimately result in the entire universe going dark in about 100 trillion years. At least when that happens, global warming will no longer be a concern.
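For the record, here is the back-of-the-envelope arithmetic behind that claim, expressed as compound monthly growth rates:

```python
# Back-of-the-envelope check that Dependency Track's growth is accelerating:
# compare compound monthly growth rates for the two periods cited above.
earlier = (300 / 200) ** (1 / 9) - 1    # Apr 2022 -> Jan 2023: 9 months
recent = (500 / 300) ** (1 / 10) - 1    # Jan 2023 -> now: 10 months
print(f"earlier: {earlier:.1%}/month; recent: {recent:.1%}/month")
# Prints roughly 4.6%/month vs. 5.2%/month: the growth rate itself grew.
```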

Steve then mentioned that private organizations are putting Dependency Track on steroids, so that one instance of the software will be able to perform hundreds of thousands, and ultimately millions, of lookups a day (I may not have remembered the exact numbers Steve used). When that happens, DT will perform billions of lookups a month, not millions.

But Steve also mentioned something else, which he’s said all along: Almost all the usage of DT is by software developers trying to learn about vulnerabilities affecting a product they’re developing. Very little of this impressive usage is by organizations whose primary business isn’t software development – you know, insurance companies, fast food chains, government agencies, churches, auto manufacturers, etc.

Of course, if developers are paying much more attention to fixing the vulnerabilities in their products than before (which they obviously are), that’s a good thing and it still benefits all their users. But SBOMs have been sold all along (including by me, of course) as a solution that any organization will be able to benefit from. That simply ain’t happening to any significant degree. It’s like someone set out to walk from Manhattan to LA, and one day they proudly announced that they’d reached Hoboken, NJ (just across the Hudson River from Manhattan). Sure, that’s progress…but there’s still a very long way to go.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to learn more about what that group does or contribute to our group, please go here, or email me with any questions.