Monday, January 29, 2024

NERC CIP: Time to make your voice heard!


Since last summer, I have been participating in a small, informal discussion group called the Cloud Technical Advisory Group (CTAG). It is composed of NERC and regional staff members (including a few current or past CIP auditors), staff members from NERC entities, representatives of two major cloud service providers, a few consultants like me, and one longtime staff member (and NERC CIP expert) from a four-letter federal commission.

As its name implies, the group's "charter" is to discuss the problems that prevent NERC entities with high and/or medium impact BES Cyber Systems (BCS) from fully utilizing the cloud, and to do our best to move the ball forward on finally addressing those problems. One positive step was the NERC Standards Committee's approval in December of a Standards Authorization Request (SAR) intended to lead to a complete "normalization" of cloud use by entities subject to the NERC CIP Reliability Standards.

I'll be honest: this issue has been around since the cloud first became important, but only within the last 3-4 years has it received wide attention in the NERC community. I believe that's because the inability to make full use of the cloud was initially seen as just a missed opportunity to save time and money: "Gee, wouldn't it be great if we could move all these systems to the cloud and not have to install and maintain them ourselves?" At the same time, both NERC ERO staff and NERC entities seemed reluctant even to think too hard about the big changes to the CIP standards, and perhaps the NERC Rules of Procedure, that would be required to make this happen.

One important example, discussed for years, is the fact that NERC entities with high and medium impact BCS are not currently able to utilize cloud-based network security services - i.e., services that operate a large security operations center (SOC) to monitor the entity's networks and internet activity. These services become more valuable as their customer base grows, since they can "see" a huge amount of traffic worldwide that isn't visible from any individual network; that lets them identify new threats much more quickly.

However, a NERC entity with medium or high impact BCS can't utilize these services now, because the cloud-based server performing the monitoring would then become an Electronic Access Control or Monitoring System (EACMS). This means – among other things – that the server would have to be enclosed in a Physical Security Perimeter (PSP) operated by the entity.

That would have huge consequences; for example, the cloud service provider (CSP) would probably have to install card readers at all entrances to every data center that held any part of the entity's EACMS. All employees would have to badge in and out with those card readers, and they would have to do the same with separate card readers for every other NERC entity with medium or high impact systems housed, in whole or in part, at any of those data centers. You get the idea: this is impossible for any CSP (and I could go on and on about the other impossible things the CSP would have to do).

One longtime NERC CIP auditor, now retired, told me about six years ago that an entity with high impact BCS in his Region had started using the services of one of the original cloud-based security monitoring services (which is one of the leaders in that field today) to monitor its networks, including its Electronic Security Perimeters (ESPs).

The auditor had to tell the entity to rip out everything they had put in place to use that service and instead install EACMS to do network access monitoring locally, in a PSP the entity could control. He said it “broke his heart” to have to do that, since he knew the entity’s level of security would decline because of this – and they would have to spend a lot of time maintaining on-premises devices that wouldn’t be needed if they could use the monitoring service. Of course, to this day, the entity is still using the on-premises “solution”.

I must admit that six years ago, I wasn’t particularly bothered by what the auditor told me, since I knew the changes to the CIP standards that would be required to allow this entity (and all similar NERC entities) to fully use the cloud were simply out of the question. I blamed the entity for their problems, since they should have known better than even to try such an outrageous stunt.

However, in the last year or so, the discussion has changed. Now, it’s much less about missed opportunities to save time and money and more about actual damage being done both to operations and to security of NERC entities with medium and high impact CIP environments. And now it isn’t just one or two entities that are complaining about this; more and more are complaining all the time.

But there’s an even more serious consequence of this problem, beyond diminished security. The big problem now is that NERC entities are hearing more and more from their software suppliers (including software for real-time operations in medium and high impact CIP environments) that they are moving to the cloud (i.e., becoming SaaS). The supplier might commit to continued support for their on-premises version for a few more years (and not always even that), but they usually make clear that their development dollars are going to the cloud. From now on, if the NERC entity wants to have all the new bells and whistles, they will have to use the cloud version.

When I joined the CTAG last summer, this problem was growing, with no end in sight. But even so, there still wasn't a sense that this was no longer just a nice-to-have for the to-do list, but an urgent problem that needed to be solved soon. That is, there wasn't until…last week's meeting.

At that meeting, it was clear that complaints about the cloud issue are now pouring in, not just trickling in. And since the SAR was approved in December, the clock is now ticking for a solution to the cloud problems to be put in place. Of course, BCSI in the cloud was always one part of that solution, and it became reality on January 1. Unfortunately, allowing BCS, EACMS, and PACS (Physical Access Control Systems) in the cloud will require much more thoroughgoing changes to the CIP standards than the BCSI changes did, and perhaps even changes to the NERC Rules of Procedure (which I don't believe any previous change to the CIP standards has required).

So, at last week’s CTAG meeting, we looked at the question of how much time would be required between today and when a full solution to the cloud problem would be drafted, approved by NERC and FERC, and ready to be implemented by NERC entities. Here is my timeline:

1.      When the Standards Committee approved the SAR, they assigned it medium priority. They did that because there are over 20 other standards development projects (across all the NERC standards, not just the CIP standards) already in progress. Therefore, nothing at all will happen with the project before this July.

2.      In July, there will likely be a call for drafting team members. The first task of this team will be to post the SAR, as approved by the Standards Committee in December, for comment. The team will use these comments to revise the SAR and submit it to the Standards Committee for a new approval.

3.      When that approval is received, the drafting team will begin drafting the new standards required to allow full use of the cloud by NERC entities with medium and/or high impact BCS. They will then have to put multiple successive drafts out for comment and ballot (and respond to the comments each time). Most previous changes to the CIP standards have gone through at least four draft/comment/ballot cycles before the new or revised standards were finally approved.

4.      The next step is approval by FERC, which can take over a year by itself. Given the major changes this project will require, that is likely to be the case here.

5.      How long will this whole process take? The CIP version 5 standards were a fundamental rewrite of all of CIP; they are the only previous set of changes to the CIP standards at all comparable to the huge changes that will be required to enable "Cloud CIP" (my term).

6.      The CSO 706 SDT, which drafted CIP v5 and saw it through balloting, had previously done the same for CIP versions 2, 3 and 4. They started work on CIP v5 in January 2011; FERC approved v5 in November 2013, close to three years later. However, CIP v5 included the "bright line criteria" for identifying BCS, and these criteria were originally developed for CIP version 4 during 2010 (v4 was approved by NERC and FERC, but was never implemented. Long story). That effort took at least six months, so let's say the whole drafting and approval process for CIP v5 took about 3 ½ years.

7.      Given that Cloud CIP will be as fundamental a change to the CIP standards as CIP v5 was, it's safe to say that 3 ½ years is a good estimate for this process as well. Starting the clock this July, that means we can expect FERC approval of Cloud CIP around the end of 2027.

8.      After FERC approves the new standard(s), there will be an implementation period of probably one to two years. But, given that there are many NERC entities that want to be able to use the cloud as soon as possible, there will undoubtedly be some provision for them to start complying with the new standards earlier than that. So, let’s say that, six months after FERC approves Cloud CIP, NERC entities will be able to implement BCS and EACMS in the cloud. That means NERC entities will be able to start taking advantage of Cloud CIP by the middle of 2028.

However, there's an elephant in this room. It's very possible that changes will also need to be made to the NERC Rules of Procedure (RoP), and we have to allow at least a year for that, since there will inevitably be a lot to discuss before those changes can happen. It would be nice to think the RoP changes could be drafted and approved in parallel with the new standards, but it's hard to see how they could even be drafted until the new standards are finalized.

So, if we get lucky and there are no major glitches along this path, you can expect to be “allowed” to deploy medium and high impact BCS, EACMS and PACS in the cloud by the end of 2029. Mark your calendar!

Perhaps you think six years is a little long to wait for the cloud to be completely "legal" for NERC entities. I can assure you the CTAG members thought the same thing last week. Exactly how this will be fixed remains to be decided, but there should be no doubt about the need to accomplish it in some way. It's time to make your opinion known!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.

Sunday, January 21, 2024

Once again, an ad hoc fix to a serious problem with a NERC CIP requirement

On New Year's Day, I wrote a post that described what could be a showstopper problem with the wording of the new CIP-004-7 R6, which took effect that day. This new requirement was the most important part of the changes that took effect on January 1. The changes were nominally made to open the way for BES Cyber System Information (BCSI) to be "stored" in the cloud, but in fact just storing BCSI in the cloud was only a small part of the reason the changes were needed.

The main reason these changes were needed was that, because BCSI couldn't be stored in the cloud, use of SaaS (software as a service) was effectively prohibited in OT environments with medium and/or high impact BES Cyber Systems – since a SaaS app would never be able to utilize data that meets the BCSI definition.

Not being able to use SaaS would have been a huge disappointment for NERC entities with medium and/or high impact BES Cyber Systems. In fact, it would be almost as bad as the disappointment of not being able to use EACMS (Electronic Access Control or Monitoring Systems) in the cloud, which has been the case for many years. That problem effectively prevents the use of most outsourced security monitoring services in medium and high impact CIP environments. Unfortunately, fixing the EACMS problem will require almost as much change as will fixing the problem with BES Cyber Systems in the cloud, so that fix is still at least 2-4 years off, absent divine intervention.

When I wrote my New Year’s Day post, I thought the only way out of the SaaS problem would be for CIP-004-7 R6 to be revised, but I didn’t see any movement to do that. However, it turns out that help has arrived from a source I didn’t expect: a paper called “Usage of Cloud Solutions for BES Cyber System Information”.

The details are, as usual with questions of NERC CIP compliance, too complex to lay out here. Suffice it to say that this paper was prepared by a committee of NERC entities (the "RSTC") convened by NERC; they developed it as guidelines for themselves and other NERC entities on how to use BCSI in the cloud.

Besides "guidelines", NERC also recognizes another type of document called "implementation guidance", which provides guidance on implementing a standard or standards. However, it is not an Interpretation of a standard. Developing an Interpretation of a NERC standard requires going through a process very similar to developing or amending a CIP standard: constituting a drafting team to draft the Interpretation, holding multiple NERC ballots with comment periods until the draft Interpretation meets the stringent criteria for approval, getting approval from the NERC Board of Trustees, and finally getting approval from FERC. This is easily a 1-2 year process and isn't for the faint-hearted. Moreover, in one notorious case more than ten years ago, two CIP Interpretations went through this whole process, only to be unexpectedly rejected by FERC at the end. That certainly dampened enthusiasm for Interpretations of CIP, if there was ever "enthusiasm" in the first place.

Is there a difference in form between guidelines and implementation guidance? None at all – what makes a document implementation guidance is that NERC (technically, the "NERC ERO") has "endorsed" it as such. Once NERC has endorsed a document as implementation guidance, NERC auditors are required to "give deference" to it when auditing compliance with the standard(s) the guidance refers to. In other words, endorsement doesn't make the document a binding interpretation of a standard, but it's about as close as you'll get to that in the NERC world, unless you're ready to go through a 1-3 year process to modify the standard or develop an Interpretation and get it approved.

Thus, even though the RSTC developed their document to be guidelines, they really wanted it to be implementation guidance; they then submitted it to NERC for approval as such. When the document was originally published last summer I thought it was good, but that wasn’t a universal opinion in the NERC community. I don’t think many people expected that NERC would endorse it as implementation guidance.

But that is exactly what NERC did last month, and I learned a couple of weeks ago that the document, now that it is official implementation guidance, may prove to be the knight in shining armor that slays the dragon let loose by the wording problem in CIP-004-7 R6. This wording problem may in fact prove not to be a barrier to SaaS – although especially cautious NERC entities might want to wait a month or two before signing any SaaS contracts. It looks like the NERC community might have been saved from what would have been a very unfortunate outcome. That's the good news.

So, what’s the bad news? The bad news is this is hardly the first time something like this has happened. These unexpected wording glitches (which have nothing to do with security concerns and everything to do with Standards Drafting Teams being composed of human beings who aren’t blessed with omniscience - although maybe that should be a requirement for SDT membership from now on!) have occurred many times in the past.

I lived through, and wrote about, a number of these controversies regarding wording during the period between when FERC announced they would approve CIP version 5 in April 2013 and July 1, 2016, when the CIP version 5 standards came into effect. CIP v5 was a fundamental rewriting of CIP and is essentially what we’re living with today, along with the standards that came into effect later (CIP-012 through CIP-014). As such, it introduced a whole set of new concepts, like Cyber Asset, BES Cyber Asset, BES Cyber System, EACMS, external routable connectivity, Intermediate System, BCSI and more.

Most of those new concepts had their own definitions, although some were left undefined, which also led to big problems. And the ambiguities that are almost unavoidable in cybersecurity regulations often had huge consequences, ones that would cost some group of NERC entities a lot of money, even though that result was never intended.

Thus, there were some terrific battles regarding topics like:

·        The meaning of “programmable” in the definition of “Cyber Asset”;

·        The meaning of “external routable connectivity (ERC)”; and

·        The meaning of “serial” (related to the ERC definition).

There were many more of these battles. In fact, there was a meeting between NERC and the trade associations during that period regarding the meaning of “programmable”. I of course couldn’t attend it and I wouldn’t have wanted to. However, from what I heard at the time, tensions were so high during the meeting that it’s a wonder nobody was killed.

My point in bringing all of this up is that these controversies keep appearing, whenever there’s any change to a CIP standard, or introduction of a new standard. What is most interesting, and depressing, is that almost none of these controversies have been resolved in the intended fashion: by drafting a new or revised definition, a new or revised standard, or an Interpretation. This is because nobody wants to wait for a cumbersome, multiyear process to run its course, before there can be a real answer to one of these questions.

Instead, I believe these controversies have literally all been resolved in the same way the problem with CIP-004-7 R6 has been “resolved”: by some process that has no firm basis in the Rules of Procedure or the CIP standards, but is very much ad hoc. In one serious controversy regarding “transfer trip” relays and the interpretation of Criterion 2.5 in CIP-002 Attachment 1, the “resolution” consisted of a NERC official uttering two magic words at a NERC CIP Committee meeting, without even any documentation – other than in my blog – that this had happened. Yet every NERC entity that was struggling with this issue immediately heard about the statement, as did all the auditors. I never heard about this problem again afterwards. It was no longer an issue, even though nothing at all had officially changed.

Folks, we can't go on like this. We (i.e., the NERC community) need to determine why these completely unnecessary problems with wording in the CIP standards keep recurring, and figure out a better way to deal with them than either a multi-year process or some sheriff shooting the bad guy down in the middle of the street, without any due process at all.

This is especially important as the community begins the slow (of course!) effort to amend the CIP standards to allow full use of the cloud by NERC entities with medium or high impact CIP environments. This may require even more radical changes than those introduced by CIP version 5. Moreover, along with changes to the CIP standards, there will likely need to be changes to the Rules of Procedure that will finally put an end to these debilitating glitches (my favorite idea during the CIP v5 Wars was a Supreme Court of CIP, which would settle every issue once and for all. Of course, that's not realistic, but I'm not sure any other solution is realistic, either).

Making the required changes to the Rules of Procedure will be quite difficult, I agree. But after they’re made, we hopefully won’t need any more knights in shining armor to save us.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.


Thursday, January 18, 2024

Reporting vulnerabilities for intelligent devices

Note from Tom: This post is a slightly rewritten version of a chapter in my new book, "Introduction to SBOM and VEX".

Almost all the documents that have been written about software bills of materials discuss only SBOMs for software, not SBOMs for devices. This is despite the fact that the need for SBOMs for medical devices was one of the main drivers of the NTIA and CISA SBOM initiatives. And even though the SBOMs produced by intelligent device manufacturers do not differ greatly from those produced by suppliers of user-managed software (i.e., almost everything we normally refer to as "software"), the vulnerability reporting responsibilities, and especially how they are implemented, differ significantly.

Please note that I use “intelligent device” to mean any hardware product that operates under the control of software and/or firmware. As far as this discussion goes, the device could be a baby monitor or an electronic relay controlling high voltage power lines; the characteristics and uses of the device make no difference to this discussion.

It is important to keep in mind the reason why SBOMs are needed by non-developers, whether for devices or for user-managed software: the user organization (or a service provider acting on their behalf) needs to be able to identify exploitable vulnerabilities in third-party components included in the software or device that, if not patched or otherwise remediated, pose a risk to the organization. What is needed for the user organization to achieve this goal, and how can the device manufacturer help their customers achieve it? There are four important types of actions the manufacturer can and should take to help their customers.

Reporting vulnerabilities

Vulnerability management, for both intelligent devices and software, is only possible if there is a way for the user to learn about vulnerabilities found in the device or software product. Of course, vulnerability databases are repositories for this information. But how does the information get into a vulnerability database? In most cases, information about vulnerabilities and the products they apply to is reported to CVE.org, home of the CVE Program, which maintains the CVE database and is sponsored by the US Department of Homeland Security's CISA. The database was long referred to simply as "MITRE", since the MITRE Corporation has operated it from the beginning. MITRE contractors still maintain the database, but the program is now governed by the CVE.org board, which includes representatives of government agencies like NIST and CISA, as well as MITRE and other private-sector organizations.

CVE information is entered into the CVE.org database by means of "CVE reports", which describe a vulnerability and the product or products it applies to. The reports are prepared by "CVE Numbering Authorities" (CNAs) – usually software suppliers authorized by CVE.org. CNAs prepare reports both on their own behalf (i.e., describing vulnerabilities they have found in their own products) and on behalf of other suppliers that are not CNAs; as of early 2024, there are over 300 CNAs. Each CNA has a "scope" that indicates the types of suppliers it will assist in preparing CVE reports; a scope can comprise a country, an industry, etc.

The important thing to keep in mind is that information on a vulnerability (including the product or products affected by it) almost always gets entered in the CVE.org database by the supplier of the product (often, acting through a CNA). Obviously, this system relies heavily on the good faith of software suppliers. Fortunately, most larger software suppliers take this obligation very seriously, although the record is spotty for the smaller software suppliers.

Are device manufacturers also doing a good job of vulnerability reporting? Unfortunately, they could do a lot better. In fact, many major intelligent device manufacturers seem never to have reported a single vulnerability to CVE.org. This can be verified by entering the name of the manufacturer in the "CPE Search" bar in the National Vulnerability Database. If the search comes up empty (and the user has also tried a few possible variations on the manufacturer's name), the manufacturer has never reported a vulnerability for any of its products. It is possible to say this because a CPE name for a product is only created – by NIST – when a CVE report is filed that lists the product as vulnerable to the CVE in the report.
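If you'd like to automate this check rather than use the search bar, below is a minimal sketch that queries the NVD's public CPE API (the same data behind the CPE Search page). The endpoint and its keywordSearch parameter come from NIST's published API; the vendor names in the example are invented placeholders, and you should verify the response fields against the current NVD API documentation before relying on this.

```python
import requests

# Public NVD CPE API 2.0 endpoint (documented at nvd.nist.gov/developers/products)
NVD_CPE_API = "https://services.nvd.nist.gov/rest/json/cpes/2.0"

def has_cpe_records(vendor_keyword: str) -> bool:
    """Return True if any CPE names match the keyword, i.e. at least one
    CVE report has ever named one of this vendor's products as vulnerable."""
    resp = requests.get(
        NVD_CPE_API, params={"keywordSearch": vendor_keyword}, timeout=30
    )
    resp.raise_for_status()
    return resp.json().get("totalResults", 0) > 0

# Invented vendor names; in practice, try several spelling variations per maker
for vendor in ["example medical devices inc", "exampledevco"]:
    result = "has CPE records" if has_cpe_records(vendor) else "0 matching records"
    print(f"{vendor}: {result}")
```

As with the manual search, a result of "0 matching records" only means no vulnerability has ever been reported against a product with that name; it says nothing about whether the products are actually secure.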

I recently checked this assertion by searching the NVD for the top ten medical device manufacturers (whose devices are sold in the US). The results were, frankly, appalling:

1.      Four of the manufacturers were not listed in the NVD at all, meaning they had never reported a vulnerability for any of their devices (one of those four is the employer of a friend of mine, who is a product security manager at that manufacturer. He told me last year that they had never reported a vulnerability for any of their devices; the search confirmed his statement. This is one of the top medical device makers worldwide).

2.      Four of the manufacturers had only a small number of vulnerabilities listed for their devices (one of these manufacturers had only reported two vulnerabilities across all their devices).

3.      The remaining two manufacturers are part of very large, diversified companies that have reported many vulnerabilities. Because it would require a huge effort to determine which if any of those vulnerabilities apply to medical devices, I can’t make any statement about them.

What is most disturbing about these results is that medical devices are subject to more stringent cybersecurity regulations than almost any other devices or software (in the US, the regulator is the Food and Drug Administration or FDA; most other countries regulate medical devices as well). Yet, for all practical purposes, it is close to impossible for hospitals (where most medical devices are installed, of course) to learn about vulnerabilities in the devices they use without investing large amounts of time to do so (see below).

One defense I have heard from device manufacturers, when asked why they are not reporting vulnerabilities for their devices, is to point out (with a straight face, no less) that devices do not have vulnerabilities; rather, the software and firmware products installed in the device have vulnerabilities. Without the software and firmware, the device is simply an empty box made of sheet metal or plastic. It is up to the developers of the software and firmware products installed in the device to report vulnerabilities in their products; it isn’t the device maker’s responsibility.

There are three problems with this argument. First, to learn about vulnerabilities this way, users of the device would need to know all the software and firmware products installed in the device; that is, they would always need an up-to-date SBOM. Yet it is doubtful that many device manufacturers provide a new SBOM to their customers whenever the software in the device is updated (one exception to this statement may be Cisco).

Second, even if users always had an up-to-date, detailed SBOM, tracking each software or firmware component in the device would be a big job. Some devices have hundreds or even thousands of software and firmware components installed in them. For the device customer to track vulnerabilities in all those components, the supplier of each component would need to report vulnerabilities regularly to CVE.org; that is not likely to be the case.

Third, why should every user of the device be responsible for tracking all vulnerabilities in the device, when the manufacturer should be (and hopefully is) doing that as well? After all, the manufacturer will never know about vulnerabilities in their device unless they do this. If there are 10,000 customers of a device, does it make sense that each of those customers would need to be constantly checking for vulnerabilities in every component, when the manufacturer could simply share the results of the analysis they are already doing?

Clearly, it makes no sense for device manufacturers to require their customers to bear the burden of learning about vulnerabilities for all the software components in the device. The manufacturer needs to monitor those vulnerabilities themselves (which they should be doing anyway), and report each of them to CVE.org, using the CPE name of the device (Cisco does this today for their networking devices). That way, their customers will be able (ideally) to learn about all the vulnerabilities that apply to any of the software or firmware components installed in the device with a single lookup in the NVD.

More specifically, the supplier needs to report all current vulnerabilities in any of the software components in the device to the CPE name for the current version of the device. Because device manufacturers usually update the software in their devices (including application of security patches) in a single periodic update package, each update should be treated just like a new version of a software product. That is, each update should have its own version number, SBOM and VEX documents, and CPE name.
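To make the versioning point concrete: in CPE 2.3 syntax, version 4.2 of a hypothetical device might be identified as cpe:2.3:h:example_maker:smart_relay_4000:4.2:*:*:*:*:*:*:* (the "h" indicates a hardware product), and the next update would get a name ending in 4.3. The maker and product names here are invented; the point is simply that, when each update has its own version field in the CPE name, vulnerabilities can be reported against the exact update a customer is running.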

Patching vulnerabilities

Like software suppliers, an intelligent device manufacturer should never report a vulnerability to CVE.org unless they have already made a patch for the vulnerability available to their customers. Is it possible this is the reason why device manufacturers are for the most part never reporting vulnerabilities in their devices – that they never develop patches for vulnerabilities?

It is not true that device makers are not patching vulnerabilities! A device maker’s regular updates to their devices include both functionality updates and security patches. Then, why wouldn’t the device maker also report each vulnerability that has been patched? My friend who works for a large medical device maker told me in 2023 that the reason why they don’t report vulnerabilities is they would never report them before they are patched – but once they have been patched, they are no longer vulnerabilities.

This might make sense, until one considers that software suppliers are often doing a good job of reporting vulnerabilities, yet the fact that a patched vulnerability is no longer a vulnerability is just as true for software as it is for an intelligent device. Why are software suppliers reporting vulnerabilities, while the device makers are mostly not reporting them?

I believe this anomaly is due to how patching happens in software vs. in a device. Usually, when a software supplier develops a security patch for one of their products, they notify their customers of it right away, at the same time as they notify them of the vulnerability (they will presumably also notify CVE.org of the vulnerability at the same time and provide the information about the patch or other mitigation).

The software customer then needs to make the decision to apply the patch. Sometimes, they may not apply it right away, due to some concern about impacting performance (for example, operators of power plants often hold all patches until they are able to schedule an outage for the plant. They are concerned that an unexpected "side effect" of applying the patch might cause a serious problem if the plant were running when it is applied – e.g., their $100,000,000 combustion turbine might start to vibrate uncontrollably and tear itself apart). But since the software customer does have the option not to apply the patch, it is even more important for the supplier to provide them the information on the vulnerability, so they can decide whether the risk the vulnerability poses outweighs their concerns about applying the patch.

However, device makers often give their customers little or no choice as to whether or not to apply a patch: since the patch comes as part of an update that fixes other vulnerabilities and at the same time increases the functionality of the device, the customer will often not want to block the update from being applied; in fact, for many household devices like baby monitors and smart thermostats, the manufacturer remotely installs the update without even notifying the user that this is happening.

So, an important difference between a patch for a software product and a patch for an intelligent device is that application of the patch is almost completely under the customer's control in the case of a software product. However, that is much less likely to be true in the case of an intelligent device. For the moment, let’s assume the device customer has no control at all over whether the patch is applied – that all devices are like baby monitors, where each update is installed without any pre-notification of the customer.

If we make this assumption, then it might make sense for the device maker never to report a vulnerability to CVE.org, or even to their customers. After all, since the update has just been applied to all customer devices without exception, the vulnerability has ceased to exist anywhere in their customer base. There is therefore no reason to report the vulnerability now.

But there are two problems with this argument. The first is that, since device makers often give customers the option not to apply an update immediately – and this is especially true with more critical devices like medical or industrial devices – many of them will not apply it right away; moreover, they may decide to hold off indefinitely. Plus, if the update was pushed out, but the customer’s device was offline at the time, it also will not have been applied.

If the device maker does not notify both their customers and CVE.org of the vulnerability once the update has been pushed out, all the devices that didn't receive the update will remain vulnerable. Customers need to know about the vulnerability, so they can decide whether to apply some alternative mitigation – like removing the device from their network. Only the customer can decide whether an alternative mitigation is needed, but they can't make that decision if they don't know about the vulnerability at all. Just as bad, new customers may buy the device without knowing about the vulnerability, because an NVD search on the device will not find it (and as stated earlier, it is likely that the search won't find any vulnerabilities at all, assuming the manufacturer doesn't report vulnerabilities for the device).

This situation is made more poignant by another fact that I learned in my discussion with the large medical device manufacturer in 2023: that manufacturer only performs a full device update once a year. This means that in some cases, a serious vulnerability might go unpatched in customer devices for hundreds of days, without the customers even being informed that the vulnerability is present in the device they use. Again, if customers knew of the vulnerability, they might apply alternative mitigation like removing the device from their network.

Contrast this with what happens when a software supplier learns of a serious vulnerability in one of their products. In most cases, the supplier will not immediately inform their customers of the vulnerability. However, they will start to work on a fix for the vulnerability as soon as they learn of it. As soon as the fix is available, they will post it for customers to download (or in some cases, the supplier will advise customers to upgrade to a patched version of the software); at the same time, they will report the vulnerability in their software to both their customers and to CVE.org.

My opinion is that device manufacturers should move to a model like that of the software developers. Here are some suggested best practices:

1.      Upon learning of a serious vulnerability in one of their devices, they should immediately develop a patch. As soon as the patch is ready, they should make it available to their customers, along with reporting the vulnerability to their customers and to CVE.org.

2.      If a manufacturer does not do full device updates at least every quarter, they should schedule regular “security updates” at least once a quarter, in which all outstanding security patches are applied. They then need to decide with their customers on criteria for when a patch needs to be made available immediately (as in the case of log4shell), vs. waiting for the next security update.

3.      If a new vulnerability does not meet the criteria for making the patch available immediately, the manufacturer should not report the vulnerability to CVE.org or to the customers of the affected device. Instead, they should schedule the patch for the next security update. With the update, they should notify both the customers of the product and CVE.org of the vulnerability.

What have we learned?

There are three main problems we have discussed regarding vulnerability management practices of intelligent device manufacturers:

1.      Device manufacturers are in many or most cases not reporting vulnerabilities in their products to CVE.org. While many software suppliers are also not reporting vulnerabilities, this appears to be a much bigger problem for device manufacturers.

2.      Except in the case of very serious vulnerabilities like log4shell, most device manufacturers do not immediately release a patch for a new vulnerability in one of their devices. Instead, they say nothing about the vulnerability and include the patch in the next full update, which might come up to a year later.

3.      With the full update, they likely will include notification of the vulnerability in the patch notes. However, they seldom also report the vulnerability to CVE.org.

I recommend these practices to address these problems:

A.     Device manufacturers should put in place procedures to report future vulnerabilities to CVE.org (including making sure they understand how the process works, contacting a CNA who can help them when they need to report, etc.).

B.     Device manufacturers that do not provide at least quarterly full updates to their products should schedule at least quarterly security updates, during which all outstanding patches will be applied to a device.

C.      Each manufacturer needs to consult with its customers to determine appropriate criteria for deciding whether a newly discovered vulnerability in a device should be patched immediately, or whether it can wait for the next security update.

D.     When a patch for a vulnerability is made available to customers in a security update, the manufacturer should also report the vulnerability to CVE.org.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.


Tuesday, January 16, 2024

My first look at the Cyber Resilience Act

The EU Cyber Resilience Act was proposed about two years ago, and there has been a lot of talk about what is or isn't in it. However, I've found that talk frustrating, because the Act itself didn't have much in the way of details, so the question of what was "in" it was close to meaningless. I understand this is a feature of some EU legislation: an initial act just establishes the agreed-upon purpose to be achieved, while the details come in a later "Implementing Act".

This wasn't just frustrating for me, but also for the several ENISA (the EU's CISA) staff members who are part of the OWASP SBOM Forum and discussed the CRA at a couple of our meetings last fall. They always seemed sure the details were just a month away – but they admitted they'd been saying that for two years. I was starting to think that waiting for the details would play out like the great play "Waiting for Godot" (written by an Irishman, Samuel Beckett, originally in French) – a play that starts and ends with two men waiting for Godot in a barren landscape. They're assured every evening that Godot will be there without fail the next day, but of course he never shows – and they even admit he will never show. They continue to wait nevertheless.

Fortunately, in the case of the CRA it seems Godot has finally shown up. After being told a couple of weeks ago that the document with the details had finally been released (only to hear next that nobody knew where it was), I was quite happy to hear from Jean-Baptiste Maillet on LinkedIn (who was commenting on my most recent post) that the full text of the CRA is now available here. Jan. 17 update: Christian Tracci pointed out to me on LinkedIn that this text has just been approved by the Council, but still needs to be approved by the European Parliament. However, it is unlikely to change much (if at all) before that happens.

The document is a brief 189 pages. It looks like it has a lot of good things in it, which I will read later, but what I was most interested in – and I imagine a lot of readers will be most interested in – was Annex 1 on page 164, titled “Essential Cybersecurity Requirements”. It’s literally only three pages (actually, even less than that), but is obviously very well thought out.

If you’re a fan of prescriptive requirements, with specific “Thou shalt” and “Thou shalt not” statements, well…perhaps you need to look elsewhere than the CRA to get satisfaction. The requirements in Annex 1 are objectives-based (which I consider to be the equivalent of risk-based). That is, they require a product to achieve a particular objective, but the means by which it is achieved are up to the manufacturer. Here are a few examples (they’re all preceded by “products with digital elements shall:”)

·        “process only data, personal or other, that are adequate, relevant and limited to what is necessary in relation to the intended purpose of the product (‘minimisation of data’).”

·        “provide the possibility for users to securely and easily remove on a permanent basis all data and settings and, where such data can be transferred to other products or systems, ensure this is done in a secure manner.”

The interesting point about the requirements in this section is that they all seem to be focused entirely on the case of intelligent devices; none of them seem to be written for what I call “user-managed” software: i.e., software that gets installed on generic servers, vs. coming pre-installed in a device. And since I’m currently quite focused on the shortcomings of vulnerability and patch management in intelligent devices, here is some of what I found in Annex 1 on those topics (there’s a lot more that can be said about it).

1. Part II of Annex I is titled “Vulnerability handling requirements”. The first of those requirements is “identify and document vulnerabilities and components contained in the product, including by drawing up a software bill of materials in a commonly used and machine-readable format covering at the very least the top-level dependencies of the product.”

Note this is a requirement for the device manufacturer to “draw up” an SBOM for the device, but it doesn’t say anything about distributing the SBOM to customers. However, in Annex II, page 169, we find “(As a minimum, the product with digital elements shall be accompanied by)… If the manufacturer decides to make available the software bill of materials to the user, information on where the software bill of materials can be accessed.”

To summarize, the manufacturer is required to have an SBOM for their device for their own use. Sharing it with customers is optional.

Also note that the SBOM is required to document "vulnerabilities and components" contained in the product. While it is certainly possible to document vulnerabilities in an SBOM, it is usually much better to keep a separate inventory of vulnerabilities found in the product. The components in the product (that is, the software and firmware products installed in a device) change only with a new version of the device (which is usually an update to all the software installed in the device). But the vulnerabilities and their exploitability status designations change all the time (e.g., the status of a particular CVE may change from "under investigation" to "not affected", meaning the supplier has determined that this vulnerability is not exploitable in the product, even though it is found in one of the components of the product). This is why it's usually better to list them in a separate VEX document; the VEX may have to be refreshed every day.
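To illustrate this division of labor, below is a minimal, hypothetical sketch of a VEX-style status record. The status and justification values echo the CSAF VEX vocabulary, but the field names are simplified and not schema-valid for any real VEX format, and the device name is invented. The key point: the SBOM for a given device version never changes, while a record like this can change from one day to the next.

```python
import json

# Hypothetical, simplified VEX-style record. The device's component inventory
# (the SBOM) for version 4.2 is fixed, but the status of a vulnerability found
# in one of those components changes as the manufacturer's analysis progresses.
vex_record = {
    "cve": "CVE-2021-44228",                     # vulnerability in a component (log4j)
    "product": "Example Smart Relay 4000 v4.2",  # invented device and version
    "status": "under_investigation",
    "justification": None,
}

# Later, analysis shows the vulnerable code can't be reached in this device.
# Only the VEX record changes; the SBOM for v4.2 stays exactly the same.
vex_record["status"] = "not_affected"
vex_record["justification"] = "vulnerable_code_not_in_execute_path"

print(json.dumps(vex_record, indent=2))
```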

2. The first part of the second requirement in Part II of Annex I reads, "in relation to the risks posed to the products with digital elements, address and remediate vulnerabilities without delay, including by providing security updates". Given that we're talking about devices, there are two significant problems with this requirement, although I'm sure they were unintentional.

The first problem regards how the manufacturer’s “conformance” (the EU’s favorite word, it seems) with this requirement will be assessed. If we were talking about a software product, the answer would be easy: The assessor (auditor) would check in the National Vulnerability Database (NVD) for vulnerabilities that are applicable to the software product and determine whether they had been remediated. Ahh, would it were so simple. However, it’s not:

a)      The vulnerabilities found in the NVD (or in any other vulnerability database that is based on vulnerabilities identified with a CVE number) are almost all there because the developer of a software product reported them to CVE.org. A large percentage of software developers (especially the larger ones) do a good job of reporting vulnerabilities that way.

b)     But as I’ve documented recently, and will in more detail in an upcoming post, intelligent device manufacturers are not doing a good job of reporting vulnerabilities in their devices to CVE.org. In fact, it seems a lot of major device manufacturers are not reporting any vulnerabilities in any of their devices (Cisco is one big exception to this statement).

c)      When a device manufacturer doesn’t report vulnerabilities for their products, does this result in a black eye for them? Au contraire! If the manufacturer has never reported a vulnerability for their device, any attempt to look up the device in the National Vulnerability Database (NVD) will receive the message “There are 0 matching records”. This happens to be the same message the user would see if the product they’re looking for is free of vulnerabilities.

d)     In other words, there is no advantage for a device maker in reporting vulnerabilities in their products, since someone searching the NVD is likely to assume the devices are vulnerability-free when they see the above message; moreover, if a searcher does find vulnerabilities reported for a device, that will often be held against the manufacturer when similar devices are compared during procurement. An NVD user won't normally realize that the message really means no vulnerabilities have been reported for the device – perhaps because the manufacturer is not even bothering to look for them.

e)     Of course, a device might well be loaded with vulnerabilities, even if the manufacturer hasn’t reported any. For example, in this post I wrote about a device that probably has over 40,000 open vulnerabilities; yet, the manufacturer has never reported a single vulnerability for that device, or any of the 50 or so other devices it makes.

The second problem with this requirement is the wording regarding patching vulnerabilities "without delay". I wonder if these words were based on any knowledge of how device manufacturers actually patch vulnerabilities: except in real emergencies such as the log4shell vulnerability, device manufacturers hold all patches until the next full device update. For them, "without delay" means "at the next device update".

I had known this for a while, but since I assumed that device makers normally do a full update either once a month or at worst once a quarter, I thought this was fine. My guess is the risk of leaving most vulnerabilities (that are not being actively exploited) unpatched for three months is fairly low.

However, last year I was surprised – if that's the right word – to learn from a friend who works in product security for one of the largest medical device makers in the world that they issue just one full device update a year. I asked if that meant that non-emergency patches might wait up to 364 days to be applied. He said yes – and remember, this company makes medical devices that can keep people alive, not smart refrigerators or baby monitors.

But the problem is worse than that, because this medical device maker, and presumably most other MDMs (since the company is an industry leader), never reports an unpatched vulnerability, even to their own customers. For a software product, this policy makes sense, since after learning of a serious vulnerability, most software suppliers will pull out all the stops to issue a patch within a few days (or at most a week?).

But if it might literally be hundreds of days before a device manufacturer patches a vulnerability (not because they couldn't do it sooner, but because it would require a lot of work and expense on their part), the least they should do is discreetly let their customers know about the vulnerability and suggest one or two other mitigations, including perhaps removing the device from the network until it can be updated.

However, even this is conceding too much. A device maker should patch any exploitable vulnerability within three months, period – plus, they should always notify their customers of it within a week, so they can at least take whatever mitigation measures they think are necessary.

I hope to put up a post soon that will describe these problems in much more detail, and also propose what I think are sensible policies to address them.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.


Wednesday, January 10, 2024

Evidently, she still hasn’t gotten the memo


I've put up two posts recently on the upcoming Cyber Trust Mark cybersecurity labeling program for IoT devices, which the FCC is developing in response to Executive Order 14028 (the current ETA is the end of 2024).

In the first of those posts, I quoted Anne Neuberger, the deputy national security adviser for cyber and emerging technologies, who said at the July announcement of the program that it would become a way for "…Americans to confidently identify which internet and Bluetooth-connected devices are cybersecure". My comment on her statement was:

Unfortunately, Ms. Neuberger doesn’t seem to have gotten the memo that said a labeling program needs to provide a positive incentive to follow the path of (cyber) virtue, not require public shaming (perhaps the stocks?) for those who don’t want to obtain the label, whatever their reasons. She also didn’t see the memo that said there’s no way the federal government, the Vatican, the NSC, or any other entity can determine whether a product is “cybersecure”. Every product is cybersecure until it gets hacked, at which point it isn’t. The best that mere mortals can say on this issue is that the device manufacturer has practices in place that should normally be conducive to good security, and we hope they’ll keep them up.

However, I regret to say she still doesn’t seem to have gotten that memo (or read my post, not that I expected her to). Politico’s weekly free cybersecurity newsletter (which is quite good, BTW) quoted her as saying at the Consumer Electronics Show this week (regarding her recent discussions with other countries), “I don’t want to scoop ourselves but it’s so that products that are tested here can also be sold in other parts of the world,” Neuberger said, “which is what private companies are really interested in.”

However, the fact is that, even when the Cyber Trust Mark program comes into effect, products that are manufactured in the US will be able to be sold anywhere in the world that they’re allowed to be sold, whether or not they have the label. The label isn’t even a seal of approval that allows a product to be sold here. The FCC got it right when they said in the Notice of Proposed Rulemaking that they issued after the announcement:

“We anticipate that devices or products bearing the Commission’s cybersecurity label will be valued by consumers, particularly by those who may otherwise have difficulty determining whether a product they are thinking of buying meets basic security standards.” 

Ms. Neuberger, the label doesn’t tell the consumer if a product is “cybersecure”, as if anybody could do that. It does tell the consumer that the manufacturer seems to have designed and built the device while following certain basic cybersecurity practices, which are described in NISTIR 8425.

Also, evaluating a product for the label doesn’t require “testing” of products. The manufacturer will need to provide evidence that they addressed the requirements of NISTIR 8425 like “The configuration of the IoT product is changeable, there is the ability to restore a secure default setting, and any and all changes can only be performed by authorized individuals, services, and other IoT product components.” Evaluating requirements like this requires expertise in evaluating evidence, but it isn’t “testing” in the usual sense of the word.

However, there is one area where testing (or "technical evaluation", anyway) does make sense: learning about vulnerabilities in the product. The best way to learn about vulnerabilities in software or intelligent devices is to look up the product in a vulnerability database. But there's one little problem with that, which I pointed out in my last post: if the manufacturer isn't reporting vulnerabilities in their device to CVE.org (which then passes the information on to the NVD), you will always get the same message when you look up the device: "There are 0 matching records." That might mean the device has no vulnerabilities, but it also may mean the device has 40,000 vulnerabilities, as in an example I provided in this post in 2022.

So, what percentage of device manufacturers are not reporting vulnerabilities in their devices? I’d love to do a rigorous survey of device manufacturers, but I regret to say that my fondness for food and housing for my family prevents my having the resources to do that. In my last post, I did share the results of a short “survey” I did, which indicates – nay, proves – that, of the top ten medical device makers,

·        Four don’t report vulnerabilities for any of their devices;

·        Four report a smattering of vulnerabilities (in one case, just two vulnerabilities reported for all the versions of all the devices sold by this manufacturer); and

·        The other two are huge companies that have many product lines of all types. It would require a big effort just to learn which CPE names in the NVD refer to one of those companies’ medical devices. So, I couldn’t get data on them.

I will also point out that a product security specialist for one of the top five medical device makers told me last year that they have never reported a vulnerability for any of their devices (they are in the first category above). And if medical device makers aren’t reporting vulnerabilities (even though the FDA regulates them for cybersecurity; of course, vulnerability reporting is not required of them), is it likely that baby monitor makers are doing so?

So, I think device makers should be required to report device vulnerabilities to CVE.org, in order to receive the label. This is a golden opportunity.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.

 

Sunday, January 7, 2024

How will the FCC construct their IoT Device Registry/Vulnerability Database?

My most recent post discussed the FCC’s desire to create an IoT Device Registry, in conjunction with the future IoT device cybersecurity labeling program that was originally ordered in Executive Order 14028 in May 2021. A centerpiece of the Registry will be a list of the vulnerabilities applicable to each device. I think this is something that’s needed. However, the question is whether something like it is in place now, and if not, how it can be implemented.

The only existing vulnerability database for devices is the National Vulnerability Database (NVD). A vulnerability database is only useful if it’s easy to find vulnerabilities for a product in it, but in fact it isn’t easy to find IoT devices (or software products, for that matter) in the NVD at all. In the previous post, I discussed the many problems with the CPE identifier on which the NVD is based. I also pointed out a different identifier that is already heavily used to track intelligent devices (and a lot of other things) in international trade: GTIN, one of the GS1 standards. Could the NVD adopt GTIN for devices?

The answer is that it could, but it would take many years and probably a large dollop of private sector support – in terms of time, talent and money. These are the main steps that would be required (I know about these, since the OWASP SBOM Forum has proposed adding purl identifiers to the NVD):

1.      The data in the NVD comes from CVE Reports, which are created by CVE Numbering Authorities (CNAs) and submitted to CVE.org. When a software supplier or intelligent device manufacturer wants to report a vulnerability in one of their products, they contact a CNA to create the CVE report for them. The CVE report contains a CVE number assigned by the CNA, as well as the common name(s) of one or more products that are affected by the vulnerability.

2.      The data from the report is entered in the CVE.org database and then “flows down” to the NVD, which is run by a team from NIST. A member of that team creates a CPE name for the product in the report. The information from the report is added to the NVD.

3.      If GTINs were to be added to the NVD, they would need to be included in the CVE report; that is, the CVE report would need to be able to list the common name of an intelligent device as well as its GTIN (which should always be known to the manufacturer). That requires a change to the CVE JSON schema, on which the CVE report is based (see the illustrative sketch after this list).

4.      The most recent version of the CVE JSON schema is 5.0, although the NVD has not yet implemented it (I was told they’re having some problems doing that).

5.      The next version of the spec will be 5.1, which includes a pull request that the SBOM Forum made in 2022 to add purl to the CVE report. That request was accepted, but given the delays in just getting v5.0 implemented and the need to implement 5.1 next, it will be at least a couple of years before the 5.1 spec is implemented in the NVD and purls can be included in CVE reports. I’m sure it will be 1-2 years after that before the next version of the spec (presumably 5.2) could be implemented, including (hypothetically) support for GTINs.

6.      Moreover, just implementing the 5.2 spec won’t immediately add GTIN to the NVD, since new CVE reports including GTINs would have to be entered to do that. The existing CVEs for IoT devices won’t be linked to GTINs unless someone is willing to undertake a long (and largely manual) effort to add GTINs to those reports.
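To picture what step 3 (and ultimately step 6) might involve, here is a purely illustrative fragment of the “affected product” section of a CVE record. The vendor, product and versions fields follow the existing CVE JSON 5.0 schema; the gtin field is my invention and exists in no version of the schema today (the number shown is made up, though its check digit is valid):

    {
      "affected": [
        {
          "vendor": "Device Manufacturer A",
          "product": "Router XYZ",
          "versions": [
            { "version": "2.10.5", "status": "affected" }
          ],
          "gtin": "00614141123452"
        }
      ]
    }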

In other words, adding GTIN to the NVD will be a long, expensive process – and that’s the good news. The bad news is that it wouldn’t fix the biggest problem that faces any vulnerability database for intelligent devices: the lack of reported vulnerabilities. By that, I mean that a huge percentage of intelligent device manufacturers (very likely the great majority of them) never report vulnerabilities at all, and the rest probably seriously underreport them (in case you didn’t know this, almost all CVEs are reported by the supplier of the software or the device).

How do I know that device vulnerabilities are being seriously underreported? Because if a device manufacturer doesn’t appear in any CPE names in the NVD, this means they have never reported a vulnerability for any of their devices; a search on the manufacturer’s name will yield “There are 0 matching records.”

Since I didn’t have time to go through the hundreds of thousands of CPE names in the NVD to determine which were IoT devices, I decided just to focus on medical devices. If any device makers are reporting vulnerabilities (and remember, when a vulnerability is reported, there is almost always a patch available for it; very few software or device suppliers would report an unpatched vulnerability), it should be the medical device makers, right? After all, in the US, medical devices are one of the few device types that are subject to cybersecurity regulations (from the FDA).

I got a list of the top ten medical device makers and checked every one of them in the NVD (I checked multiple ways of entering each name, since there’s no way to be sure exactly how the manufacturer is listed in CPE names; the sketch after the list below shows what those checks look like). For the top ten, I found:

·        Four have never reported a vulnerability (one of them had already told me that; the person also said they wouldn’t know how to report a vulnerability if they had to).

·        Four have reported a suspiciously low number of vulnerabilities (in the low hundreds of CPE listings, which could in fact mean just 25-50 CVEs; note that a CVE is always reported by product and version number, so a device with three vulnerabilities in each of five versions would account for 15 CPE listings in the NVD by itself), meaning they’re undoubtedly not reporting many of the device vulnerabilities they know about.

·        The remaining two companies are huge conglomerates that sell lots of different devices and software. There are a lot of CPEs reported, but there’s no good way to find out which ones are for medical devices, if in fact any are.
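Here, as promised, is a minimal sketch of the kind of multiple-spelling check involved, run against the NVD’s CPE dictionary API (API version 2.0; the manufacturer name and its variants are hypothetical). Zero hits for every variant suggests the vendor has never appeared in a CVE report – though a spelling you didn’t guess is always possible:

    # Try several spellings of a (hypothetical) manufacturer name against the
    # NVD CPE dictionary. Zero hits across all variants suggests the vendor
    # has never been named in any CVE report.
    import requests

    NVD_CPE_API = "https://services.nvd.nist.gov/rest/json/cpes/2.0"
    variants = ["Acme Medical", "AcmeMedical", "Acme Medical Devices", "acme_medical"]

    for name in variants:
        resp = requests.get(NVD_CPE_API, params={"keywordSearch": name}, timeout=30)
        resp.raise_for_status()
        total = resp.json()["totalResults"]
        print(f"{name!r}: {total} matching CPE entries")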

And remember, these are medical devices, whose proper functioning is often required to keep people alive. If their makers aren’t reporting vulnerabilities, is it likely that manufacturers of other devices are? Of course not (the only other type of device I checked was baby monitors; I couldn’t find a single baby monitor manufacturer that had reported a vulnerability).

In the first paragraph of this post, I asked two questions. One is whether there’s an existing IoT vulnerability database that the FCC could piggyback on (or imitate) to implement their desired Device Registry. The NVD is the only existing IoT vulnerability database, but it is based on an identifier, CPE, that poses big problems. Plus, fixing the CPE problem – by implementing GTIN identifiers – will take many years, if it succeeds at all (and it’s not at all certain that NIST will even be interested in adding a new identifier to the NVD; they may think that CPE is just fine, thank you very much). The NVD can’t be the basis for a new IoT vulnerability database.

So, the FCC needs a new vulnerability database for IoT products. While creating a new vulnerability database sounds hard, the fact is that new vulnerability databases pop up all the time. One reason that changes take so long with the NVD is that portions of the database are 10-20 years old, so there’s a huge amount of legacy baggage that a new database wouldn’t have to deal with. A new IoT vulnerability database would need to download all the device data from the NVD and update it continually. Since the entire NVD can be downloaded in much less than an hour (I’m told), this should be the least of the problems.
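For what it’s worth, here is a minimal sketch of what that initial download (and subsequent refresh) might look like, using the NVD CVE API 2.0, which pages through results with startIndex and resultsPerPage (2,000 is the documented maximum page size). A real mirror would add an API key, retries and incremental updates; this is just the shape of the thing:

    # Page through every CVE record in the NVD via the CVE API 2.0.
    import time
    import requests

    NVD_CVE_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def download_all_cves():
        start, page = 0, 2000
        while True:
            resp = requests.get(NVD_CVE_API,
                                params={"startIndex": start, "resultsPerPage": page},
                                timeout=60)
            resp.raise_for_status()
            data = resp.json()
            yield from data["vulnerabilities"]
            start += page
            if start >= data["totalResults"]:
                break
            time.sleep(6)  # stay under the public (keyless) rate limit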

The big problem for the Device Registry will be getting device manufacturers to start reporting vulnerabilities, since – except for a few device makers like Cisco – they really aren’t doing that today. However, device makers can’t report GTIN-identified vulnerabilities to CVE.org today, since GTINs aren’t supported in CVE reports, and – as I’ve already documented – changing the CVE JSON spec will take several years.

How about a new type of vulnerability report, perhaps operated by CVE.org but perhaps not? It might resemble the current CVE spec, but it would have to differ from it in order to include GTINs. Why not design a new report format that is better suited to intelligent devices and might also make device makers more interested in reporting vulnerabilities?

I have no idea what this new vulnerability spec would look like, but I do know that, if the FCC wants device users (and prospective users) to be able to learn about vulnerabilities in intelligent devices, there needs to be an easy-to-use database with good data on device vulnerabilities. Neither exists today.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.

 

Friday, January 5, 2024

What would an IoT vulnerability database look like?

I recently wrote about the FCC’s Notice of Proposed Rulemaking for an IoT device cybersecurity labeling program, issued last July and due to be updated around now (the program itself isn’t planned to be in place before the end of 2024). One of their suggestions in the NOPR is a “device registry”, i.e. a database where users and prospective users of IoT devices can go to learn about both product features and vulnerabilities, and use the information they find to compare different products before purchase.

Regarding product features, I don’t see any point in making a big effort to gather them into a single database, since probably every device manufacturer has a website where current and prospective customers can learn all they want about its products. I know part of the FCC’s idea is that the device registry will make it easy to compare products, since the user will be able to choose two or more devices and compare their specifications side by side.

This is a nice idea, but it’s fantasy. If you’ve ever tried to compare products (of any sort) from two manufacturers by looking at their websites, you’ll know it’s very hard to make any straightforward comparison, because they almost never report exactly the same types of information in the same way (unless they’re mandated to do so, of course). This is no accident, since the last thing a manufacturer wants is for you to be able to home in immediately on the advantage their biggest competitor has over them.

And in case you think that’s all the more reason to include “standardized” product information in the FCC’s device registry, I’ll just say that doing this is going to require a small army of researchers (or even a large one). Besides their salaries, those researchers would need a huge budget for buying and testing every new version of every device listed in the registry. If these were devices with military use or with significant safety impact, the cost might be justified. But for the great majority of IoT and IIoT devices, it isn’t justified at all.

But that leaves vulnerabilities. Regarding the idea of reporting vulnerabilities for devices in the registry, the FCC says in their NOPR, “We propose that consumers be made aware of any vulnerabilities … through the IoT registry.” At first, I pooh-poohed this statement. The purpose of a vulnerability database is not to inform users (along with every hacker in the world) of unpatched vulnerabilities in particular products. Rather, the purpose is for the supplier of the product to announce, after they have developed a patch for a vulnerability, which version(s) of the product are affected by that vulnerability and what the mitigation is for each of those versions – either applying a patch or upgrading to a new version that includes the patch.

You might ask: what is the purpose of a device registry for vulnerabilities, if the only vulnerabilities listed will be ones that have a patch available and therefore pose no threat to a user who has applied the patch? For one thing, there will always be users who haven’t applied the patch, perhaps because they didn’t see the manufacturer’s notification of the vulnerability, which was sent to customers by email or posted on the manufacturer’s website.

But this reason alone isn’t a justification for the device registry, since there is already a “device registry” for vulnerabilities in IoT devices: It’s called the National Vulnerability Database (NVD). Why would the FCC want to go beyond what the NVD currently offers for identifying vulnerabilities in IoT devices? And, if the FCC decided to go beyond what the NVD now offers, where would they go? Are there any other vulnerability databases for IoT products?

To answer the last question first, I don’t believe there’s any vulnerability database for IoT products other than the NVD and databases based on the NVD, like VulnDB. VulnDB (which has a very good reputation, by the way) starts from the current NVD data, but adds other vulnerabilities to the set of CVEs currently found in the NVD; they also clean up some of the NVD’s data. But they charge for their services and, most importantly, are subject to what I believe is the primary problem with the NVD: how software and IoT products are identified in it.

The only way to identify any product in the NVD is by using a CPE (Common Platform Enumeration) name, which imposes a lot of limitations on the user trying to find vulnerabilities in an IoT device. Even though CPE names are based on a rigid specification, they are not created by any automatic process. Instead, they are created by individual NIST staff members who are part of the NVD team.
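For reference, a CPE 2.3 name is a colon-delimited string with eleven fields after the “cpe:2.3” prefix: part (“h” for hardware, “a” for application, “o” for operating system), vendor, product, version, and seven more that are usually wildcarded with asterisks. A hypothetical name for a device might look like this:

    cpe:2.3:h:device_manufacturer_a:router_xyz:2.10.5:*:*:*:*:*:*:*

Every one of those fields is a chance for an inconsistency or a typo, as we’ll see below.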

Breaking news: Like all other people, NIST staff members make mistakes. This means that someone who is looking for a particular device is not likely to find it if the NIST person made even a simple mistake in creating the CPE name. Unlike Google, the NVD doesn’t use fuzzy logic or AI to come up with suggestions for what you might be looking for. To get a usable result, you must enter an exact match to something in the database. Otherwise, you’re SOL (don’t ask what that means; it’s a technical term).

What if the NIST staff member didn’t make a mistake? Will it almost always be easy to find the CPE name you’re looking for? No. CPE names are created using the information that is included in a CVE report, which is created by one of the 300+ CVE Numbering Authorities (CNAs) that work on behalf of CVE.org. The CNA almost always gets the information to put in the report from someone who works for the manufacturer of the device (note that most CNAs are themselves software developers, like Oracle and Red Hat). And believe it or not, those people aren’t perfectly consistent among themselves, either.

For example, suppose that in February, Jim Jones of Device Manufacturer A reports to a CNA that CVE-2024-12345 is found in the device he calls “Router XYZ version 2.10.5”; that information is included in a CVE report, which is sent to CVE.org and from there flows down to the NVD. The CPE name listed as affected by that CVE is then created (by a NIST employee on the NVD team) using the information that Jim provided. But in March, Sue Smith of Device Manufacturer A reports that CVE-2024-23456 is found in the device she calls “Router_XYZ vsn. 02.10.05”. Is this the same device? After all, both the product name and the version string are different.

Rather than take a chance of being wrong if they decide that both vulnerabilities apply to the same device, the NIST NVD team member is likely to construct the two CPEs using different product names and version numbers – i.e., they will create a different CPE for each CVE report.
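Concretely, the NVD could end up with two CPE names like these (hypothetical, following the pattern shown earlier), each attached to exactly one of the two CVEs:

    cpe:2.3:h:device_manufacturer_a:router_xyz:2.10.5:*:*:*:*:*:*:*     (CVE-2024-12345)
    cpe:2.3:h:device_manufacturer_a:router_xyz:02.10.05:*:*:*:*:*:*:*   (CVE-2024-23456)

Since NVD searches require an exact match, no single search will ever return both CVEs.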

Of course, if Jim and Sue were referring to different versions of different products, that would be fine. But if they were actually referring to the same version of the same product, that wouldn’t be fine. In that case, searching on one of the CPE names will just yield the one vulnerability that applies to that name. When someone searches for “Router XYZ version 2.10.5”, they will only find CVE-2024-12345; when they search for “Router_XYZ vsn. 02.10.05”, they will only find CVE-2024-23456.

Thus, if the “two devices” are actually the same, a user of that device will not be able to learn about both vulnerabilities, unless they somehow know that the same product is listed under two CPE names. And how would they ever learn that there are two names? (By the way, this same problem would occur if the NIST employee tried to reuse the first CPE name in the second CVE report but made a typo; there would again be two CPE names for the device, each listing just one vulnerability.)

How about a user mistake? What happens if a customer of Device Manufacturer A fat-fingers their NVD search and enters “Routet XYZ version 2.10.5”, not “Router XYZ version 2.10.5”? Instead of being told that CVE-2024-12345 is applicable to that product, will that person be told they probably mis-entered the product name? No, they won’t. They will see the message “There are 0 matching records.”
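Incidentally, near-miss suggestions are not hard to build; Python’s standard library can do a toy version in a few lines. Here is a sketch (product names hypothetical) of the “did you mean?” prompt the NVD could, but doesn’t, offer:

    # Toy "did you mean?" suggestion using only the standard library.
    from difflib import get_close_matches

    known_products = ["Router XYZ version 2.10.5", "Router ABC version 1.0.0"]
    query = "Routet XYZ version 2.10.5"  # the fat-fingered search above

    print(get_close_matches(query, known_products, n=3, cutoff=0.8))
    # ['Router XYZ version 2.10.5']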

Unfortunately, the “There are 0 matching records” message appears just about every time a search doesn’t return any vulnerabilities. It could mean many different things, including:

a.      There is a CPE name that closely matches the information entered, but there is one small difference (perhaps “XYZ Inc.” instead of “XYZ, Inc.”) that makes it appear to be a different name. This might be due to a batch of product labels that omitted the comma in the company name, to a mistake made by the NIST person who created the CPE, or to a typo by the user executing the search – among other things.

b.      The device has been sold to a different manufacturer, meaning the CPE name for current and future vulnerabilities for that product/version has changed to include the new manufacturer name. Since the user doing the search is entering the name of the previous manufacturer, they can’t find any current vulnerabilities, because they are all entered using the new manufacturer’s name.

c.      The device is still made by the same manufacturer, but the marketing department has decided that “Router XYZ” isn’t the most scintillating name for it. They have decided that “SupeRouter” is much better. So, searching on “Router XYZ” now only identifies old versions of the product.

d.      The manufacturer decided to change how they represent product versions, so that minor versions are indicated with a letter and the patch level is prefaced by “Patch”. When the manufacturer introduces a new major version 3, instead of being referred to as “version 3.0.0”, it is referenced as “version 3A Patch 0”. If someone has an old version of the device and wants to find the new major version (and hasn’t been following the manufacturer’s announcements religiously), they will enter “version 3.0.0” and just see “There are 0 matching records.”

e.      The device searched for is completely vulnerability free (therefore, there are no matching CVE records).

f.       The device searched for is loaded with vulnerabilities, but the manufacturer has never reported a single one of them. Because a CPE name is only created when a CVE report refers to that device, no search for that device will ever turn up any matching records.

The last two reasons for the “There are 0 matching records” message are the most telling: The NVD provides the same message when a user searches for a product that has no vulnerabilities as it does when the product is loaded with vulnerabilities (40,000 of them in one case, as described in the post just referenced) whose manufacturer has never reported any of them. Unfortunately, most people who search the NVD are likely to be anticipating a good outcome: that the product they’re searching for doesn’t have any vulnerabilities. They will always be pleased with this result, but there is no way to tell from the message itself whether their happiness is justified.

The above should show that CPE is simply not a good identifier for IoT devices. What identifier would be better? I have been discussing purl a lot, since it is currently the only identifier besides CPE that is used in vulnerability databases. The OWASP SBOM Forum made purl the centerpiece of the “proposal” we put together in 2022 to (ultimately) replace CPE in the NVD. The best thing about purl is that it requires no central database.

To find an open source product in a vulnerability database based on purl, such as OSS Index, the user just needs to know the URL for the repository from which they downloaded the software (e.g. a package manager), and the name and version string of the project in that repository. However, purl can only be used with software that can be downloaded from a defined internet location. So far, nobody has been able to demonstrate to me a way to download a smart thermostat by clicking on a URL.
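To illustrate: a purl like pkg:npm/lodash@4.17.20 completely identifies version 4.17.20 of the lodash package in the npm registry; nothing needs to be looked up in a central naming database first. Here is a minimal sketch of querying OSS Index with one purl, using Sonatype’s documented component-report endpoint (the package chosen is arbitrary):

    # Ask OSS Index which vulnerabilities affect one purl-identified component.
    import requests

    OSS_INDEX = "https://ossindex.sonatype.org/api/v3/component-report"

    resp = requests.post(OSS_INDEX,
                         json={"coordinates": ["pkg:npm/lodash@4.17.20"]},
                         timeout=30)
    resp.raise_for_status()
    for component in resp.json():
        print(component["coordinates"], len(component.get("vulnerabilities", [])))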

When the SBOM Forum put together the proposal in 2022, we would have preferred to find an “intrinsic” identifier for intelligent devices, i.e., one that doesn’t require a central database (see pages 7-9 of the proposal); purl is the poster child for intrinsic identifiers. Of course, no such identifier for devices exists today.

However, another quality we were looking for in an identifier for IoT was that it would already be in use to identify devices. That is why Steve Springett suggested the GS1 standards, which the world knows because they include standards for barcodes (see pages 12-14 of the proposal); UPC is one of those standards. These are the primary standards used for commerce, for intelligent devices as well as many other products.

The big advantage of using these standards – specifically, GTIN and possibly GMN – is that, since they’re already so widely implemented, it should not be hard to learn the GTIN of a device you already have. It might be printed on the barcode label; otherwise, you can scan the FCC’s device cybersecurity label (which, in case you hadn’t guessed, will consist of a QR code attached to the device that takes you to a website with the “label” information) – or else scan the barcode itself.
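Incidentally, GTINs have a property that would be handy in a registry: the last digit is a check digit, so many typos can be caught before a lookup is even attempted. Here is a minimal sketch of the standard GS1 algorithm (the sample number is the EAN-13 example commonly used in textbooks):

    # GS1 check digit algorithm, valid for GTIN-8/12/13/14: counting from the
    # right, the digit next to the check digit gets weight 3, and the weights
    # then alternate 3,1,3,1,...
    def gtin_check_digit(payload: str) -> int:
        total = sum(int(d) * (3 if i % 2 == 0 else 1)
                    for i, d in enumerate(reversed(payload)))
        return (10 - total % 10) % 10

    def is_valid_gtin(gtin: str) -> bool:
        return gtin.isdigit() and gtin_check_digit(gtin[:-1]) == int(gtin[-1])

    assert is_valid_gtin("4006381333931")       # valid sample EAN-13
    assert not is_valid_gtin("4006381333932")   # one-digit typo is caught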

The disadvantage of using the GS1 standards is that, if a manufacturer doesn’t already have a barcode for their product, they will have to pay a fee to GS1 to obtain one (and perhaps to license its use annually). I would assume that’s a modest fee, but if a manufacturer can’t pay it, there would need to be some alternative way to assign the product a numeric code.

However, if the FCC decides to use the GS1 standards to identify IoT devices in the Device Registry, this means the registry can’t be based on the NVD, although it can certainly incorporate CVE data and CPE names from the NVD (as well as update them daily or even more frequently). And since there is no alternative vulnerability database for IoT devices, this means the FCC will need to develop their own, although probably with some help from interested third parties.

The FCC will also need to require that device manufacturers start taking steps they are seldom taking today (presumably, this would be a condition for receiving the label). More on that topic in a post that’s coming soon to a blog near you.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

I lead the OWASP SBOM Forum. If you would like to join or contribute to our group, please go here, or email me with any questions.