Thursday, April 28, 2022

SBOMs and Attestations


I had to miss S4 last week, but I did hear a lot about it from attendees; as usual, it was very good. There was a lot of discussion of SBOMs, but two of the ideas that supposedly met with a lot of approval strike me as completely wrong. I’ll address the first of those in this post, and the second in another post soon.

The first disturbing thing I heard was that a lot of people supported the idea that they don’t need SBOMs to accomplish their vulnerability management goals; attestations alone will do. I’ve said myself that users don’t need SBOMs (or VEXes) as such, but that was just by way of saying that what software users do need is the risk analysis that requires SBOMs.

In other words, you don’t need to have SBOMs, but you do need to learn about exploitable vulnerabilities due to components in your software. You will be able to learn that information yourself by receiving SBOMs and VEXes and feeding them into the proper tool, but there will also be third parties that will perform that analysis for you (and many other organizations) as a service.

However, what it seems the people at S4 were saying was, “I don’t need to track component vulnerabilities at all, in the software products my organization uses. I just need to get an attestation from the supplier that the software doesn’t have vulnerabilities. Then I can show the attestation to my regulator, compliance department, or whoever’s bugging me about how safe my software is. This will make them shut up, and I can go back to doing the other 9,999 things I have to do.”

Of course, a conscientious supplier would never give you such an attestation in the first place. Even though the supplier might honestly believe it’s completely true today, tomorrow they might find out that the product has Log4j all over the place – and has had it for years.

I’m sure there are suppliers who will give you such an attestation. But keep in mind that:

1.      The average software product has well over 100 components. Many have thousands.

2.      Each component probably has as much chance of having a serious vulnerability as the code written by the software developer itself. So there are likely to be many times more vulnerabilities in the components of your software than are listed in the NVD for the product itself.

3.      A 2017 study by Veracode found that “A mere 52 percent of companies reported they provide security fixes to components when new security vulnerabilities are discovered.” It also found that “..only 23 percent reported testing for vulnerabilities in components at every release.”

4.      Of course, one would hope that those numbers have improved since that time, but if you think all software suppliers are now diligently patching every vulnerability in a component in their product – or even checking for component vulnerabilities…well, I think you need to think again.

This isn’t to say that the average software developer is trying to pull a fast one on you. But the fact is that we’re all busy, and we all put off things that nobody is bothering us about. If your customers don’t have SBOMs and therefore don’t know what components are in your product, your phone isn’t going to be ringing off the hook whenever a serious vulnerability is discovered in one of those components. Of course, you’ll get around to fixing those vulnerabilities, no question. Right after you get the next release out…

Meanwhile, if a customer asks for an attestation that your software is vulnerability-free, and the NVD doesn’t currently show any vulnerabilities for it…well, you’ll be happy to sign it, because after all…you know you’ll get to those component vulnerabilities sooner or later.

So take these attestations for what they’re worth, which is not very much. If you really want to learn about the vulnerabilities lurking in components in your software, you need SBOMs to tell you (or your chosen third party service provider) what those components are. And you need VEX documents to tell you which of those component vulnerabilities are exploitable. Accept no substitutes! 

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Monday, April 25, 2022

Lew Folkerth is still at it!


If you’re involved with the electric power industry and you’ve been reading this blog for a while, you’re undoubtedly familiar with Lew Folkerth. Lew is Principal Reliability Consultant with the RF region of NERC and is probably the most respected authority on the NERC CIP standards. But more importantly, he’s a great teacher on those standards and he places everything he says in the context of cybersecurity and risk management (he’s very knowledgeable about both subjects).

Lew writes a column on NERC CIP in RF’s newsletter, which is published quarterly. Since the newsletters are big files, Lew also publishes his columns separately. You can access them by going here and dropping down the menu for Standards and Compliance at the bottom. Under Outreach, you’ll see a link to every one of his columns since he started writing them in 2014. And BTW, you’ll also see the link to the slides for the talk I gave on SBOMs and CIP-013 compliance at RF’s March Tech Talk.

Most importantly, Lew has just put together, for the first time since 2019, a single file with all of his columns. Here are some of my favorites, starting this year and moving back (page numbers refer to the PDF itself, not the numbers at the bottom of each page):

1.      BCSI Revisions  (page 127) – this is an excellent article (published in Q1) discussing the revisions to CIP-004 and CIP-011 to update the protections for BES Cyber System Information, including BCSI in the cloud.

2.      Using Advanced IT Technologies in an OT Environment Part 2 – Containers (page 121) – another excellent article that both gives a great introduction to containers and describes how you can utilize containers within your Electronic Security Perimeter, yet still be in compliance with the CIP requirements. I had never thought this was possible.

3.      Implied Requirements (page 117) – This is one of the endearing “features” of NERC CIP: many of the requirements are implicit. Because they’re implicit, you can’t receive a violation for missing them, but missing them will put you out of compliance with other requirements. I’ve written about implicit requirements several times, including here and here.

4.      Incident Response and Incident Management (page 115)

5.      CIP-012-1 In-Depth (page 104), followed by a very detailed accompanying article starting on page 106

Of course, if you get hooked on Lew, you should subscribe to the RF newsletter, which has a lot of other interesting articles besides Lew’s columns.

 

If you’re with a NERC entity or an IT or OT supplier to the power industry, I’d love to have a discussion with you about CIP-013 and supply chain cybersecurity. Please drop me an email.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

Wednesday, April 20, 2022

Hmm…It seems SBOMs might become big one of these days…


I’ve written about Steve Springett before. As leader or co-leader of the OWASP CycloneDX and Dependency-Track projects, but even more importantly as a real innovator, I rank him as one of the two most important people (along with Allan Friedman) in the SBOM “movement”. I’ve written about him a number of times, including recently in this post and this one.

One of Steve’s traits is that he’s quite unassuming. So I was completely blown away when, during a small meeting with other SBOM fans last week, he repeated a number he had heard from a representative of Sonatype recently. Sonatype is probably one of the three most important companies in the SBOM community. They are one of the leading software composition analysis (SCA) tool vendors – but they also operate OSS Index, the leading database of vulnerabilities in open source software.

I’ve written about Steve’s Dependency-Track project several times. He initiated it in 2012, years before the term SBOM started to be used. It was the first – and still is the leading – open source tool that will a) parse an SBOM for a software product and identify the components; b) search in a database like the National Vulnerability Database (NVD) or OSS Index for vulnerabilities that apply to those components; and c) track and update that list as often as needed. And while there are certainly other software risk analysis use cases for SBOMs, this is IMHO the most important one.
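
For the technically curious, here’s a minimal sketch of that workflow, assuming a CycloneDX SBOM in JSON form and Sonatype’s public OSS Index REST API (the endpoint, batch limit, and response fields below reflect the v3 “component-report” API as documented today; check the current docs before relying on them):

```python
# Minimal sketch of the Dependency-Track-style workflow: read a CycloneDX
# JSON SBOM, pull out each component's package URL (purl), then ask OSS Index
# which of those components have known vulnerabilities.
import json
import requests

OSS_INDEX = "https://ossindex.sonatype.org/api/v3/component-report"
BATCH = 128  # OSS Index caps the number of coordinates per request

def component_purls(sbom_path):
    """Return the purl of every component in a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [c["purl"] for c in sbom.get("components", []) if "purl" in c]

def report_vulnerabilities(purls):
    """Query OSS Index in batches and print components with known CVEs."""
    for i in range(0, len(purls), BATCH):
        resp = requests.post(OSS_INDEX, json={"coordinates": purls[i:i + BATCH]})
        resp.raise_for_status()
        for component in resp.json():
            for vuln in component.get("vulnerabilities", []):
                print(component["coordinates"],
                      vuln.get("cve") or vuln.get("id"),
                      "CVSS:", vuln.get("cvssScore"))

if __name__ == "__main__":
    report_vulnerabilities(component_purls("product-sbom.json"))
```

Dependency-Track does far more than this, of course (it stores the results, rescans on a schedule, and tracks changes over time), but this is the heart of the loop.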

Back to the story. Last week, Steve was talking to a person from Sonatype about a different subject and wondered out loud how many inquiries Dependency-Track initiates to OSS Index every month; I believe this was the first time he’d even asked this question of Sonatype. Since DT is an open source project, he just knows how many times it’s been downloaded, but not how many copies are being used, or how often they’re being used.

The person checked and emailed him later: In the previous 30 days, instances of Dependency-Track initiated 202 million requests to OSS Index.

Note from Tom Sept. 25: Steve reported new numbers from Sonatype for a recent month (I think it's August, but maybe July). The number of requests from Dependency-Track instances to OSS Index has grown from 202 million to 270 million per month - which is an increase of close to one third since March. If requests continue to grow at that rate, they'll double every year and a half.

Using an estimate of 100 components per request (since a request from D-T is usually for all of the open source components in an SBOM), this means D-T instances are now requesting vulnerability information on 27 billion components per month.

Does this mean there are 202 million users of Dependency-Track? No. Steve says the average DT system makes a few hundred requests to OSS Index every day. If we assume 250 requests/day over 30 days, we get around 27,000 users. That’s a good number, but what’s most impressive is how much the average system is being used. Even though most of the queries are generated automatically, this shows that somebody finds the analysis that DT performs to be very useful. And BTW, DT isn’t the only tool that does this. Sonatype and the other SCA tools almost certainly generate more requests every day than does DT by itself. It’s a safe bet that there are at least 50,000 organizations that are using SBOMs for vulnerability management purposes today.
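
For those who like to check the arithmetic behind these estimates, here it is in a few lines (the 250 requests per day is Steve’s rough average, and 100 components per request is the rough estimate used above):

```python
# Back-of-envelope arithmetic behind the estimates in this post.
monthly_requests = 202_000_000        # DT requests to OSS Index in one month
per_system_per_day = 250              # Steve's rough average per DT instance

print(round(monthly_requests / (per_system_per_day * 30)))  # ~26,933 users

components_per_request = 100          # rough average components per SBOM
print(270_000_000 * components_per_request)   # 27 billion components/month
```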

Who are those users? Steve and I agree that the great majority of them are developers. Of course, they use DT because they want to know about vulnerabilities found in the components in their products. Armed with this information, the supplier can patch the serious vulnerabilities. Moreover, if a component seems especially prone to vulnerabilities or if the component supplier is slow about issuing patches themselves, they can replace the component altogether.

But Steve says DT has a small but growing number of what I call “pure” end user organizations. I use that adjective not because I think those organizations are without sin, but because they aren’t primarily in the business of developing software. Instead, they sell insurance, fly airplanes, mine copper, build houses…you know, the rest of us. Of course, I don’t think there are too many organizations nowadays - other than maybe my neighborhood dry cleaners - that don’t develop software in one way or another. But it’s safe to say that the great majority of software products that these organizations buy or download are used to help them achieve whatever their mission is, not as components of products that they provide to other organizations or to individual consumers.

Currently, end user organizations use SBOMs very little. The immediate reason for that is their suppliers aren’t offering them SBOMs. And the reason for that is the end users aren’t asking for them. Why aren’t they asking for them? It’s in large part because there are only a handful of low-cost or open source tools or services currently available that allow them to easily utilize SBOMs for component vulnerability management.

Dependency-Track is by far the best of the open source tools, but for some of us, installing and maintaining an open source tool is a challenge. And there’s another drawback at the moment: neither DT nor any other SBOM consumption tool today can ingest VEX documents. As you may know, VEXes will make the vulnerability tracking process much more manageable; this is because VEXes will ultimately remove the need to track component vulnerabilities that aren’t actually exploitable in the product itself. Steve says DT will be able to ingest VEX documents by this summer, and I know that at least one “SBOM processing” cloud service also plans to do that by this summer. I’m sure many more tools and services will follow, even if they’re not ready that soon.

But this will change. In August, every US federal agency will be required, per Executive Order 14028 from last May, to start asking their software suppliers for SBOMs. And they’ll get them, since – as we’ve just seen – huge numbers of suppliers are already producing SBOMs so they can make their own software more secure (and probably some are also trying out SBOMs in order to learn how to produce them, in preparation for compliance with the EO). The suppliers will make their SBOMs available in August, although VEX documents won’t be available in any sort of volume until the fall at the earliest.

I’ll be honest: I really don’t see SBOMs being utilized much by “pure” end users, outside of tests and proofs of concept, until early 2023. But they will be utilized, and ultimately in huge volumes. I say this because of Steve’s number: at least 27,000 organizations making 202 million inquiries a month, using just one tool, to one of the two major vulnerability databases. These numbers show that suppliers aren’t just testing the waters with SBOMs; they’re using them in volume because they’ve found they really are useful for managing software vulnerabilities. When the consumer tools and services are available (as well as VEX documents and more written guidance), the end users will certainly jump in – although not all at once.

What happens when the end users start jumping in, and how many of them will there be? First, what do you think is the ratio between software suppliers and pure end user organizations? Doing a little internet checking, I’m going to guess it’s on the order of at least 1 to 50. So at a minimum, there will be 50,000 * 50 = 2.5 million organizations using SBOMs for software risk management in the US alone in the next say 5-10 years. But even that number could be low. As an old Danish proverb says, “Prediction is difficult, especially about the future.”

So the SBOM/VEX “market” will be a huge one. Can we just sit back and wait for it to materialize? No, there are a number of obstacles that need to be removed (the lack of consumer tooling and services is just one of the largest of these). But do you remember – or maybe you don’t – that people like me said this about the internet in the early 1990s? “Oh, there are so many obstacles: It’s so difficult to use, with all of these IP addresses you have to remember...I haven’t heard of anything I need the internet for…loading a web page with even a top-of-the-line 56K modem is sooo slow…If you want real internet access (not AOL), you have to deal with strange, geeky companies and learn almost a whole new language…etc.”

Believe me, I said every one of those things – because they were real problems, and they were certainly obstacles for my adopting the internet (not that I needed obstacles, since I’m slow to adopt any new technology. I believe I was the last person in the western hemisphere to get a cell phone). But in 1995, the Netscape IPO changed my and a lot of people’s minds. It showed there were a lot of possibilities that people like me just didn’t realize, and that the obstacles were being removed because – dare I say it? – there was real money to be made. And so it’s turned out. I’m fairly sure that, within the next 12 months, a similar financing event – IPO or otherwise – will convince a lot of the skeptics that SBOMs are real as well (in fact, an announcement today isn't exactly the Netscape IPO, but it shows SBOMs are no longer just something for geeks to play with. Congratulations, Fortress team!).

Because SBOMs are real, and now Steve has the numbers to prove it. To quote him, “I think this proves that 1) SBOMs are useful, and 2) they can be operationalized at massive scale.”

Amen, Brother Steve!

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

 

Sunday, April 17, 2022

Has CIP-013 worked?

 

When FERC mandated in July 2016 that NERC develop a supply chain cybersecurity risk management standard, just about everyone (including me) focused on the fact that it was probably the first supply chain cybersecurity standard outside of the military – in the US and possibly anywhere. This was true, but NERC CIP-013 (produced in response to FERC’s order) was also one of the first cybersecurity risk management standards: i.e., a standard whose explicit goal was establishment of a cyber risk management program. I think that will be CIP-013’s most important legacy. This is why I say that:

A brief, admittedly biased history of the CIP standards

No NERC standards before CIP version 1 had even mentioned anything about risk. And why should they have? They were all about physical actions: trimming trees under Transmission lines, balancing load and supply in real time, etc. The results of these actions could be objectively measured and needed to be kept within certain operating limits. Risk management had nothing to do with those standards. They were based ultimately on the laws of physics, which in themselves don’t understand risk.

However, the team that drafted CIP version 1 knew that cybersecurity was different. They knew there’s just about zero historical data that will let you make predictions like, “If we decrease our patching interval from 45 to 30 days, our chances of suffering a cyberattack – that could cause an adverse grid event – will go down by 15%.”

Instead, the only thing you can say with certainty is, “Patching more frequently will in general lower the risk that we will be compromised through an unpatched software vulnerability.” Does that mean a utility should patch every hour, even if it means shutting down the rest of its operations and putting its entire staff to work on patch management? No. Like every organization, electric utilities have limited resources and have many other risks (both cyber and otherwise) that they need to manage. They need to balance their various risk management activities against each other, so they can reduce as much overall risk as possible, given their available resources. Oh, and they have to keep the lights on at the same time.

In other words, cybersecurity is at heart about risk management. An important principle of risk management is accepting risks that are too expensive to mitigate, when weighed against the benefits that would be realized by mitigating them. Sure, a meteor strike on your headquarters would have a devastating effect on your organization (to say nothing of its people). Does that mean that every organization in the world should spend every extra dime (shekel, euro, rupee, etc.) they have on protecting their headquarters against a meteor strike?

No, because risk is a combination of likelihood and impact. The impact of a meteor strike would be huge, but the likelihood is so small that the risk itself is close to zero. No organization that I know of spends anything to fortify their headquarters (or anything else) against meteor strikes. By the same token, there’s no amount of spending on cybersecurity that would allow any organization – electric utility, dry cleaners, pizza parlor, the US military, etc. – to say it was perfectly cybersecure. There ain’t no such thing. There will always be some residual risk that the organization needs to accept.
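
To put the likelihood-times-impact idea in concrete (if toy) terms, here’s a sketch in which every number is invented purely for illustration:

```python
# Toy illustration of risk as a combination of likelihood and impact.
# All of these numbers are made up for illustration only.
def annual_risk(likelihood_per_year, impact_dollars):
    """Expected annual loss: probability of the event times its cost."""
    return likelihood_per_year * impact_dollars

meteor = annual_risk(1e-9, 500_000_000)     # catastrophic, but wildly unlikely
unpatched = annual_risk(0.05, 2_000_000)    # modest impact, plausible likelihood

print(f"Meteor strike:  ${meteor:,.2f}/year")      # $0.50/year
print(f"Unpatched vuln: ${unpatched:,.2f}/year")   # $100,000.00/year
```

The meteor dwarfs the vulnerability on impact, but the risk calculation points every one of your mitigation dollars at the vulnerability.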

For this reason, the CIP v1 drafting team included in many of the CIP v1 requirements the words “…or acceptance of risk…” In other words, if the utility believes that the cost of fully complying with the words of the requirement would outweigh whatever benefits (in risk reduction) would be achieved by compliance – and can document their reasons - they would have the option of not fully complying with those words. And they wouldn’t be held in violation of the requirement.

But FERC would have none of this. When they approved CIP v1 – after 17 months of consideration – in early 2008, they said, somewhere in their roughly 800-page Order 706, that they wanted NERC to start work on a new version that would, among many other things, eliminate the wording about acceptance of risk. And in the meantime, NERC needed to audit as if those forbidden words weren’t there (by the way, if you want to see a very impressionistic history of NERC CIP that I wrote in 2018, go here).

But this caused a problem: The team that drafted CIP v1, believing that the language about acceptance of risk would remain in the requirements they were drafting, deliberately made the requirements quite prescriptive. After all, if a NERC entity found they couldn’t follow the prescriptive wording of the requirement, they could always just accept the risk. What could possibly go wrong?

As it turns out, a lot. By eliminating “acceptance of risk” but not changing the overly prescriptive nature of the CIP v1 requirements, FERC left NERC with the worst of both worlds: hard-edged prescriptive requirements with no means of “softening” them through risk considerations.

Over the first four or five years of CIP enforcement – and as CIP v1 was replaced by v2 and then v3 - there were loud and growing complaints from NERC entities about the amount of time and money they were being required to spend to comply with rigidly prescriptive requirements. Yea, great was the weeping and wailing and gnashing of teeth over NERC’s “zero-tolerance” (i.e. non-risk-based) enforcement of the CIP requirements.

It wasn’t until CIP version 5 came into effect in 2017 that there were any CIP requirements (let alone entire standards) that at least implicitly allowed NERC entities to take risk into account: these pioneering requirements, both part of CIP v5 (which was a complete rewrite of all the CIP standards), were CIP-011-5 R1 (Information Protection) and CIP-007-5 R3 (Anti-Malware)[i].

But the watershed event in moving to a risk management approach was FERC Order 829, which ordered NERC to develop a supply chain security standard in July 2016. Even though NERC had never – since their unfortunate experience with “acceptance of risk” – officially referred to “risk” in any standard, in Order 829 FERC specifically called for a “supply chain (cyber) risk management” (my emphasis) standard. And they specifically warned against “one size fits all” – i.e. prescriptive – requirements.

Why this change of heart? For one thing, FERC certainly had heard the complaints about the then-current standards (when they issued Order 829 in 2016, the industry was still complying with CIP version 3, although they were preparing for CIP v5, which came into effect on July 1, 2017), and knew that the last thing NERC entities needed was another highly prescriptive CIP standard.

But I believe (and believed in 2016) that the main reason why FERC wanted the new standard to be risk-based was because there was simply no other way to do it. Remember, even though the burden of compliance with CIP-013 falls on electric utilities (and other bulk electric system entities like federal power marketing agencies and independent power producers), the standard is really aimed at the suppliers of the hardware, software and services that those utilities rely on to operate the BES.

Let’s say a utility decided to hold all of those suppliers to a very high standard of cybersecurity, e.g. ISO 27001. For some suppliers, such as the supplier of the energy management system (EMS) that literally runs the utility’s own corner of the grid, it makes sense to require this. However, for other suppliers, such as perhaps the vendor of maintenance services in a power plant, this would be overkill.

If the utility tried to force the maintenance services vendor to comply with ISO 27001, they would quickly find themselves looking for a new vendor. A vendor isn’t going to spend many times the profit they may make from a customer in order not to lose that customer; they’ll simply try to find another customer (perhaps in another industry) that isn’t so demanding. And if the vendor can’t find another customer to replace the utility, they would still be much better off financially than if they’d agreed to pay for an ISO 27001 audit just to keep one customer.

And this is the problem with supply chain risk management: the organization has to convince its vendors to incur costs in order to keep the organization’s business. The vendors will never do that if the organization holds them all to the same high standards, rather than tailoring its requirements to the degree of risk posed by each vendor. A supply chain risk management standard like CIP-013 has to be risk-based if it is to have any chance of succeeding.

The NERC standards drafting team (SDT) took Order 829 to heart when they developed CIP-013. In fact, in my opinion, the SDT went a little too far. CIP-013 consists of a grand total of five sentences (although with a number of clauses), divided into three requirements:

1.      The first part of requirement R1 (R1.1) tells the utility to develop a “supply chain cyber security risk management plan(s)”. The plan needs to “..identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services resulting from: (i) procuring and installing vendor equipment and software; and (ii) transitions from one vendor(s) to another vendor(s)..”

2.      The second part of R1 (R1.2) identifies six risks that need to be included in the plan, such as “Disclosure by vendors of known vulnerabilities related to the products or services provided to the Responsible Entity”. While all six of these risks are important ones, they weren’t developed as some sort of comprehensive catalog. Rather, the drafting team (and I attended a number of their meetings) simply gathered in one place six statements about particular risks that FERC made at various random points in Order 829. These six risks are by no means the only six that need to be included in the utility’s supply chain cyber risk management plan. But these are the only risks that FERC specifically required to be included in the plan; the utility is free to decide for themselves what other risks should be included.

3.      R2 requires the utility to “..implement its supply chain cyber security risk management plan(s) specified in Requirement R1.” That’s literally all it says.

4.      R3 requires the utility to review its plan every 15 months.

I just summarized the entirety of CIP-013. The utility has lots of freedom to develop their supply chain cyber risk management plan in R1, but in R2 they have no freedom at all: they have to implement the plan as written. However, this isn’t as bad as it sounds: If the utility decides they made a mistake in their original plan – or if they realize that changed circumstances require a change in the plan – they’re free to change it at any time; they just have to document the changes they made and why they made them.

When NERC entities were starting to think about CIP-013 compliance, I know some of them made a very understandable mistake (especially for NERC entities): They focused on the six items in R1.2 and considered these to be the total of what’s required by CIP-013. After all, those six requirement parts were the most like the existing CIP requirements; why shouldn’t they be the only real “requirements” in CIP-013?

However, that attitude completely ignored the wording at the beginning of CIP-013 R1: “Each Responsible Entity shall develop one or more documented supply chain cyber security risk management plan(s)… The plan(s) shall include:..” This is followed by the text of R1.1 and R1.2, meaning that what’s described in both R1.1 and R1.2 needs to be included in the plan. So, without a doubt, the plan needs to include the six R1.2 items. But it also needs to include R1.1.

What threw these utilities off was probably the fact that R1.1 doesn’t prescribe anything in particular: It just requires the entity to develop a plan to “identify and assess” supply chain cybersecurity risks. In other words, it’s up to the entity themselves to determine what are the risks they face. Of course, that idea was a 180-degree change from all previous NERC standards. If you were complying with any of the CIP version 3 standards, or the FAC, BAL, etc. standards, and you told an auditor that you should be allowed to decide what goes in your compliance plan, not NERC, how far do you think that would get you?...You’re right – not very far at all.

For this reason, it’s understandable that NERC entities wouldn’t know what to do with a requirement that says it’s up to them to decide what goes into their supply chain cyber risk management plan. What’s to prevent an entity from simply including the six R1.2 items in their plan? In other words, how could they be found non-compliant if that’s all they put in their plan?

They can be found non-compliant because they’d have to convince the auditor, as part of their R1.1 compliance evidence, that they searched high and low for any other supply chain cyber risks that applied to them, beyond the six risks in R1.2 – and they just couldn’t find any others at all. If I were the auditor and that evidence were presented to me, I’d just ask some questions like:

1.      What about SolarWinds-type attacks? Don’t you think you should be concerned about those? More importantly, what have you done that makes you immune to such attacks?

2.      What about Log4j-based attacks? Have you determined that you don’t have any Log4j at all in your environment (even in a component of a component of a component)?

3.      How about the risks identified in the NATF Criteria? They include the six R1.2 items, but they go well beyond those (there are 60 Criteria now), including:

a.      The risk that a software or device supplier won’t follow a secure development lifecycle (criterion #47);

b.      The risk that a supplier might install a backdoor while developing a product and not remove it before they ship it to you, leading to your being compromised (criterion #15);

c.      The risk that a product supplier might not conduct 7-year background checks on its employees (criterion #4); etc.

These are all supply chain cyber risks. The entity will have to convince the auditor that they at least considered all of these risks when they were developing their plan. And if they didn’t include them in the plan, they’ll have to provide some justification for not including at least each of the 60 NATF Criteria. From what I’ve heard about how CIP-013 is being audited, and from the presentations I’ve seen by regional auditors on the subject, I think an entity that only included the R1.2 risks in their plan would find the auditor to be fairly skeptical.

In retrospect, it would have been good if the drafting team had included a list of say ten areas of supply chain security risk that the entity needs to consider in their plan; if they decide one or more of those areas don’t apply to them, they need to document that fact. These areas might include:

1.      Software supply chain risks, including the risk that a malicious party might have implanted a backdoor in the software while it was being developed, as in SolarWinds; or the risk that a serious vulnerability would be identified in a widely used open source component like Log4j, making it difficult even to find all the vulnerable instances on your network.

2.      Inadequate protection of the supplier’s remote access system (e.g., no MFA). DHS said in 2018 that at least 200 vendors to the power industry had been penetrated by the Russians through their remote access systems, in attempts to use that access to reach US electric utilities.

3.      Inadequate anti-phishing training and other anti-phishing measures, making the vendor a possible vector for attacks aimed at the utility. In early 2019, the Wall Street Journal published a great article on how the Russians were penetrating vendors to the power industry through phishing attacks, then using that access to reach electric utilities; the article listed four utilities that had been compromised this way.

Had the drafting team included this list in R1 (and I certainly never even thought to suggest it to them), this would have made it clear to utilities that the purpose of R1 was to allow them to decide the risks that were most important to them, given their configuration of assets and vendors. Being able to allocate your resources toward the risks that are most important is a key element of supply chain cyber risk management.

And there would have been another benefit: Including this list in the requirement itself would have made it auditable. That is, the auditors would have been able to determine whether or not the utility gave serious consideration to each of these areas of risk, based on the documentation they were shown. If the utility hadn’t even considered each of these areas, they would have been eligible for a PNC (potential non-compliance) finding in an audit.

However, as far as I know, a substantial majority of NERC entities (those with medium or high impact BES Cyber Systems, the only ones in scope for CIP-013 so far) did a good job of identifying and assessing supply chain cyber risks in R1, in spite of there not being a list of risks in the requirement itself. This is because organizations – especially the North American Transmission Forum (NATF) – stepped up to provide the guidance that the drafting team didn’t want to include in the requirement itself.

To get back to the question in the title, has CIP-013 worked? Yes, it has. It’s worked both as a supply chain cybersecurity standard, and as a cybersecurity risk management standard. It could have been better, but it could also have been a lot worse.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] They were joined by CIP-010-2 (now -3) and CIP-003-7 (now -8), which are also risk-based requirements – although neither of them mentions risk, either.

Wednesday, April 13, 2022

Needed: Real-time VEX


The VEX format was developed to solve a very specific problem with SBOMs: Probably over 90% of vulnerabilities identified in components of a software product (i.e. when those components are scanned as standalone software) aren’t exploitable in the product itself.

Of course, this is basically a good thing – after all, if I identify (through the NVD) 100 unpatched vulnerabilities in components of a software product that I use, around 90 of them probably aren’t exploitable in that product. In other words, I don’t need to concern myself with them.

But the problem is that I don’t know up front which of the 100 vulnerabilities I identified are the ones that aren’t exploitable. Absent some notification from the supplier (or perhaps a third party) that certain vulnerabilities aren’t exploitable, I have to treat them all as exploitable. This means I will probably waste a lot of time scanning for (and in some cases patching) vulnerabilities that aren’t there. And since I’m likely to contact the supplier to find out when they’re going to patch all of these vulnerabilities, suppliers are worried that their support staff will be overwhelmed by emails and calls about non-exploitable vulnerabilities.

Of course, this is why God created VEXes (OK, He didn’t do that. Frankly, I’m not even sure He created SBOMs). The VEX essentially says, “Even though you found in the NVD that CVE-2022-12345 is found in Component X and Component X is found in our product ABC, the vulnerability isn’t in fact exploitable in ABC. So you don’t have to worry about it.” [i]
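
For concreteness, here’s roughly what that statement looks like in the CycloneDX VEX format (shown as a Python dict that you can serialize to the actual JSON document; the field names follow the CycloneDX 1.4 schema as I understand it, while the product reference and the detail text are invented for illustration):

```python
# Roughly a single "not exploitable" VEX statement in CycloneDX 1.4 form.
# The "affects" reference and the detail text are hypothetical.
import json

vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "vulnerabilities": [{
        "id": "CVE-2022-12345",
        "analysis": {
            "state": "not_affected",
            "justification": "code_not_reachable",
            "detail": "Component X ships with ABC, but the vulnerable "
                      "module is never loaded.",
        },
        "affects": [{"ref": "urn:cdx:example/product-abc@1.0"}],
    }],
}
print(json.dumps(vex, indent=2))
```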

But here’s a question: After a component vulnerability appears in the NVD (and users of SBOMs learn about it, since they will hopefully be using an automated tool or third-party service that tracks these things), how long will it be before the supplier (or perhaps a third party, either engaged by the supplier or acting independently of them) puts out a VEX stating whether or not it’s exploitable?

And here’s the problem: It’s highly unlikely that a supplier will issue a VEX immediately after they discover that a component vulnerability isn’t exploitable – after all, the average software product has over 100 components, and many products have thousands or even tens of thousands; new component vulnerabilities will be popping up all the time in the NVD. Since they have other things to do, it’s likely the supplier (specifically their PSIRT) will only issue VEXes once a week, or even less frequently. This means that their customers may be searching in vain for a serious component vulnerability (and/or bothering the help desk about it) for, say, a couple of weeks before they get the VEX saying it’s not exploitable.

Wouldn’t it be nice if a user could find out about a non-exploitable vulnerability as soon as the supplier knows about it? Yes it would, and that wouldn’t be hard to do. Someone would just have to insert some fairly simple code into a tool used for vulnerability management, which does the following (a rough sketch appears after this list):

1.      Identifies, in the NVD, a serious vulnerability (CVE) in a component of a software product (or intelligent device) that is being used by the organization. Then,

2.      Sends a query to an API running at a URL pre-specified by the supplier of the software or device, inquiring whether this vulnerability is in fact exploitable. In effect, one of these three responses will be quickly received:

a.      “This vulnerability isn’t exploitable in our Product ABC version X.Y.Z.” There should also be a brief textual description of why this is the case, e.g. “Even though one or more modules in this library are included in our product, the vulnerable module isn’t included”[ii]; or

b.      “This vulnerability is exploitable in our Product ABC version X.Y.Z. We recommend you immediately apply the patch available at this location…”; or

c.      “We are currently investigating the exploitability status of this vulnerability in Product ABC version X.Y.Z.” [iii]
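
Here’s that rough sketch. To be clear, everything in it is hypothetical: the endpoint path, parameters, and response fields are invented for illustration, and a real implementation would need at least authentication and a standardized response schema:

```python
# Rough sketch of the "real-time VEX" query described above. The endpoint,
# parameters, and response fields are all hypothetical; no such API exists yet.
import requests

def exploitability(api_base, product, version, cve):
    """Ask a supplier's (hypothetical) VEX endpoint about one CVE."""
    resp = requests.get(f"{api_base}/vex-status",
                        params={"product": product, "version": version,
                                "cve": cve},
                        timeout=10)
    resp.raise_for_status()
    answer = resp.json()  # e.g. {"status": "not_affected", "detail": "..."}
    if answer["status"] == "not_affected":
        return "Not exploitable: " + answer.get("detail", "no detail given")
    if answer["status"] == "affected":
        return "Exploitable; patch at " + answer.get("patch_url", "(pending)")
    return "Under investigation"

# Hypothetical usage:
# print(exploitability("https://vendor.example/api", "ABC", "4.2.1",
#                      "CVE-2022-12345"))
```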

Note that the above three statements are just a small subset of the totality of statements that can be made in a single VEX; for an example of a single VEX that would include all of the notifications that a supplier might need to make when a serious vulnerability is patched, see this post. That full document could never be replicated by the simple API above. So what I’m suggesting doesn’t replace VEX documents.

However, it seems to me that this would be fairly easy to implement. The main thing would be to disseminate the API specification to suppliers interested in distributing SBOMs for their products, since – as I’ve said a number of times – SBOMs almost certainly won’t be widely used until VEX documents are widely available.

The moral of this story: Real-time VEX information will save a lot of users and suppliers time and money. And it will make SBOMs much more usable.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] There may be some high-assurance users who want even non-exploitable vulnerabilities patched; this is because in some unusual circumstances, a vulnerability might become exploitable later. I have also been told by a couple of large developers that they have customers who demand that every vulnerability be patched, regardless of whether or not it’s exploitable. They say they need this for compliance purposes. I strongly doubt they’re right about this, but of course it’s not my business to argue with them; it seems those developers are simply taking their word for it.

[ii] This was the case with a lot of instances of the Log4j library being incorporated into particular products. If the module containing the Log4Shell vulnerability wasn’t included in the product (which normally the supplier would have to tell you, since the NVD doesn’t track that sort of information), the user probably wasted their time if they patched the product.

[iii]  If the vulnerability is exploitable but the supplier is still working on a patch, they would be well advised not to state that it’s exploitable, but just say it’s under investigation – for reasons that should be obvious.

Friday, April 8, 2022

Who writes the SBOM rules? You do.


Next week, I’ll tape a podcast sponsored by an organization that provides product security services to software and intelligent device suppliers. One of the questions they’ll be asking me is, “How should product security teams prepare themselves for SBOMs and VEXes?” Those teams are often referred to as product security incident response teams, or PSIRTs for short.

This is an interesting question, because I just wrote a post about the need for guidelines for SBOM and VEX use for end users who aren’t software developers. I started the post by saying that plenty had already been written about SBOM use for developers (suppliers), so I wasn’t too worried about them.

However, with respect to use of SBOMs, software or device supplier staff members fall into two groups. The first group consists of the development people, who – as far as I know – understand SBOMs well now and already have practical experience with them. These are the people I had in mind when I made the statement in the previous post. I think these people don’t need a lot of help on using SBOMs, since so much has already been written about that, and there are a lot of tools available for them.

But the second group consists of the PSIRT: a team of security people who don’t do development, but instead work to make the product as secure as possible. Since the PSIRT’s needs are quite different from those of the developers, they really can’t be expected to have much knowledge of or experience with SBOMs (and certainly not with VEXes).

My first idea for how PSIRTs should prepare for SBOMs was “Pray”. However, while I’ve never denigrated the importance of prayer, I do believe that isn’t all that can be said about the subject. There are multiple things that PSIRTs can do to get ready for the new SBOM world (and keep in mind that federal agencies will be required to request SBOMs from their suppliers, starting in August). Most importantly, PSIRTs need to do what all software users (which is just about every organization, public or private, on the planet. I can’t speak for other planets at the current time) need to do.

I hope to write other posts on the topic of preparing to use SBOMs, but here is probably the most important thing that all software users, including PSIRTs, should do:

Have a little humility. I’ve read and heard statements by various people about SBOMs that are simply way off the mark. They obviously have never bothered to study up on the subject, but that doesn’t stop them from spouting off as if they understand SBOMs very well. So here’s a tip: If you think that SBOMs are so obvious that you, the great and powerful Oz, already know everything you need to know about them, I can guarantee you’re wrong – although there are enough other people like you that nobody may call you to account for your statements.

However, I’ll also admit that learning about SBOMs isn’t easy at present. The NTIA Software Component Transparency Initiative – which ran from 2018 through the end of 2021 – published some useful documents, all of which are available here. However, these documents are sometimes contradictory (partly due to the fact that they were written by different groups, sometimes in different years). More importantly, they don’t tell a single coherent story that would give someone new to the SBOM topic an in-depth introduction to it (on the NTIA site, there’s a good two-page document titled “SBOM at a Glance”, but it’s just that: a two-page introduction).

There will be a new initiative of some sort under CISA, where Dr. Allan Friedman, who led the NTIA Initiative, moved last summer. However, that hasn’t actually started yet (it hopefully will start in about a month), and more importantly it’s not clear whether it will be anything like the NTIA Initiative. The best thing about the latter was the fact that lots of players from the private sector were able to freely discuss their real-life experiences with SBOMs and shape the content of the documents that were produced.

However, because CISA (and probably most of the federal government) has to operate by much more hands-off rules with respect to the private sector than does the NTIA or the Department of Commerce in general, it's possible that discussion will be much more constrained under CISA. And in any case, because it will probably be 2023 before useful documents are produced by the CISA initiative (which currently doesn’t have a name, at least a publicly announced one), I certainly don’t advise waiting until those documents are available, assuming they appear at all. This is especially important because the deadline for federal agencies to start requiring SBOMs from their software and device suppliers is this August.

However, this might be a good thing, since a lot of people seem to believe that the federal government is “writing the rules” – or even writing regulations – for SBOMs, and that the NTIA documents were a kind of “first draft” of those rules. However, the fact is that the NTIA doesn’t write regulations, laws, guidelines, guidance, edicts, Papal encyclicals, fatwas, Talmudic teachings, or anything like that.

The NTIA facilitates meetings and online discussions in which private industry players work out voluntary guidelines that are necessary for a new technology to be realized in practice. A great example of this is DNS. NTIA didn’t invent DNS, but it provided the forum where telecom providers and the nascent ISP industry, as well as others, could discuss how the DNS idea (which was developed in the academic world) could be made to work in practice. NTIA was the first domain name registrar, and later outsourced that function to the IANA (which now administers DNS with a budget of $100 million. I once asked Dr. Friedman if his budget for the SBOM Initiative was anywhere near that figure, and he surprised me by answering no 😊).

I think we can all agree that DNS works very well. After all, when was the last time that, in order to reach your favorite web site (perhaps this blog?), you had to enter a 32-digit IPv6 address? It’s safe to say that, without DNS, the internet would still be about the size it was in, say, 1992. Yet, where are the laws or regulations that make DNS work? There aren’t any. Virtually all internet users have implicitly agreed to follow the guidelines developed at the outset under the NTIA’s guidance (but not, to be sure, developed by the NTIA itself!). That’s all it takes.
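
As a reminder of just how much work DNS quietly does for us, here is the whole trick in a few lines of Python (the hostname is just an example):

```python
# What DNS hides from us: the mapping from a memorable name to the
# addresses we would otherwise have to remember and type.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443):
    print(sockaddr[0])  # an IPv4 dotted quad, or a 32-hex-digit IPv6 address
```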

The same thing will be true for SBOMs. The guidelines that will make SBOMs work on a large scale will be developed by private industry. The documents developed by the NTIA Initiative provided a good start toward those guidelines, but that’s all they are: a good start. A lot more needs to be done before they can be considered finished. My personal opinion is that private industry needs to pick up the ball and run with it from here on.

Fortunately, that is beginning to happen. As I discussed in my previous post, Fortress Information Security has produced an excellent document[i] that goes at least part of the way toward providing the end-to-end narrative of utilizing SBOMs that is required for SBOMs to become widely used (although I wish it were the only thing that’s required!).

Other documents will surely follow, from Fortress, from other private sector entities, and hopefully from CISA. I also hope that one or more forums will arise in the private sector, where various “players” in the “SBOM marketplace” will exchange ideas on how to remove the various obstacles that are currently inhibiting widespread adoption of SBOMs – just one of which is lack of comprehensive documentation for how SBOMs can be used to reduce an organization’s level of supply chain cybersecurity risk.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] I have a consulting relationship with Fortress, and contributed some ideas to that document, although I wasn’t one of the authors.

Monday, April 4, 2022

Finally, guidelines for using SBOMs!


It’s no exaggeration to say that almost the entire focus of the SBOM literature so far has been on production of SBOMs: that is, describing what software suppliers need to know in order to produce usable SBOMs, including of course a lot of discussion of formats.

This is in retrospect not too surprising, since when the NTIA Software Component Transparency Initiative started in 2018, there wasn’t much awareness of SBOMs among either software suppliers or software users. Today, in large part due to the good work of the Initiative (which ended last year) and the documents they produced, suppliers are both aware of and using SBOMs; that trend will continue.

However, suppliers are producing SBOMs almost exclusively for their own use. That is, they’re producing them so they can be aware of cybersecurity risks to the products they develop, which are due to the components they include in their products. The supplier needs to learn about the level of risk posed by each component in the product in order to determine, for example, whether a component has any serious unpatched vulnerabilities that warrant removing the component and replacing it with a less risky one. Having SBOMs for their products is important for a supplier in achieving that goal.

Of course, it’s a good thing that suppliers are making use of their own SBOMs. It means that the software we all use is becoming less risky. But SBOMs have been touted all along (including in this blog, to be sure) as a way for end user organizations to learn about the risks posed by components in the software they use. To do that, those organizations (i.e., any organization, private sector or governmental, that uses software) need to receive SBOMs from their software (and intelligent device) suppliers. Except in rare cases, that simply isn’t happening now.

Why aren’t the users receiving SBOMs from their suppliers? As I just said, it’s likely that most larger suppliers have SBOMs for their products now; and if they don’t, it’s almost trivially easy for them to start producing them. In fact, given the advantages of having SBOMs for their own use now, you have to wonder about any supplier who tells you they don’t have them today.

The reason why suppliers aren’t distributing SBOMs is that their customers aren’t asking for them. So why aren’t the customers asking for them? Now we’re getting to the heart of the problem. There are two important things that users need, which they don’t have now.

The first thing users lack is one I’ve mentioned before: low-cost or open source automated tools, or third-party services, that will ingest SBOMs (and VEX documents) provided by software suppliers. Those tools or services will then produce for the user organization a list of exploitable vulnerabilities that are due to components included in the software they employ to run their business. There is some progress being made on this problem, but I’ll admit it’s coming slowly – far too slowly for many of us.

But the second thing is just as important as the first: users need documentation that provides a coherent narrative (at a high level, of course) of all the steps they need to take in order to use SBOMs and VEX documents to mitigate their software supply chain cyber risks. More importantly, the documentation needs to be written for a software user that doesn’t already have a good understanding of software development and SBOMs.

This latter point is especially important. During its three years of existence, the NTIA Initiative produced good documents about SBOMs, some of which touched on how they should be used. But all of those were written from the point of view of software developers (suppliers). And they mostly use language that developers understand, which is foreign to a lot of end users. This isn’t surprising, since the great majority of participants in the meetings were either developers or consultants to developers.

This is why I’m quite pleased to announce that there’s now a document available that provides a comprehensive narrative of what an end user organization needs to do to obtain SBOMs and VEX documents and use them for the purpose of reducing the cybersecurity risks their organization faces – specifically risks due to components found in the software they utilize to achieve their mission. Fortress Information Security[i] has just released a white paper titled “Software Bills of Materials Consumer Use Cases”. It’s available here.[ii]

Two caveats about this paper:

1.       While the paper provides a good picture of how SBOMs can be obtained and used now, this is still early in the SBOM “game” and there are bound to be lots of changes over the coming few years. I hope Fortress will update the document regularly.

2.       Even based just on what’s understood today, there is undoubtedly a lot more that could be added to this paper – for example, regarding VEX documents. So there might well be further white papers addressing specific topics on the consumption spectrum.

I hope you’ll enjoy the paper! If you have comments or questions about it, please drop them to tturner@fortressinfosec.com – and if you don’t mind copying me, I’d love to see them as well.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] Full disclosure: I have been consulting with Fortress for over two years. I contributed some ideas for the paper, but I wasn’t one of the authors. 

[ii] You’ll have to fill out a short form before downloading the white paper. A small price to pay!

Friday, April 1, 2022

Our worst fears confirmed: Covid-over-IP!


Yesterday, March 31, an extraordinary news conference was held in Washington, DC. It was a joint effort by the CDC and CISA, marking the first time they have addressed a single threat together. During the conference, CDC Director Rochelle Walensky and CISA Director Jen Easterly told an amazed audience of reporters that a previously unthinkable event had occurred: A new Covid-19 variant has appeared in a completely virtual form. That’s right, Covid can now spread over IP!

As Dr. Walensky recounted it, the CDC started getting scattered reports in January that multiple people who had participated in the same videoconference a few days earlier had all come down with Covid-19 on the same day – even though none of them had ever been in the physical presence of any of the others in the group. Dr. Walensky said her first reaction when hearing this story was “So what? How could that be anything more than a strange coincidence?” Given that the omicron variant was now spreading very rapidly in the US, it was possible that almost any random collection of people might all develop Covid on the same day.

However, the frequency of these reports seemed to be increasing, so both agencies decided an investigation was needed. A little inquiry led them to the world’s leading expert on disease spread between the physical and cyber realms, Dr. Justin Case. Dr. Case is Chair of the Department of Cyber Epidemiology of the University of Southern North Dakota at Hoople, ND (Extension Division). He holds the same position at the University of Northern South Dakota at Wanton, SD (ED). Dr. Case, who participated in the conference virtually from Hoople, took up the narrative from there.

“Let me say to begin with that a physical disease crossing into the cyber realm isn’t a new phenomenon. After all, we’re all spending more and more of our lives online (especially during the pandemic) and it’s becoming harder and harder to separate our physical and virtual existences. As we all know, the novel coronavirus is constantly mutating. Each of those mutations can be thought of as a new business venture: the virus takes a new form in the hope that this new form will open up a new ‘market’ for it.

“You may know that Covid-19 has spread to other species before, like minks and deer; these were clearly new markets that paid off for the virus. When you consider that, it shouldn’t be hard to understand that the virus might want to test the virtual market, especially since people with Covid – like just about everyone else on the planet – spend increasing amounts of time in front of a computer or smartphone screen. In fact, it was just about inevitable that, at some point, the virus would figure out how it could get to those fresh-faced, ripe targets that seemed to be inches away from its current host, right behind the computer screen.

“We’re still trying to find Patient Zero – that is, the first person who contracted the new virtual Covid-19 variant. Normally, that’s not too hard to do, since as you go back in time the cases are confined to a smaller and smaller geographic area. However, when you’re dealing with a disease that’s crossed into the virtual realm, that’s a lot harder. You can’t rely on physical proximity as the critical factor anymore; you have to find its cyber equivalent.

“We first looked at domain names, on the idea that the people using the same “home” internet domain would be more likely to spread Covid-over-IP among themselves than people in different domains. This made sense, since most web conferences are conducted for business purposes, and often among employees of the same company. That is, they’ll all be joining the meeting from the same ‘companyxyz.com’ domain. That might be how the virus spreads among them.

“However, that hypothesis turned out to be wrong. We found too many examples of web conference participants contracting Covid-19 together, while using completely different domain names.

“At that point, we started focusing on IP addresses. Those are a lot harder to track, since most companies take great pains not to make the IP addresses of their internal network visible on the public internet. So we had to work with the IT staff of the company that employs each sick worker, to find out the IP address they were actually using during the web conference in question.

“To make a long story short, we’re now working from the hypothesis that two individuals who participate in the same videoconference from IP addresses that are numerically very close to each other are at much higher risk of spreading Covid-over-IP than two people who are using very different addresses. However, if it’s true that IP addresses are the key indicator of proximity, this means that Covid could spread through lots of different online activities.

“This means we need to be concerned about a lot more than just web conferences. We’ve examined the possibility of spreading Covid-19 through a number of online activities, and we’ve identified four of concern.

“First, people who are unvaccinated, over 65, immunocompromised, or otherwise at higher-than-normal risk from Covid-19 are well advised not to use a computer at all. If that’s simply not possible, they should at least make sure they’re vaccinated and boosted, and that they wear a high-quality N95 or KN95 mask while online.

“Second, people participating in a videoconference should share their IP addresses in the chat session. If two people have close IP addresses, one of them would be well advised to exit the conference – or at least turn their camera off (there’s some evidence that Covid-over-IP spreads more easily to someone whose face appears onscreen than to someone who has their camera off, although we’re still investigating that).

“Third, you need to be careful about email. While this is even more preliminary, we have had more than a handful of reports in which a person seems to have spread Covid to someone that they sent an email to – and this can happen even if they’re using very dissimilar IP addresses.

“If you remember, in the early days of the pandemic, there was a lot of concern about Covid-19 spreading on surfaces (do you remember wiping down your groceries?). It seems virtual Covid-19 may do something similar with email: It attaches itself to the subject line of an email, so that when you open the email (or sometimes if you just move the mouse over it), the virus can immediately jump to you. I’ll admit this one is pretty speculative, so at the moment I’m certainly not suggesting that you stop reading email. But you might want to cut down on the number of emails you read (of course, never even open spam or phishing emails), and spend as little time as possible reading each email.”

“Lastly, you need to be careful who you friend on Facebook and other social media platforms. Before friending anybody, you should inquire about their vaccination status; if they’re not vaccinated and boosted you should request that they do this before you friend them. But as we know, being vaccinated doesn’t mean a social media friend can’t become infected. And because anyone might be asymptomatically infected, you need to be careful to limit your interactions with all of your online friends, not just the ones who might not be vaccinated. Plus you should wear a mask whenever interacting on any social media, at least for the time being.”

Dr. Case closed by saying that he is writing an article on Covid-over-IP, which will appear soon in the Journal of Irreproducible Results. The article will contain important advice on avoiding online Covid infections as well as on reweaving rugs, Dr. Case’s hobby.

Any opinions expressed in this blog post are strictly mine and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.