Saturday, August 9, 2025

CISA affirms they support the CVE Program. Is that good or bad news?

Note from Tom: As of August 11, my new posts will only be available to paid subscribers on Substack. Subscriptions cost $30 per year (or $5 per month); anyone who can’t afford to pay that should email me, since I want everyone to be able to read the posts. To have uninterrupted access to my new posts, please open a paid Substack subscription or upgrade your free Substack subscription to a paid one.

Last Thursday, at the Black Hat conference in Las Vegas, two CISA officials committed to “supporting the MITRE-backed Common Vulnerabilities and Exposures Program, just months after it faced a near complete lapse in funding” (quoting from Nextgov/FCW). Given that someone at CISA almost cut off funding for the program in April (although others tried – very unconvincingly – to deny this was anything more than an administrative glitch), it was good to hear this.

A MITRE[i] official set off this firestorm with his letter to the CVE Board members on April 15. The letter stated that the contract wasn’t going to be renewed and the program would be cancelled. However, this was followed shortly afterwards by an announcement that a group of CVE Board members (and others) were already putting together the framework (and funding) for a privately run nonprofit organization called the CVE Foundation. Over the next few weeks, the group proceeded to fill in many of the details of their plans (this effort had been ongoing for a few months, but it hadn’t been announced previously. Of course, this was because at the time there didn’t seem to be any need to rush the announcement).

The Foundation is an international effort, which already – from what I hear – has more than enough funding promised to take over the MITRE contract when it comes up for renewal next March (the funding will come from both private and government sources, although I’m guessing that the US government isn’t currently supporting it). However, the Foundation intends to be much more than an “In case of emergency, break glass” option in case CISA doesn’t renew the contract (which I still think is very likely, no matter what the two gentlemen – neither of whom has been at CISA very long – said at Black Hat).

The CVE Foundation was founded by a few CVE Board members who have been involved with the CVE Program since its early days and who have taken part in the numerous discussions since then about how the program can be improved. The Foundation is led by Pete Allor, former Director of Product Security for Red Hat, who has been deeply involved with the CVE Program since 1999 and remains an active Board member.

While the CVE Program, in my opinion, has done an exceptional job and continues to do so, the fact is that government-run programs almost without exception are hampered by the constraints imposed by the same bureaucracy that often makes government agencies a stable, not-terribly-challenging place to work. That is, they don’t exactly welcome new, innovative ideas and they make it hard to get anything done in what most of us consider a reasonable amount of time.

This week, one well-regarded person who has worked with the CVE Program for 10-15 years and is a longtime Board member wrote on an email thread for one of the CVE working groups that he was happy to be part of the CVE Foundation from now on. He wrote that, while he enjoyed working with the CVE Program, “…we measure progress in months and years instead of weeks.” Like others, he has many ideas for improvements that can be made to the program, but hasn’t seen much progress in implementing them so far. I’m sure he would welcome the chance to have a serious discussion about these and other changes, assuming the CVE Foundation is placed in charge of the CVE Program.

However, if CISA somehow remains in control of the CVE Program (i.e., the contract remains with them), it will be a very different picture. I don’t think CISA ever had a big role in the operation of the program (beyond having one or two people on the CVE Board and of course paying MITRE under their contract). Moreover, CISA is unlikely to take a big role if it remains as the funder of the program.

If CISA retains control of the contract, MITRE will remain in day-to-day charge of the program. As I said, I think MITRE has done a good job so far, but like any government contractor, they must adhere strictly to the terms of their contract. If someone comes up with a great new idea that requires more money, or even just redeploying people from what they’re doing now, the only thing that can be done is to put it on the to-do list for the next contract negotiation.

My guess is that, when MITRE’s contract comes up for negotiation next year, the CVE Foundation will take it over from CISA; it’s hard to imagine that, given the huge personnel cuts that are being executed now in the agency, there will be a big effort to retain control of a contract that costs CISA around $47 million a year.

There’s also no question that the CVE Foundation will write their own contract with MITRE. It will require MITRE staff members to do the day-to-day work of the CVE Program, but it will give the Foundation a big role in determining its priorities. Frankly, I think the MITRE people – who are all quite smart, at least the ones I’ve worked with – will be just as happy as anyone else to see the program achieve more of its potential than it does now.

I also think the CVE Foundation will try to resolve some serious problems with the current CVE Program. Doing that has been put off so far, because the problems are very difficult to fix. For example, up until about ten years ago, MITRE created all new CVE records. That meant that CVE records were fairly consistent, but as the number of new records increased every year, MITRE simply couldn’t keep up with the new workload.

At that point, the CVE Program moved to a “federated” approach, in which CVE Numbering Authorities (CNAs) were appointed. These included some of the largest software developers, who reported vulnerabilities in their own software as well as vulnerabilities in the products of other developers (in their “scope”). Today, there are 463 CNAs of many types (including GitHub, ENISA, JP-CERT and the Linux Foundation).

Of course, it’s good that so many organizations have volunteered to become CNAs; the problem is that this has led to huge inconsistencies in CVE records. For example, a lot of CNAs don’t include CVSS scores or CPE names in the new records they create[ii]; the CVE Program (i.e., MITRE staff members) has been reluctant to press them to do this. If CISA had made this problem a priority, they could have addressed it during contract negotiations with MITRE.

So, I see good things ahead for the CVE Program. However, that requires moving MITRE’s contract from CISA to the CVE Foundation next March. I confess I don’t want this to happen next March; I want it to happen tomorrow.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com, or even better, sign up as a free subscriber to this blog’s Substack community chat and make your comment there.


[i] MITRE is a nonprofit corporation that operates Federally Funded Research and Development Centers (FFRDCs); it has operated the CVE program on behalf of DHS since the program’s inception in 1999 (CISA came into being six years ago). The idea for CVE came from MITRE researchers.

[ii] Many CNAs will tell you that the National Vulnerability Database (NVD) had a longstanding policy of creating CVSS scores and CPE names itself and adding them to the record; in fact, if the CNA created either of these items, the NVD would discard what the CNA created and substitute its own. Fortunately, the NVD now has a new leader. Hopefully, that will lead to a lot of change there; it’s sorely needed.

Friday, August 8, 2025

One of many good reasons to fix the cloud problem in NERC CIP


Note from Tom: As of August 11, all but a few of my new posts will only be available on Substack to paid subscribers. Subscriptions cost $30 per year (or $5 per month); anyone who can’t afford to pay that should email me, since I want everyone to be able to read the posts. To have uninterrupted access to my new posts, please open a paid Substack subscription or upgrade your free Substack subscription to a paid one. 

On Wednesday evening, Microsoft and CISA announced a “high-severity vulnerability” that affects on-premises versions of Exchange. The vulnerability also affects the Entra cloud-based authentication system.

I won’t discuss the details of the vulnerability, since they’re not important for this post. What is important is that this high-severity vulnerability is only present in the on-premises version of Exchange, not in the cloud version (Exchange Online). Of course, since it’s on-premises, users have to a) see the patch availability notification, b) locate and download the patch, and c) apply it, to fix the vulnerability. None of these steps is hard, but since human beings miss emails or forget to follow up on them, leave on vacation without performing all 1,963 items on their to-do list, etc., it’s certain that some users won’t have applied the patch even a year from now.

This is a reminder of one of the biggest reasons for using the cloud (especially SaaS applications in the cloud): The CSP just needs to apply a patch once, for all their users to be protected. The users don’t necessarily need to be told about the patch, although they should be informed for peace of mind.

Of course, this is one of many reasons why it’s important that the “Cloud CIP” problem be solved as soon as possible, so that full use of the cloud will be possible for NERC entities with medium and high impact CIP environments. Fortunately, I think the solution is right around the corner in…2031.

What, you say it’s unacceptable that we need to wait so long for the solution? If it will make you feel better, I’ll point out that it’s possible that 1) the current Standards Drafting Team will produce its first draft of the new standards sometime next year, 2) the standards will take just a year to be debated and balloted at least four times by the NERC ballot body (I believe this has historically been the minimum number of ballots required to pass any major change to the CIP standards), 3) FERC will approve them within six months, and 4) the ballot body will agree to a one-year implementation period.

If all of these things come to pass, and with a helping of good luck, the new and/or revised CIP standards will be in place in mid-2029. You might think even that is slow, but I can assure you it’s lightning-fast by NERC standards; it took five and a half years for the last major change to CIP – CIP version 5 – to go through these same steps. To be honest, I consider the above to be a wildly over-optimistic scenario. In fact, I think that, if the required processes are all followed, even the 2031 target may be over-optimistic.

What can be done to shorten this time period? There is an “In case of emergency, break glass” provision in the NERC Rules of Procedure that might be used to speed up the whole process. However, it would require a well-thought-out plan of action that would need to be approved by the NERC Board of Trustees. I doubt they’re even thinking about this now.

The important thing to remember here is that there are some influential NERC entities that not only swear they will never use the cloud (on either their IT or OT sides), but they also are opposed to use of the cloud by any NERC entity – even though they know they won’t be required to use the cloud themselves.

Another thing to remember: Unlike almost any other change in the CIP standards, FERC didn’t order this one. This means they might take a long time to approve the new standards (I believe it took FERC at least a year and a half to approve CIP version 1); it also means they might order a number of changes. These changes would be included in version 2 of the “Cloud CIP” standards, which would appear 2-3 years after approval of the version 1 standards. FERC could also remand the v1 standards and send NERC back to the drawing board. However, since one or two FERC staff members are closely monitoring the standards development process, that is unlikely.

The danger is that, if the standards development process is rushed and the standards are watered down to get the required supermajority approval by the NERC ballot body, what comes out in the end won’t address the real risks posed by cloud use in medium and high impact CIP environments. In fact, this is what happened with CIP-013-1: it didn’t address most of the major supply chain security risks for critical infrastructure. The fault in that case was FERC’s, since they gave NERC only one year to draft and approve the new standard – which was one of the first supply chain security standards outside of the military.

This is why FERC put out a new Notice of Proposed Rulemaking (NOPR) last fall. Essentially, it said, “We admit we should never have approved CIP-013-1 mostly as is. Now we intend to rectify that error.” The NOPR suggested a few changes, but its main purpose was to request suggestions for improving the standard by early December 2024. I thought that, once that deadline had passed, FERC would quickly come out with a new NOPR – or even an Order – that laid out what changes they want to see in CIP-013-3 (CIP-013-2 is the current version, although its only changes were adding EACMS and PACS to the scope of CIP-013-1). However, as my sixth grade teacher often said, “You thought wrong.” There’s been nary a peep from FERC on this topic since December. In my opinion, a revised CIP-013 is still very much needed.

So, I hope the current SDT doesn’t feel rushed to put out a first draft of the new or revised standard(s) they’re going to propose. Just like for on-premises systems, there are big risks for systems deployed in the cloud – and few of them are the same as risks that apply to on-premises systems. It’s those cloud-only risks that need to be addressed in the new standards. There’s more to be said about this topic, coming soon to a blog near you. 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com, or even better, sign up as a free subscriber to this blog’s Substack community chat and make your comment there.

Wednesday, August 6, 2025

AI is already powering half the US economy. And that’s only half the story.

 

Note from Tom: Since 2013, I’ve been publishing “Tom Alrich’s blog” on Blogspot. I’m now publishing my posts in this Substack blog, named “Tom Alrich’s blog, too”. I’m posting for free on Substack now, but after August 11, new posts on Substack will only be available to paid subscribers. A subscription to this blog costs $30 per year (or $5 per month); anyone who can’t pay that should email me. To have uninterrupted access to my new posts, please open a paid account on Substack, or upgrade your free account to a paid one. There are lots of good posts to come!

 

My latest post, which was based mostly on last Saturday’s column by Greg Ip of the Wall Street Journal, described three negative societal impacts of the massive AI buildout that is going on:

1.      Investment in other tech areas besides AI is being squeezed because of the huge amounts that companies like Microsoft and Meta are spending on the AI rollout (Microsoft alone is likely to spend $80 billion this year, mostly on new data centers. I’ve been told they’re opening a new data center almost every day).

2.      The huge amounts of cash being spent on the AI buildout are starting to raise interest rates. Given the minuscule revenues now coming in to the big AI players, they need to finance much of the buildout with borrowed money – from banks or the bond market – or with cash from other business lines (e.g., I’m sure revenue from Facebook finances at least some of Meta’s AI buildout). If anything, this trend will accelerate; for example, Microsoft is likely to spend over $100 billion on AI next year.

What’s the third negative societal impact? While Greg didn’t mention this in his column, I wrote in a blog post last year that the huge power needs of AI data centers are causing more and more electric utilities to postpone retirement of coal plants. Of course, this will damage our (i.e., humanity’s) ability to combat climate change.

However, I noted at the end of my latest post that my next post would talk about the benefits of AI. That goal has been aided by two new newspaper articles, one in the Wall Street Journal (this time not by Greg Ip) and the other in the Washington Post. Both articles discuss huge economic benefits that are accruing to the US today, due to the current AI boom.

The fact that these are accruing today is important, since Greg Ip’s column had spoken of AI’s benefits as coming far in the future. This isn’t a contradiction, because Greg discusses capital markets; his big concern in this article is whether the stock market is justified in its apparent belief that the huge AI buildouts will return concomitant benefits in a reasonable time frame (say, 5-10 years). He is clearly skeptical that this will happen; he thinks the full benefits to the companies doing the buildouts won’t arrive for 10-15 years.

On the other hand, both the WaPo article and the new WSJ article point out that just about half of the growth projected for the US economy this year will be due to the AI buildout, since most of that money stays in the US. For example, lots of people are employed in that buildout (at decent wages, hopefully); those people eat at restaurants, buy clothes for their kids, buy new TVs, etc. I don’t know how often in the past a single industry has accounted for half of GDP growth – other than in World War II, when I’m sure the military was the dominant industry (for example, a lot of factories that made cars, planes, etc. were converted to wartime production).

Of course, a lot of the chips, motherboards, and pieces of furniture those people are installing are manufactured overseas. Does that mean the GDP contribution of the buildout is being overstated? No; if anything, the opposite. Imports are subtracted in the GDP calculation, so the buildout’s measured contribution to GDP growth reflects only the domestic labor and products involved, net of everything that was imported. In other words, the gross amount being spent on the buildout is even larger than its contribution to GDP growth, which makes the half-of-growth figure all the more impressive.
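To make the accounting concrete, here is a minimal sketch of the expenditure approach to GDP (GDP = C + I + G + X - M) with purely hypothetical numbers; the $100 billion buildout and $40 billion of imported equipment below are invented for illustration, not estimates from either article.

```python
# Minimal sketch of the expenditure approach to GDP: GDP = C + I + G + (X - M).
# All figures are hypothetical (in billions of dollars), chosen only to show
# how imported equipment nets out of the buildout's contribution to GDP.

def gdp(consumption, investment, government, exports, imports):
    return consumption + investment + government + (exports - imports)

# A baseline year without the AI buildout.
baseline = gdp(consumption=18_000, investment=4_000, government=5_000,
               exports=3_000, imports=4_000)

# The same year with a hypothetical $100B AI buildout, $40B of which is
# imported chips and servers: investment rises by the full $100B, but
# imports rise by $40B, so the net contribution to GDP is only $60B.
with_buildout = gdp(consumption=18_000, investment=4_100, government=5_000,
                    exports=3_000, imports=4_040)

print(with_buildout - baseline)  # 60 -> the buildout adds $60B, not $100B, to GDP
```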

To quote the article,

“The AI complex seems to be carrying the economy on its back now,” said Callie Cox, a market strategist with investment firm Ritholtz Wealth Management. “In a healthy economy, consumers and businesses from all backgrounds and industries should be participating meaningfully. That’s not the case right now.”

“AI executives argue the spending boom will create more jobs and bring about scientific breakthroughs with advancements in the technology. OpenAI has said that once its AI data centers are built, the resulting economic boom will create “hundreds of thousands of American jobs.”[i]

The WSJ becomes Mr. Softee

The Wall Street Journal usually focuses on hard numbers that can be easily verified – closing stock prices, trade statistics, etc. True to form, this WSJ article starts by focusing on a hard economic number: productivity. This is defined as the ratio of output to input – that is, how much output changes from one period to the next, after accounting for changes in the “factors of production”, usually grouped into labor and capital.

For example, suppose a plant has 100 workers in period 1 and 200 in period 2. The plant also has $1,000 of capital (machinery, buildings, cash on hand, etc.) in period 1, which increases to $2,000 in period 2. If output increases from 300 widgets in period 1 to 600 in period 2, both inputs and output have doubled; the ratio of output to input doesn’t change, so productivity stays the same.

On the other hand, if the inputs doubled but output only increased from 300 to 450, productivity fell, since output grew more slowly than the inputs. Of course, this isn’t a good thing. Conversely, if inputs doubled but output increased from 300 widgets to 750, output more than doubled and productivity increased, which is a good thing. There is thus more money for raises for workers and bonuses for management, as well as for investment.
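To make the comparison explicit, here is a minimal sketch that computes the output-to-input ratio for the three scenarios above. It treats “input” as a simple index that doubles when both labor and capital double; a real productivity calculation would weight the two inputs, but that detail doesn’t change the comparison.

```python
# Productivity as the ratio of output to input, for the three scenarios above.
# "Input" is a simple index that doubles when both labor and capital double;
# a real calculation would weight labor and capital separately.

def productivity(output_widgets, input_index):
    return output_widgets / input_index

baseline  = productivity(300, 1.0)   # period 1: 300 widgets per unit of input
unchanged = productivity(600, 2.0)   # inputs doubled, output doubled      -> 300 (flat)
fell      = productivity(450, 2.0)   # inputs doubled, output up only 50%  -> 225 (fell)
rose      = productivity(750, 2.0)   # inputs doubled, output up 150%      -> 375 (rose)

print(baseline, unchanged, fell, rose)   # 300.0 300.0 225.0 375.0
```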

When you look at an entire economy, productivity needs to grow by a certain amount every year, just to keep up with growth of the population. Let’s assume the population grows at 2% per year. This means that productivity will also need to grow at 2%, just to allow the population to maintain its current standard of living. If productivity grows at more than 2%, the standard of living can increase. Conversely, if it grows at less than 2%, the standard of living will decrease, unless the government increases its borrowing to maintain living standards. But as the US is learning now, there are limits to the borrowing strategy.

The best way to increase productivity in the short term is to grow the amount and/or quality of capital that is used for production (it takes much longer to “grow” workers). For example, if productive capital grows by 10% but the labor force only grows by 2%, then output per worker will grow enough that the standard of living can increase.

But the increased capital needs to be the kind that will allow more output to be produced. For example, suppose there are two types of capital: Type A machines that produce clothes and food, and Type B machines that produce pencils. Obviously, if the entire capital investment is in B machines, the increase in output will consist entirely of pencils; meanwhile, the workers will all be naked and starve to death.

As Greg Ip pointed out, the AI buildout isn’t designed to raise economic output much in the near term; therefore, it’s much more like Type B investment than Type A. What keeps valuations of the AI companies high is the widespread expectation that there will be a huge increase in economic output (due to productivity gains brought on by AI) at some point in the future – but that point is currently not known. Therefore, traditional economic analysis, which assumes that productivity is the key to prosperity, finds the AI buildout to be a colossal waste.

However, the authors of the second WSJ article point out that there’s another economic measure that paints a completely different picture of the AI buildout. This measure can’t be quantified exactly but can be estimated through surveys. It’s called “consumer surplus”: the difference between the price a consumer would be willing to pay for a product or service and its actual price. Of course, this quantity varies by the consumer, the product, and even the time of day, so it can never be directly measured. However, the authors (both academics) have conducted surveys that allow them to estimate the consumer surplus from AI products at $97 billion a year (here, “consumers” means individuals and organizations).

Of course, AI products today are mostly free, or at least free enhancements to existing for-charge products (e.g., Microsoft’s CoPilot add-on to its Office 365 suite). The authors point out that free AI products are almost never included in GDP, which is based almost entirely on sales data. However, they definitely produce benefits for consumers, just like for-pay products do:

“When a consumer takes advantage of a free-tier chatbot or image generator, no market transaction occurs, so the benefits that users derive—saving an hour drafting a brief, automating a birthday-party invitation, tutoring a child in algebra—don’t get tallied. That mismeasurement grows when people replace a costly service like stock photos with a free alternative like Bing Image Creator or Google’s ImageFX.”

In other words, the consumer surplus can be considered a quantity that should be maximized just like GDP should be maximized, even though it will probably never be possible to include it in GDP. They describe how they arrived at the $97 billion estimate in this passage:

“Rather than asking what people pay for a good, we ask what they would need to be paid to give it up. In late 2024, a nationally representative survey of U.S. adults revealed that 40% were regular users of generative AI. Our own survey found that their average valuation to forgo these tools for one month is $98. Multiply that by 82 million users and 12 months, and the $97 billion surplus surfaces.”
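As a quick back-of-envelope check of the authors’ arithmetic (the $98 monthly valuation, 82 million users, and 12 months are all their figures):

```python
# Back-of-envelope check of the authors' consumer-surplus estimate.
value_per_user_per_month = 98        # dollars, from the authors' survey
users = 82_000_000                   # regular generative-AI users
months = 12

annual_surplus = value_per_user_per_month * users * months
print(f"${annual_surplus / 1e9:.1f} billion")   # $96.4 billion, which they round to ~$97 billion
```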

They continue,

“William Nordhaus calculated that, in the 20th century, 97% of welfare gains from major innovations accrued to consumers, not firms. Our early AI estimates fit that pattern. While the consumer benefits are already piling up, we believe that measured GDP and productivity will improve as well. History shows that once complementary infrastructure matures, the numbers climb.

Tyler Cowen forecasts a 0.5% annual boost to U.S. productivity, while a report by the National Academies puts the figure at more than 1% and Goldman Sachs at 1.5%. Even if the skeptics prove right and the officially measured GDP gains top out under 1%, we would be wrong to call AI a disappointment. Life may improve far faster than the spreadsheets imply, especially for lower-income households, which gain most, relative to their baseline earnings, from free tools.”

To paraphrase these two paragraphs, the authors estimate there will eventually be a big boost in GDP due to AI use, even though today the boost is mostly outside of GDP. Of course, they are talking about an increase in GDP due to use of AI, whereas the earlier estimate that half of GDP growth this year will be due to AI is referring to the massive spending for infrastructure rollout going on now.

In other words, AI will produce two big boosts to GDP: one from the rollout (starting this year, but certainly not ending anytime soon) and one from the productivity gains caused by widespread use of AI products. The latter gains can’t be measured today, but they will show up in the measurements in the future.

The authors conclude,

“As more digital goods become available free, measuring benefits as well as costs will become increasingly important. The absence of evidence in GDP isn’t evidence of absence in real life. AI’s value proposition already sits in millions of browser tabs and smartphone keyboards. Our statistical mirrors haven’t caught the reflection. The productivity revolution is brewing beneath the surface, but the welfare revolution is already on tap.”

 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com, or even better, sign up as a free subscriber to the Substack community chat for my subscribers and make your comment there.


[i] The WaPo article points out that a large portion of the growth due to AI is simply Nvidia’s profits. But it is certainly not the lion’s share of that growth.

Monday, August 4, 2025

Does AI’s cost outweigh its benefits? – Part I


Note from Tom: I’ve started a new blog on Substack called “Tom Alrich’s blog, too”. After August 11, Substack will be the only source for my new posts, where they will only be available to paid subscribers. Before that date, new posts will be available for free on Substack and Blogspot. A subscription to the Substack blog costs $30 per year (or $5 per month); anyone who can’t pay that should email me. To have uninterrupted access to my new posts, please open a paid account on Substack or upgrade your free account to a paid one.

Last October, I wrote a post that described an incredibly powerful presentation at the just-concluded annual NERC GridSecCon conference in Minneapolis. That presentation was by a meteorologist named Sunny Wescott, who works for CISA. You can read that post; I’ll just point out that she showed how far we’ve already traveled on the road to climate disaster. However, she also showed there’s a lot of activity today that can still save us, since many people are working in many innovative ways to rectify the situation.

The point of my post was that the humongous amount of power needed to train large language models is now working against combating climate change – or more correctly, AI’s power requirements are on net accelerating climate change. This might be surprising to you, since the large AI processing players like Microsoft, AWS, Meta, and Google have until recently loudly touted their commitment to renewable energy. They have backed their words with commitments for energy from wind and solar farms, and even nuclear plants.

Unfortunately, there are only so many large renewable energy sources to go around (plus recent political developments are, if anything, working to decrease future availability of all renewable sources except nuclear power[i]). In the above-linked post, I also mentioned that, at the conference, a friend who works for a large electric utility pointed out that a 1,000MW coal plant his utility used to own in North Dakota (and which I had visited with him 7 or 8 years ago) had just been purchased to power a huge greenfield data center nearby.

Some other large coal plants in the US have also recently been purchased, or at least removed from the list of plants to be decommissioned. I wrote in the post that it was likely that most US coal plants that aren’t already far down the path to being decommissioned will be given a similar new lease on life. And other countries like Saudi Arabia and India, which already have many large coal plants, are almost certain to use those plants to attract AI investment, also setting back their climate change efforts.

Thus, the drive for AI has already started to set back North America’s efforts to delay climate change. However, it turns out there’s something just as important that AI is also setting back: the effort to set the US on a low-inflation and low-interest rate path to prosperity.

The inspiration for this statement was someone else who I greatly admire, although he’s in a completely different field from Sunny Wescott. This is Greg Ip, who writes about capital markets for the Wall Street Journal and is IMHO the best economist of the many that work for WSJ. I’ve never heard anybody rave about him like I’ve heard raves about Sunny, but I’ve always been impressed by his ability to analyze well known facts and come to important conclusions that have otherwise been overlooked.

Greg started his column in the August 2 edition of the Journal (the link is here, but it’s behind a paywall. I can send you a PDF if you email me) with these three paragraphs that precisely summarize what he said:

In the past two weeks one big tech company after another reported blowout earnings amid a wholesale embrace of artificial intelligence.

Look a little closer, and a more unsettling side to the AI boom emerges. All the spending on chips, data centers and other AI infrastructure is draining American corporations of cash.

This underscores the hidden risks from the AI boom. No one doubts its potential to raise growth and productivity in the long run. But financing that boom is straining the companies and capital markets.

He continues:

 Since the first quarter of 2023, investment in information processing equipment has expanded 23%, after inflation, while total gross domestic product has expanded just 6%. In the first half of the year (2025), information processing investment contributed more than half the sluggish 1.2% overall growth rate. In effect, AI spending propped up the economy while consumer spending stagnated. 

Much of that investment consists of the graphics-processing units, memory chips, servers, and networking gear to train and run the large language models at the heart of the boom. And all that computing power needs buildings, land and power generation.

He goes on to explain that big tech companies used to be considered “asset-light”. That is, most of the investment they made was in relatively low-cost intangible assets like intellectual property and software. Therefore, the boatloads of cash that they brought in mostly went to the bottom line, making them tremendously profitable.

However, the same companies (he cites Alphabet, Amazon, Meta and Microsoft), even though their established businesses are still bringing in lots of cash, are investing huge amounts in their AI infrastructure; this significantly lowers their profitability. Meanwhile, two fast-growing AI companies that don’t have established businesses, OpenAI and Anthropic, are both losing money.

However, I don’t recommend going to Sam Altman and offering him $1 for OpenAI while agreeing to assume all their debt – that strategy isn’t likely to succeed. In a section ominously titled “Dot-com echoes”, Greg points out that investors are valuing all these newly asset-heavy companies as if the investments they are making in AI are as likely to be profitable as the investments they made in the good old asset-light days – when every investment in, say, a new version of Windows was almost certain to bring in tons of cash (does anyone else remember the huge hype over the rollout of Windows 95? That hype was money well spent, since Microsoft made a lot of money on that version of Windows – despite the fact that Windows 95 was and is a security nightmare).

Greg continues,

For now, investors are pricing big tech as if their asset-heavy business will be as profitable as their asset-light models. 

So far, “we don’t have any evidence of that,” said Jason Thomas, head of research at Carlyle Group. “The variable people miss out on is the time horizon. All this capital spending may prove productive beyond their wildest dreams, but beyond the relevant time horizon for their shareholders,” he added.

In the late 1990s and early 2000s, the nascent internet boom had investors throwing cash at startup web companies and broadband telecommunications carriers. They were right (that) the internet would drive a productivity boom, but wrong about the financial payoff. Many of those companies couldn’t earn enough to cover their expenses and went bust. In broadband, excess capacity caused pricing to plunge. The resulting slump in capital spending helped cause a mild recession in 2001.

Greg adds that he’s not expecting a stock market crash like the dot-com bust of early 2000 or the (mild) dot-com recession of March to November 2001 (although the September 11 attacks clearly made that slump less mild than it otherwise would have been). His point in the first part of his column is summarized in one of the sentences quoted above: “All the spending on chips, data centers and other AI infrastructure is draining American corporations of cash.”

In other words, I believe Greg is saying that, just like AI is straining the electric power industry, it’s putting much more strain on the companies that are supposedly benefiting the most from the AI boom: the companies developing the software and running the huge data centers that train the models. On the other hand, I’m not suggesting anybody start a tag day for Microsoft or Google, since their investment will inevitably pay off; the question is when.

Shareholders in companies making the big AI investments are likely to be disappointed if the economic returns show up ten years later than anticipated. In fact, this is almost exactly what happened with the huge increase in office productivity that was anticipated after the IBM PC was introduced in 1981. That increase didn’t happen until the 1990s, in part because that’s when networking technology became powerful and cheap enough that all those standalone PCs on desktops could now work together to share databases, internet connections, etc. (just ask Larry Ellison, founder of Oracle). Until that happened, there wasn’t much office productivity gain.

The last section of Greg’s article is called “The interest-rate effect”. This refers to another effect of the huge cash demands of AI investment. Greg introduces this section by saying,

After the global financial crisis of 2007-09, big tech was both a beneficiary of low interest rates, and a cause.

Between that crisis and Covid-19, these companies were generating five to eight times as much cash from operations as they invested, and that spare cash was recycled back into the financial system, Thomas, of Carlyle Group, estimates. It helped hold down long-term interest rates amid high federal deficits, as did inflation below the Federal Reserve’s 2% target and the Fed buying bonds.

In other words, during big tech’s asset-light period (between approximately 2010 and 2020), the big tech companies were generating so much cash that they helped hold down long-term interest rates in the US, although two other things also helped: low inflation and the fact that the Federal Reserve was buying bonds – i.e., taking US debt onto its own balance sheet (when the Fed buys bonds, it injects money into the economy).[ii]

Greg now compares that period (which ended just five years ago) to the situation today, which is completely different:

1.      Government deficits are larger now than five years ago, meaning there’s a much greater need for money.

2.      Inflation is now above 2%, the Fed’s target rate.

3.      Instead of on net buying back bonds from the private sector, the Fed is now selling more bonds than it’s buying. This has the effect of restricting the money supply, lowering bond prices and raising interest rates (interest rates move inversely with bond prices).

4.      Corporations now “face steep investment needs to exploit AI and reshore production to avoid tariffs.” This further decreases available cash for corporations, while having no near-term positive effect on profitability.

Greg concludes this section, as well as the column, by saying

All this suggests that interest rates need to be substantially higher in the years ahead than in the years before the pandemic. That is another risk to the economy, and these companies, that investors may not fully appreciate.

Here’s my summary of Greg’s column:

1.      In the near term, AI investments by major tech companies are not improving their bottom lines. At the same time, those investments are greatly decreasing the amount of cash available for other investments (surprisingly enough, AI isn’t the only good area for investment today! For example, I’m receiving a lot of spam emails for burial insurance. Do they know something I don’t?).

2.      This won’t cause a stock market crash, but it may lead to a lot of disappointed investors and falling stock valuations, especially if the anticipated huge returns don’t show up for 5-10 years longer than expected.

3.      The above means the money supply is tightening and interest rates are rising. Don’t look for either of these trends to be reversed for a long time.

Note that Greg doesn’t mention one big cost of AI, probably because it doesn’t affect the profitability of AI companies (which is his main concern, since he writes about investments). These are the huge, mostly uncompensated charges that electric ratepayers will have to pay to build out (or refurbish) the power generation capacity needed to support AI in the future.

Because of the scale of this unprecedented buildout, the easiest way to finance it is to bill the ratepayers for a lot of it. This is where the likely delay in returns on AI investment will hurt the most, since some ratepayers in some states will end up paying more than their fair share of the additional investment, while others pay less. By the same token, some AI companies will pay more than their fair share of that investment, although in general I think the companies’ share, vs. the ratepayers’ share, is more likely to be too low than too high.

I would like to see some organization – EPRI, NARUC, the US Congress (!), the ISO/RTOs, etc. – conduct a comprehensive study of the question of how to fairly allocate the costs of the required grid buildout across the US (and perhaps in Canada, at least in the provinces like Ontario and Quebec that sell a lot of power to the US). It won’t be easy at all, but otherwise I think we’ll end up with a big mess on our hands and lots of bad feelings to boot.

This concludes the first post in this series; it discusses the costs of the AI buildout. The second (and I believe last) post will discuss the benefits of the AI buildout and try to weigh costs and benefits. Spoiler alert: I believe that, in the long run, there’s no question that AI will provide a huge net benefit, not only to North America but to the whole world. However, as the great economist John Maynard Keynes said, “In the long run, we’re all dead.”

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com, or even better, sign up as a free subscriber to the Substack community chat for my subscribers and make your comment there.


[i] Nuclear energy isn’t technically renewable, since nuclear plants require a constant supply of uranium. However, nuclear plants don’t produce greenhouse gases like fossil fuel plants do.

[ii] In fact, it is the Treasury that borrows by selling bonds; the Fed’s buying and selling of those bonds is what expands or contracts the money supply. Currency is just a small part of the money supply; most of it is in the form of bank deposits. When the Fed wants to inject money into the economy, it buys bonds; when it wants to withdraw money from the economy, it sells bonds. These purchases and sales are called “open market operations”, and they’re the key instrument of monetary policy.

Saturday, August 2, 2025

My move to Substack


This post replaces the one I put up on July 30; it corrects a few mistakes and includes new information. I have removed the previous post to avoid confusion.

I’ve just started a new blog on Substack called “Tom Alrich’s blog, too”. Until August 11, all my new posts will appear there, as well as here in Blogspot. After August 11, my new posts will only occasionally appear in this blog, but they will be available to paid subscribers on Substack. The 1200+ posts I’ve written on Blogspot since 2013 will remain available for free here; they are also now available to paid subscribers on Substack.

A subscription to the Substack blog costs $30 per year (or $5 per month); anyone who can’t pay that should email me. These are the minimum amounts I can charge on Substack; note that the entire subscription fee is passed on to me. There is also a Founders subscription plan at $100 for the first year. I hope you’ll consider signing up for that if you have appreciated my posts so far. After August 11, people who have chosen the free signup option on Substack won’t be able to read my new posts, unless they upgrade to a paid subscription.

I made this move for two main reasons. First, as you may know, Substack has become the premier blogging platform (not just for textual blogging like I do, but video and audio posts as well). It provides me an amazing amount of information on how my posts are being received and options for delivering the posts, as well as other capabilities like a community chat that is available for all subscribers (paid and free). I hope that chat will become a lively forum for discussing topics related to what I write about in my posts. Any subscriber to the blog can post questions to the whole group.

The other reason why I made this move is because I decided I can’t continue to produce new posts without either charging for access or including advertising - and I really don’t want to have advertising.

To make a long story short, if you wish to continue to read my posts after August 11, whether on the web or in the Substack app, please sign up for a paid subscription in Substack at this link. I hope you’ll stay with me!

P.S. If you currently subscribe to the Blogspot blog through Follow.It, your subscription will remain in effect, since I will continue to publish occasional posts on Blogspot. However, tomorrow you will also be enrolled as a free subscriber in Substack. That will get you free access to all new posts until August 11.

If you want to continue to receive my posts after that date, as well as get access to the 1200+ “legacy” posts in Substack, you need to upgrade to a paid subscription. If you don’t want to upgrade but still want to keep access to the community chat and occasional new posts, you don’t have to do anything. And if you don’t even want to continue access to the occasional free posts, you should unsubscribe.

Note: If you normally read my posts by clicking on the link I post in LinkedIn, I believe you will still be able to do that after August 11, although you will need a paid subscription to the Substack blog. If that doesn’t work properly, please email me.

 

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com, or even better, sign up as a free subscriber to the Substack community chat for my subscribers, and make your comment there.

 

Thursday, July 31, 2025

Don’t worry about the CVE program – it’s in good hands. But the NVD? Not so much


Note from Tom: I’ve started a new blog on Substack called “Tom Alrich’s blog, too”. From now on, all my new posts will appear there; they will only occasionally appear in this blog. A subscription to the Substack blog costs $30 per year (or $5 per month); anyone who can’t pay that should email me. There is also a Founders subscription plan at $100 for the first year. I hope you’ll consider signing up for that, if you have benefited from my posts in the past.

I will put up all new posts for free in this blog until August 11. However, if you wish to continue to see my posts after August 11 – which I hope you will! – please sign up for a paid subscription in Substack at the link above.

My previous post discussed a new white paper called “Ensuring the Longevity of the CVE Program” by the Center for Cybersecurity Policy and Law. To say that I wasn’t overwhelmed by the insights provided by the authors is an understatement. However, the biggest problem with the paper is the fact that it left out the biggest threat to the future of the CVE Program. That threat lies with a different US government agency that has recently been having big problems, although of a quite different kind. First, I’ll provide some background information on the problems, and why they affect the CVE Program.

The CVE Program is run by the non-profit MITRE Corporation under contract to the Department of Homeland Security. It is paid for – at least through next March – by the Cybersecurity and Infrastructure Security Agency (CISA), which is also part of DHS.

The other US government agency is NIST, the National Institute of Standards and Technology. NIST is part of the US Department of Commerce.

One of NIST’s many projects is the National Vulnerability Database (NVD), which started in 2005. A vulnerability database links software products with vulnerabilities that have been identified in those products. The NVD is currently by far the most widely used vulnerability database in the world; many private (VulnDB, VulDB, VulnCheck, Vulners, etc.) and public (Japan Vulnerability Notes, EU Vulnerability Database, etc.) vulnerability databases draw heavily from the NVD.

The NVD identifies vulnerabilities using CVE numbers (e.g., CVE-2025-12345); each vulnerability is described in a “CVE record”. Many people (including me, a few years ago) assume that, because the NVD uses CVE numbers to identify vulnerabilities, it must be the source of CVE records. In fact, CVE records originate with the CVE Program in DHS. They are created by CVE Numbering Authorities (CNAs), of which there are currently more than 450. The largest CNAs are software developers, including Microsoft, Oracle, Red Hat, HPE, and Schneider Electric.

When a CNA creates a new CVE record, they submit it to the CVE.org vulnerability database, which is run by the CVE Program (this is sometimes referred to as the “CVE list”, although it’s much more than a simple list). The NVD (and other vulnerability databases that are based on CVE) downloads new CVE records shortly after they appear in CVE.org.

When a CNA creates a new CVE record, they have the option of including various pieces of information in the record. Some fields are officially optional and others are mandatory, but, to be honest, only a few fields are really mandatory, in the sense that the record will definitely be rejected if they’re not present (these include the CVE number and the product name). The CVE Program maintains the CVE Record Format (formerly the “CVE JSON Record Format”), which is now on version 5.1.1. The full spec for 5.1.1 is here, but this older version is more readable and reasonably up to date.

For our present purposes, the most important fields in a CVE record are the following (a minimal sketch of how they appear in a record follows the list):

1.      The CVE number that the CNA has assigned to this vulnerability, as well as a description of the vulnerability.

2.      The name(s) of the product(s) affected by the vulnerability. While the CNA must list at least one affected product, they can also list many of them, including separate versions of the same product. Of course, every product listed needs to be affected by the vulnerability described in the record.

3.      The vendor(s) of the product(s) affected by the vulnerability.

4.      The version or versions[i] affected by the vulnerability.

5.      The CPE name for each affected product.
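Here is that sketch: a heavily trimmed view of how the five fields above appear in a record that uses the 5.1 format. The vendor, product, version, and CPE values are invented, and the field names reflect my reading of the published schema, so check the spec linked above before relying on them.

```python
# Heavily trimmed sketch of a CVE record in the 5.1 format, showing only the
# fields discussed above. All vendor/product/version/CPE values are invented;
# field names follow my reading of the CVE Record Format schema.
cve_record_sketch = {
    "cveMetadata": {"cveId": "CVE-2025-12345"},                   # field 1: CVE number
    "containers": {
        "cna": {
            "descriptions": [                                     # field 1: description
                {"lang": "en",
                 "value": "Buffer overflow in Product A allows remote code execution."}
            ],
            "affected": [
                {
                    "product": "Product A",                       # field 2: product name
                    "vendor": "XYZ",                              # field 3: vendor
                    "versions": [                                 # field 4: affected versions
                        {"version": "2.74", "status": "affected"}
                    ],
                    "cpes": ["cpe:2.3:a:xyz:product_a:2.74:*:*:*:*:*:*:*"]  # field 5: CPE name
                }
            ]
        }
    }
}
```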

The last item needs explanation. CPE stands for Common Platform Enumeration, although the name doesn’t carry much meaning today. What’s important is that CPE is a complicated machine-readable naming scheme for software and hardware products; the CPE name includes fields 2-4 above. If a CVE record doesn’t include a CPE name, it isn’t easily searchable in the NVD, since there is no way to know for certain that the product described in the text of a CVE record is the same product that is the basis for a similar CPE name.
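For reference, here is the layout of a CPE 2.3 name, using the same hypothetical product as in the sketch above; the field order follows the CPE 2.3 specification as I understand it.

```python
# Layout of a CPE 2.3 "formatted string" (13 colon-separated fields):
#   cpe:2.3:part:vendor:product:version:update:edition:language:sw_edition:target_sw:target_hw:other
# "a" means application; "*" means ANY. The product below is hypothetical.
cpe_name = "cpe:2.3:a:xyz:product_a:2.74:*:*:*:*:*:*:*"
```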

For example, suppose items 2-4 above appear as “Product A”, “XYZ”, and “Version 2.74” respectively in the text of a CVE record. Furthermore, suppose that a user of Product A v2.74 wants to learn about vulnerabilities identified in that product. They find the CPE name for a similar product that includes the same values of fields 2 and 4, but it includes “XYZ, Inc.” instead of “XYZ” for the vendor name.

Are these in fact the same product? That depends on the application. If the vulnerable product were a throwaway product used in the insurance industry, the match might be considered perfect. On the other hand, if the vulnerable product was an electronic relay that could, if compromised, open a circuit breaker and black out a large section of Manhattan, this might not be considered a match at all.

In other words, due to the arbitrary nature of the fields included in CPE names, such as “vendor” and “product name” (both of which can vary substantially, even when the same product is being described), there will always be uncertainty in creating a CPE name. This means that two people could follow the CPE specification exactly, yet create different valid CPE names for a single software product. The NVD has reserved the right for their staff members to create CPE names for vulnerable products described in the text of new CVE records and add them to the records (a process called “enrichment”); however, there is simply no way for a user to predict which values the staff member chose for the fields of the CPE name they created.
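A tiny sketch of why that matters for searching: an automated CPE lookup is essentially an exact string comparison, so the two hypothetical vendor spellings from the example above (“XYZ” and “XYZ, Inc.”) produce names that simply don’t match.

```python
# CPE lookup is essentially an exact string comparison, so two names that a
# human would recognize as the same product don't match. Both are hypothetical.

cpe_created_by_nvd = "cpe:2.3:a:xyz_inc:product_a:2.74:*:*:*:*:*:*:*"  # analyst used "XYZ, Inc."
cpe_built_by_user  = "cpe:2.3:a:xyz:product_a:2.74:*:*:*:*:*:*:*"      # user guessed "XYZ"

print(cpe_created_by_nvd == cpe_built_by_user)   # False -> the user's search comes up empty
```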

This arbitrariness, along with other serious problems[ii], makes it close to impossible to fully automate the process of looking up software vulnerabilities in the NVD. In other words, someone searching the NVD for vulnerabilities that affect a particular product must guess the values for the fields used by the NVD staff member when they created the CPE name for that product. There is no way to be 100% certain that a product in the real world corresponds to a product described in a CVE record, unless they have identical CPE names.

But that isn’t the worst problem with CPE. The biggest is that, since February 2024, the NVD has drastically neglected their responsibility to create CPE names and add them to new CVE records. The result is that more than 50% of the CVE records created since that date do not include CPE names for the affected products listed in the record.[iii]

The problem with this is straightforward: a CVE record that doesn’t include a CPE name for the vulnerable product isn’t visible to an automated search, since CPE is currently the only machine-readable software identifier supported by the CVE Program and the NVD.[iv] Without a CPE name, the user would have to search through the text of over 300,000 CVE records, and even then there is no such thing as a certain identification (remember “XYZ” vs. “XYZ, Inc.”?).

This is compounded by the fact that the NVD returns the same message, “There are 0 matching records”, whether a product truly has no reported vulnerabilities or the product has many reported vulnerabilities whose records simply lack CPE names. Of course, human nature dictates that most people seeing that message will assume the former interpretation is correct, when it might well be the latter.

You may wonder why I’m pointing out the above as a serious problem for the CVE Program, when this is mostly the NVD’s fault (and they’re in a different department of the federal government). The problem is that, given the over 300,000 CVE records today – and the fact that new records are being added at an increasing rate (last year, 40,000 were added, vs. 28,800 in 2023) – it is impossible to perform truly automated vulnerability management without reliable machine-readable identifiers in the records. I define truly automated vulnerability management as a single process that goes through an organization’s software inventory, looks up all those products in the NVD or another vulnerability database, and identifies all open vulnerabilities for those products (the next action would be remediation, or at least bugging the supplier to patch the vulnerabilities; this can’t be fully automated).
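To make “truly automated vulnerability management” concrete, here is a minimal sketch of that single process, using the NVD’s public CVE API; the endpoint and the cpeName parameter are from the documented v2.0 API as I understand it, and the inventory entries are invented. The key point is that the loop only finds vulnerabilities for products whose CVE records actually carry CPE names; unenriched records are invisible to it.

```python
# Minimal sketch of "truly automated vulnerability management": walk a software
# inventory, look each product up in the NVD by CPE name, and collect open CVEs.
# The endpoint and cpeName parameter follow the NVD's documented CVE API v2.0
# (verify before relying on this); the inventory below is invented.

import requests

NVD_CVE_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Hypothetical inventory: product -> the CPE name the organization believes is correct.
inventory = {
    "Product A 2.74": "cpe:2.3:a:xyz:product_a:2.74:*:*:*:*:*:*:*",
}

def open_cves_for(cpe_name):
    """Return the CVE IDs the NVD associates with this exact CPE name."""
    resp = requests.get(NVD_CVE_API, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

for product, cpe in inventory.items():
    cves = open_cves_for(cpe)
    # "0 matching records" is ambiguous: truly no vulnerabilities, or records
    # that were never enriched with CPE names?
    print(product, cves if cves else "no CVEs found (or the records lack CPE names)")
```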

A vulnerability record without a machine-readable software identifier isn’t complete; it’s like giving somebody a car without a steering wheel. Until the CVE Program can ensure that every CVE record has a reliable identifier for every affected product described in the text of the record, it will receive a grade of “Incomplete” from me.

If you would like to comment on what you have read here, I would love to hear from you. Please comment below or email me at tom@tomalrich.com.


[i] While it would certainly be better to specify a version range in a CVE record than just enumerate affected versions, in fact version ranges are a very difficult problem, as I discussed in this post. It is fairly easy to specify a version range in a CVE record, but, unless the end user has a way of utilizing that range as part of an automated vulnerability management process in their environment, it’s useless to include it in the record in the first place.
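For illustration, a version range in the “affected” section of a CVE record looks roughly like this (sketched from my reading of the 5.1 record format; the product and version values are invented):

```python
# Rough sketch of a version range in a CVE record's "affected" section, based on
# my reading of the 5.1 record format; the product and version values are invented.
affected_with_range = {
    "vendor": "XYZ",
    "product": "Product A",
    "versions": [
        {"version": "2.0", "lessThan": "2.75", "status": "affected", "versionType": "semver"}
    ],
}
```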

[ii] Some of CPE’s problems are described in detail on pages 4-6 of this 2022 white paper on the software identification problem. It was written by the OWASP SBOM Forum, a group that I lead.

[iii] The NVD has somewhat improved their record for enrichment, but it seems a lot of their recent effort isn’t being well directed.

[iv] That will change when the CVE Program starts supporting the purl identifier, although the NVD might not support purl right away (other vulnerability databases probably will support it).

Wednesday, July 30, 2025

I’m moving to Substack!

I’ve just started a new blog on Substack called “Tom Alrich’s blog, too”. From now on, all my new posts will appear there; they will only occasionally appear in this blog (which is on the Blogspot platform). I decided that I can’t continue to produce new posts without either charging for access or including advertising, and I really don’t want to have advertising.

I will put up posts for free on both platforms until August 11; after that, new posts will go up on Substack, with only the occasional new post appearing here. However, this blog (on Blogspot) will continue, since I don’t want to remove the 1200+ posts that I put up between January 2013 and today (although I intend to copy them all into Substack as well). As you may know, I link to previous posts very often. All those links would need to be changed if my previous posts were removed from Blogspot.

A subscription to the Substack blog costs $30 per year (or $5 per month); anyone who can’t pay that should email me. These are the minimum amounts I can charge on Substack; note that the entire subscription fee is passed on to me. There is also a Founders subscription plan at $100 for the first year. I hope you’ll consider signing up for that if you have appreciated my posts so far. Note that after August 11, people who have chosen the free signup option on Substack won’t be able to read my new posts, unless they upgrade to a paid subscription.

As you may know, Substack has become the premier blogging platform (not just for textual blogging like I do, but video and audio posts as well). It provides me good information on how my posts are being received, as well as other capabilities like a group chat for all subscribers (paid and free). I hope that will become a lively forum (there have been some lively discussions around my posts in LinkedIn, but not enough for my taste). The important feature of my Substack chat is that anybody will be able to post a question to the whole group; they won’t have to wait for a post that somehow touches on that question.

To make a long story short, if you wish to continue receiving my posts after August 11 – which I hope you will! – please sign up for a paid subscription in Substack at the link above.

Note: If you normally read my posts by clicking on the link I post in LinkedIn, you will still be able to do that. However, after August 11 you will only be able to read the occasional post that I put up on Blogspot, rather than all of my posts, which I will put up on Substack. Please sign up for a paid subscription in Substack.

If you would like to comment on what you have read here, I would love to hear from you. Please comment below or email me at tom@tomalrich.com.