Monday, May 19, 2025

How can we prevent AI from swallowing the power grid?


Note: This is a rewrite of my post of May 17. I rewrote it to make it clearer and to focus on the power demands of AI data centers, which I consider the biggest problem with AI today.

One of the biggest problems facing the power industry today is the huge electricity demand of AI data centers. This includes both the data centers operating today and, more importantly, the ones being projected by developers, who often have no firm idea of where the necessary power will come from. Naturally, most discussions of this problem focus either on greatly expanding power supply, at least in the longer term, or on making the data centers more efficient, so they can do what they’re currently doing while using somewhat less power.

However, it seems to me that the real question we should be asking about AI is why it’s placing such huge demands on the power grid in the first place. After all, if “boiling the ocean” is an apt description for anything, it is for language model training, especially when the models are complex, the problems are complex, or the data aren’t well defined. Does all of this come with the territory? That is, if your goal is to recreate intelligence, is it inevitable that sooner or later you’ll have to throw every kilowatt you can find at the problem?

I’m sure this doesn’t come with the territory. After all, it’s well known that no AI model can even approach the intelligence of the common housecat – whose brain, by the way, runs on just a few watts. Stated another way, if there were a good way to embed a smart AI in a cat’s body, it’s highly unlikely that the AI would be anywhere near as good at catching mice as the cat is.

Of course, many people – including many scientists – used to consider any sign of animal intelligence to be simply pre-existing programming – i.e., firmware inherited from the animal’s parents. However, it’s impossible to hold such a position today. After all, what about chimpanzees that use tools? What about ravens that go back and dig up a bauble they’ve buried, if they notice that a human – or another raven – was watching them when they buried it? What about Alex the Grey Parrot, whose last words were “You be good. I love you”? We should all end so fittingly.

For that matter, a living creature doesn’t need a brain to exhibit intelligence. A slime mold, which has no brain or central control of any kind, can seek out food it senses, even learning to cross an unpleasant barrier that lies in the way. A Venus flytrap can count to five. And it isn’t hard to design cellular automata that exhibit intelligent behavior, even though their electronic “bodies” don’t contain a single carbon atom.
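To make the cellular automaton point concrete, here is a toy Python sketch of Conway’s Game of Life (a standard example; the code and the glider pattern below are my own illustration, not something taken from the research I mentioned). Every cell obeys the same simple rules about its eight neighbors, yet the five-cell “glider” travels across the grid, generation after generation, with no central controller anywhere:

```python
from collections import Counter

# Conway's Game of Life: every cell obeys the same local rules about its
# eight neighbors, yet coherent "creatures" like the glider emerge.

def step(live_cells):
    """Compute the next generation from a set of live (row, col) cells."""
    # Count how many live neighbors each candidate cell has.
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live_cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth with exactly 3 live neighbors; survival with 2 or 3.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# The classic five-cell glider. Nobody "steers" it, but after every four
# generations it reappears one cell down and one cell to the right.
cells = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # same shape as the start, shifted by (1, 1)
```

The coherent behavior emerges entirely from local interactions – the same decentralized flavor of “intelligence” the slime mold displays.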

However, let’s suppose that an AI model could outwit a mouse even half as well as a cat can. Even if that were true, I would want to know how the huge amount of energy likely required to train that model compares with the tiny amount of energy the cat’s brain needs to learn to outwit mice.

After all, the only training the cat seems to require is the on-the-job training it gets by watching its mother catch mice – along with, presumably, some inherited “training” (firmware rather than software). On the other hand, I would bet that training the model would require ingesting millions of documents, videos, etc. Yet the model would almost certainly prove nowhere near as successful as the cat at the purposes that matter to cats: catching mice and other small mammals.

In other words, there is almost certainly a better way to create artificial intelligence than to boil an ocean of documents. It starts with paying attention to what animals do in their day-to-day activities. The intelligence animals exhibit in those activities goes far beyond anything today’s AI can deliver, yet no animal needs to consume huge amounts of power to learn how to perform them. Maybe, if we pay attention to what animals can do and then “reverse engineer” how they do it, we will be able to develop truly intelligent systems that require relatively little power to train or operate.

Here’s an example. Let’s say a dog escapes from his home one day and wanders seemingly aimlessly around the neighborhood. There’s no way he could remember every tree he watered, every driveway he crossed, every rabbit he chased, etc. Yet somehow, later in the afternoon (perhaps just before dinnertime), he shows up at home. What’s most remarkable is his master’s reaction: since the dog wanders off regularly and has never once failed to find his way home before dinnertime, there is nothing miraculous about his return this time. The master barely notices that the dog is back.

Yet, it is quite miraculous. The dog doesn’t have a map or GPS. There’s no way he can use logic, as we can, to find his way back home. For example, the dog can’t say to himself, “When I was crossing this street a few hours ago and was almost hit by a car, I had just set out from my house. At first, I walked along this side of the street, but I didn’t cross it. Therefore, if I just keep walking along this side of the street for a little while, I’ll probably come home.”

Is there some way the dog can utilize a chain of reasoning like this without “consciously” invoking it? Perhaps, but what is the mechanism behind that result? It certainly can’t be genetic. Even if the dog was born in the neighborhood and has lived there ever since, there’s no known process by which his genome could be altered by that experience.

Could it be training? When the dog was a puppy, did his mother train him to wander around the neighborhood and find his way home? There are certainly animals, like bees and ants, that can find food far from their “home” (e.g., their hive) and return to instruct their peers to do the same; the bee sometimes does this by performing a dance that encodes the direction and distance of a particularly abundant food source. But dogs don’t do that, and in any case we’re hypothesizing that the dog was wandering randomly, not searching for food (which he already knows he’ll get at home when he returns).
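Incidentally, insect homing is one case where science does have a well-supported answer: desert ants, and bees as well, appear to rely on path integration, or “dead reckoning.” The animal keeps a running sum of every displacement on its outbound trip; the vector pointing home is simply the negative of that sum. Here is a minimal Python sketch of the idea (the meandering path and all the numbers are invented purely for illustration):

```python
import math
import random

# Path integration ("dead reckoning"): sum every displacement on the
# outbound trip; the vector home is simply the negative of that sum.

random.seed(42)  # make the made-up wandering reproducible

x = y = 0.0  # running sum of displacements, starting at "home"
for _ in range(200):  # 200 meandering segments of the outbound trip
    heading = random.uniform(0, 360)    # degrees; invented for illustration
    distance = random.uniform(0.5, 2.0)
    x += distance * math.cos(math.radians(heading))
    y += distance * math.sin(math.radians(heading))

# No map, no landmark memory -- just one accumulated vector to invert.
home_bearing = math.degrees(math.atan2(-y, -x)) % 360
home_distance = math.hypot(x, y)
print(f"To get home: head {home_bearing:.0f} degrees for {home_distance:.1f} units")
```

Notice how little machinery this takes: no map, no memory of individual landmarks, just two running totals. Whatever dogs actually do, this is the kind of cheap, built-in mechanism I have in mind.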

Yet, the dog did find his way home, and given that this is a common occurrence, it’s clear he did not employ some extraordinary power that his fellow dogs (and many other animals, of course) don’t possess. So how does he do it?

Of course, I don’t know the answer to this question. However, there are two things I know for sure:

1. Generative AI is based entirely on statistical relationships between words (more precisely, between tokens). The model learns these relationships from its training data and uses them to generate whatever content it’s asked for. However, the model doesn’t “understand” the words it uses. (The toy sketch after this list illustrates the idea.)

2. Dogs don’t understand words, either. Yet the fact that a dog can demonstrate at least one type of truly intelligent behavior – finding his way home without seeming to follow any fixed pattern – suggests there might be a different type of AI that doesn’t require the boil-the-ocean approach to learning that generative AI does.
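To illustrate point 1, here is a toy Python sketch of the statistical core of text generation. It’s only a bigram model – to a modern LLM roughly what an abacus is to a supercomputer – but it makes the point: the program tracks which words follow which, then generates text by sampling from those counts, without “understanding” a single word (the tiny corpus is my own invention for illustration):

```python
import random
from collections import Counter, defaultdict

# A bigram model: count which word follows which, then generate text by
# sampling from those counts. The "corpus" is invented for illustration.
corpus = ("the cat chased the mouse and the mouse ran home and "
          "the dog chased the cat home").split()

# Learn the statistics: for each word, count every word that follows it.
follows = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    follows[word][next_word] += 1

# Generate: repeatedly pick a next word in proportion to how often it
# followed the current one. No meaning anywhere -- only counts.
random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    choices = follows[word]
    if not choices:
        break  # dead end: this word was never followed by anything
    word = random.choices(list(choices), weights=list(choices.values()))[0]
    output.append(word)
print(" ".join(output))
```

A real language model learns far richer statistics over billions of tokens – which is exactly where the oceans of documents, and the gigawatts, come in.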

What could explain the dog’s ability to find his way home, if neither genetics nor training can? Again, I can’t answer that question for certain. However, I can point out that infants have no command of words, yet they seem able to “reason” based on symmetry. For example, a baby – even a newborn – can recognize an asymmetrical face and react differently to it than to a symmetrical one.

It seems that human infants can perform what I call “symmetry operations” without prior experience, and certainly without training. That makes it likely that other mammals, like cats and dogs – and especially primates – can also perform those operations. Could symmetry operations underlie at least some forms of intelligence, including the cat’s ability to outwit mice and the dog’s ability to find his way home?
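What might a “symmetry operation” look like in computational terms? Here is a deliberately crude Python sketch (my own illustration – I’m certainly not claiming this is how an infant’s visual system works): it scores how mirror-symmetric a tiny bitmap “face” is by comparing it with its left-right reflection. Note that the check needs no vocabulary and no training data at all:

```python
# Score how mirror-symmetric a tiny bitmap is by comparing each row with
# its left-right reflection. No vocabulary, no training data.

def symmetry_score(image):
    """Fraction of pixels that match their mirror pixel across the vertical axis."""
    matches = total = 0
    for row in image:
        for left, right in zip(row, reversed(row)):
            matches += (left == right)
            total += 1
    return matches / total

# Two invented 8x5 "faces": eyes, nose, mouth. One is mirror-symmetric,
# the other has its features pushed off-center.
symmetric_face = [
    "..X..X..",
    "........",
    "...XX...",
    ".X....X.",
    "..XXXX..",
]
asymmetric_face = [
    "..X.....",
    ".....X..",
    "...XX...",
    ".X......",
    "..XXXX..",
]
print(symmetry_score(symmetric_face))   # 1.0
print(symmetry_score(asymmetric_face))  # 0.85 -- noticeably less symmetric
```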

The point is that various functions (mathematical and otherwise) can be embedded in animals and even plants. An animal might use these functions to solve certain hard problems like, “Given that I’ve been wandering all over the neighborhood this afternoon, how can I find my way home before dinnertime?”

To conclude: how do we create a system that might perform as intelligently as a cat catching mice, or a dog finding his way home? First, we figure out which ingrained capabilities (like recognizing symmetrical objects) enable the intelligent behavior. Second, we figure out how to recreate those capabilities, either in hardware or in software.

In one of my next posts, I hope to examine the ingrained capabilities that allow a dog to find his way home. We may learn that dogs are masters of symmetry operations.

To produce this blog, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome. Thanks!


If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

