Saturday, May 17, 2025

We need more intelligence and less artificiality

 

5/19: I rewrote this post on Monday to make it more concise and to emphasize the point about power consumption. You might want to read that version as well.

What is intelligence? The only good definition I’ve heard is that it’s what is measured by intelligence tests. Thus, I strongly doubt there’s any simple way to compare the intelligence of, say, the smartest AI in the world with that of a dog or cat. After all, even if there were a good way to embed a smart AI in a cat’s body, is it likely the AI would be as good at catching mice as the cat is? Almost certainly not.

Of course, many people – including many scientists – consider any sign of animal intelligence to be simply pre-existing programming, i.e. firmware inherited from the animal’s parents. This was certainly the view I grew up with. I remember one teacher stating with a knowing smile that doctoral candidates in biology often failed their oral exams because they spoke about an animal’s actions as if there were intelligence behind them. In those benighted days, it was simply axiomatic that animals didn’t have intelligence worth speaking of; a candidate who suggested otherwise, even metaphorically, was jeopardizing their career.

Of course, it’s impossible to hold such a position today. After all, what about chimpanzees that use tools? What about ravens who go back and dig up a bauble they’ve buried if they notice that a human, or another raven, was watching them when they buried it? What about Alex the Grey Parrot, whose last words were “You be good. I love you”? We should all end so fittingly.

For that matter, a living creature doesn’t need a brain to exhibit intelligence. A slime mold – an organism with no brain, no neurons, and no central control of any kind – can seek out fragrant food, even when there’s an uncomfortable barrier in the way. And it isn’t hard to design cellular automata that exhibit intelligent behavior, even though their electronic “bodies” don’t contain a single carbon atom.
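To make the cellular automaton point concrete, here is a minimal Python sketch of the best-known example, Conway’s Game of Life. It is my illustration, not anything from the post’s sources: every cell obeys one purely local rule, nothing coordinates the whole, and yet a five-cell “glider” marches coherently across the grid as if it had somewhere to go.

```python
# Conway's Game of Life, the best-known cellular automaton.
# Each cell obeys one local rule; there is no central controller,
# yet a five-cell "glider" travels diagonally across the grid.
from collections import Counter

def step(live):
    """Advance a set of live (row, col) cells by one generation."""
    # Count how many live neighbors every candidate cell has.
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

def show(live, size=8):
    for r in range(size):
        print("".join("#" if (r, c) in live else "." for c in range(size)))
    print()

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    show(glider)
    glider = step(glider)
```

Run it and the glider visibly “walks” down and to the right, one purely mechanical rule at a time.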

However, what seals the deal for me is the huge amount of energy consumed by training an AI, vs. the tiny amount of energy required to train the cat’s brain to catch mice. After all, it seems the only training that’s required for the cat is the on-the-job training it acquires by watching its mother catch mice. Surely, there must be a better way to train AI than having it ingest literally everything available in English on the internet.

There is a better way. It requires paying attention to what animals do in their day-to-day activities. Those everyday activities go far beyond the wildest predictions of what AI, as currently conceived, could possibly do. Even better, no animal needs to consume huge amounts of power to learn how to perform these activities. Maybe, if we just pay attention to what animals can do and then “reverse engineer” how they do it, we will be able to develop truly intelligent systems.

Here's an example. Let’s say a dog escapes from its home one day and wanders seemingly aimlessly around the neighborhood. There’s no way the dog could keep a record of every tree he watered, every driveway he crossed, every rabbit he chased, etc. Yet somehow, later in the afternoon, he shows up at his home. What’s most remarkable is his master’s reaction to his return: since the dog wanders off regularly and has never once failed to find his way home before dinnertime, there is nothing miraculous about his return this time.

Yet, it is quite miraculous. The dog doesn’t have a map or GPS. There’s no way he can use logic – as we can – to find his way back home. For example, he can’t say to himself, “When I was crossing this street a few hours ago and was almost hit by a car, I had just set out from my house. At first, I walked along this side of the street, but I didn’t cross it. Therefore, if I just keep walking along this side of the street for a little while, I’ll probably come home.”

Is there some way the dog can utilize a chain of reasoning like this, without “consciously” invoking it? Perhaps, but what is the mechanism behind that result? It certainly can’t be genetic. Even if the dog was born in the neighborhood and has lived there ever since, there’s no proven process by which his genome could be altered by that experience.

Could it be training? When the dog was a puppy, did its mother train it to wander around the neighborhood and find its way home? There are certainly animals, like bees and ants, that can find food outside of their “home” (e.g., their beehive) and return to instruct their peers to do the same; a honeybee sometimes does this by performing a dance that encodes the direction and distance to a particularly abundant food source. But the dog didn’t go in quest of anything, and it wasn’t following a fixed set of instructions to get wherever it went.

Yet, the dog did find its way home, and given that this is a common occurrence, it’s clear the dog in question did not utilize any extraordinary power that its fellow dogs (and other types of animals, of course) do not possess. So, how did it get there?

Of course, I don’t know the answer to this question. However, there are two things I know for sure:

1. Generative AI is 100% based on statistical relationships between words (more precisely, between tokens). The model learns these relationships and uses that knowledge to create whatever content it’s been asked to create. However, the model doesn’t “understand” the words it uses. (A toy sketch of this idea follows this list.)

2. Dogs don’t understand words, either. Yet the fact that a dog can demonstrate at least one type of intelligent behavior suggests there might be a different type of AI that doesn’t require the boiling-the-ocean approach to learning that Gen AI does. Maybe we wouldn’t have to continue our headlong rush toward dedicating most of our electric power infrastructure to data centers. Maybe we could leave a little power for... you know... heating homes and things like that?
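Here is the toy sketch promised in point 1. Real generative models are vastly more sophisticated – they use neural networks trained over tokens, not simple word-pair counts – but this toy makes the underlying idea visible: a program can produce fluent-looking text purely from statistics about which words follow which, while understanding none of them. The corpus and generation length are invented for the example.

```python
# A toy "generative model": nothing but word-pair statistics.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# "Training": record which words follow which, keeping repetitions,
# so that frequent pairs are sampled more often.
follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)

# "Generation": repeatedly sample a statistically likely next word.
# The program has no idea what a cat or a mat is.
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
```

The output reads like plausible (if repetitive) English, yet the entire “model” is a table of co-occurrence counts.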

So, what could explain why the dog can find its way home, if it doesn’t have a wordless Gen AI model embedded in its skull? Again, I can’t answer that question for certain, but I can point out that infants don’t have any command of words, yet they seem to be able to “reason” based on symmetry. For example, a baby – even a newborn – can recognize an asymmetrical face and react differently to it than to a symmetrical one.

How does the baby do this? It’s simple: if the baby has a working knowledge of the mathematics of group theory – the “theory” that underlies symmetry – it will have no problem determining when a face is asymmetrical. Perhaps you wonder how a baby can understand group theory. The answer to that question is simple: it might be similar to how a Venus flytrap can count to five.
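Nobody knows what mechanism a newborn actually uses, so the following Python sketch is only a guess at what “applying group theory” could mean computationally. Symmetry has a precise group-theoretic definition: an object is symmetric if it is unchanged by a group action – here, the two-element group consisting of the identity and a left-right reflection. The toy “faces” are invented for the example.

```python
# "Symmetric" in the group-theoretic sense: invariant under a group
# action. Here the group has two elements: do nothing, or reflect
# left-to-right. The face grids below are invented for illustration.

def reflect(face):
    """The reflection element of the group: flip every row."""
    return [row[::-1] for row in face]

def asymmetry(face):
    """Count cells where the face differs from its mirror image.
    A score of 0 means the face is invariant under reflection."""
    return sum(a != b
               for row, mirrored in zip(face, reflect(face))
               for a, b in zip(row, mirrored))

symmetric_face = [
    "..#..",
    ".#.#.",
    "#...#",
    ".###.",
]
lopsided_face = [
    "..#..",
    ".#.#.",
    "#..##",
    ".##..",
]

print("symmetric face score:", asymmetry(symmetric_face))  # prints 0
print("lopsided face score: ", asymmetry(lopsided_face))   # prints 4
```

Notice that the check requires no vocabulary and no world knowledge at all – just the ability to apply one transformation and compare the result, which is the kind of function that could plausibly be wired in from birth.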

The point is that various functions (mathematical and otherwise) can be embedded in animals and even plants. An animal might use these functions to solve certain hard problems like, “Given that I’ve been wandering all over the neighborhood this afternoon, how can I find my way home before dinnertime?”

To conclude, how do we create a system that might perform as intelligently as a cat catching mice, or a dog finding its way home? First, we figure out which ingrained capabilities (like recognizing symmetrical objects) enable this intelligent behavior. Second, we figure out how to recreate those capabilities, either in hardware or software.
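Here is one speculative example of what that might look like, sketched in Python. Path integration (dead reckoning) is well documented in desert ants: the animal maintains a single running “home vector” by summing its own displacements, so it needs no map and no memory of individual landmarks. Whether dogs navigate this way is purely my assumption for illustration, as are the headings and distances below.

```python
# Path integration (dead reckoning): keep one running "home vector"
# by summing displacements. Constant memory -- two numbers -- no matter
# how long or winding the afternoon's wandering gets.
import math

def home_vector(legs):
    """Given (compass_heading_degrees, distance) legs, return the
    vector pointing from the current position straight back home."""
    x = y = 0.0
    for heading, distance in legs:
        x += distance * math.cos(math.radians(heading))
        y += distance * math.sin(math.radians(heading))
    return -x, -y  # home is the negation of everything traveled

# An afternoon of "aimless" wandering (headings and distances invented).
afternoon = [(0, 120), (90, 80), (30, 50), (200, 90), (135, 60)]

hx, hy = home_vector(afternoon)
distance_home = math.hypot(hx, hy)
bearing_home = math.degrees(math.atan2(hy, hx)) % 360
print(f"Home: {distance_home:.0f} m away, bearing {bearing_home:.0f} degrees")
```

Note the storage cost: however long the walk, the animal only ever updates two numbers. That fits the observation above that the dog can’t possibly be keeping a record of every tree, driveway, and rabbit.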

In one of my next posts, I hope to examine the ingrained capabilities that allow a dog to find its way home. We may learn that dogs are masters of group theory.

To produce this blog, I rely on support from people like you. If you appreciate my posts, please make that known by donating here. Any amount is welcome. Thanks!

If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com. And while you’re at it, please donate as well!

 
