The Thermodynamic Limits on Intelligence: Q&A with David Wolpert
Calculating the energy needed to acquire and compute information could help explain the (in)efficiency of human brains and guide the search for extra-terrestrial intelligence.
by Miriam Frankel
March 25, 2022
The question of just what distinguishes living matter from dead lumps of atoms, such as rock, is one of the greatest mysteries of physics. Even our best theories can’t really describe life, agency, consciousness or intelligence. But, with the help of an FQXi grant of over $118,000, David Wolpert, a physicist at the Santa Fe Institute, New Mexico, is trying to crack the puzzle with information theory and statistical physics. He wants to understand what constitutes and limits intelligence—defined as the ability to acquire information from the surroundings and use it for computation in order to stay alive. The findings may one day help explain why human brains aren’t more efficient—and how best to search for intelligent life in the universe.
You propose that intelligence is intimately tied to information gathering. As such, it makes sense that information theory, first developed by Claude Shannon in the 1940s, is a good approach to understanding it. But you are also combining this analysis with statistical physics—an approach normally used by physicists working on thermodynamics, the science of heat and energy transfer, to describe the properties of large groups of atoms. What’s the benefit of considering intelligence in terms of statistical physics or thermodynamics? And what challenges come with applying it to living systems?
When it comes to life, it’s very natural to use statistical physics because you’re trying to generalize from many different biochemical systems—all of which are far away from thermodynamic equilibrium, and rely on thermodynamic phenomena to stay that way. Traditional statistical physics, by contrast, was built for what are called "equilibrium systems." Think, for example, of a cup of hot tea cooling down to the same temperature as its surroundings—a state of equilibrium that allows no change after that. But if you look at everything around us—all the interesting systems—they’re not at equilibrium.
Do you mean, for example, biological processes such as those responsible for pumping certain ions into our cells and thereby creating a higher concentration of such ions inside them compared to outside—a state out of equilibrium?
Yes, or, as another example, the biological process of a journalist interviewing a nerd over Zoom. None of these processes are in equilibrium. In fact, arguably, equilibrium means death. Certainly, equilibrium means no intelligence.
Fortunately though, there was actually a bit of a revolution in statistical physics, starting in the 21st century, with the birth of the field of non-equilibrium statistical physics, or "stochastic thermodynamics" as it is often called. This new field was not just more complicated math built on the previous math. It was more like "wait, let’s look at this thing a little differently"—and then when you’ve done that, you’ve knocked open the piñata, and all this formalism falls out. It has basically started to explode. And it provides us with a way of doing statistical physics for evolving systems that are arbitrarily far away from thermal equilibrium.
So how can this help us understand intelligence?
When you’re getting "semantic information"—information which carries meaning for a given system—from the environment, you are outside of thermal equilibrium. If we were to stop that information from coming in, if we were to intervene in the dynamic process coupling the system to its environment, that system would relax to thermal equilibrium (Kolchinsky, A. & Wolpert, D. H. Interface Focus 8, 20180041 (2018)). Imagine you are a cell and you can go right or left—and there’s more food to your right and less food to your left. You probe in both directions to see where the amount of food is going up. If you don’t do this, you’ll run out of food and die.
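To make that thought experiment concrete, here is a minimal toy sketch (not from the interview; the function names and numbers are invented purely for illustration) of an agent on a one-dimensional food gradient. The version that probes its surroundings and acts on that semantic information gathers far more food than one that ignores the environment and wanders at random.

```python
import random

# Toy model: food concentration grows to the right along a line.
def food_at(x):
    return max(0.0, x)

def run(steps=1000, use_information=True, noise=0.5, seed=0):
    rng = random.Random(seed)
    x, eaten = 0.0, 0.0
    for _ in range(steps):
        if use_information:
            # Probe both directions with noisy sensors, then move toward
            # the side that appears to have more food.
            right = food_at(x + 1) + rng.gauss(0, noise)
            left = food_at(x - 1) + rng.gauss(0, noise)
            x += 1 if right > left else -1
        else:
            # Ignore the environment entirely: a plain random walk.
            x += rng.choice([-1, 1])
        eaten += food_at(x)
    return eaten

print("probing the gradient:", run(use_information=True))
print("ignoring it (random):", run(use_information=False))
```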
You are trying to work out how much energy it takes to maintain intelligence in this way under various limiting constraints. Why?
Whenever you do a very simple computational operation, such as the elementary operations underlying the functioning of neurons in the brain, there’s a minimal thermodynamic cost, that is, a minimal amount of energy required. We know the minimum energetic cost of simply erasing one bit—0 or 1—of information is Boltzmann’s constant (which is tiny, on the order of 10⁻²³ joules per kelvin) multiplied by the temperature and the natural logarithm of 2 (roughly 0.69). This is teeny. Even if you scale it up to the human brain or to living systems, it’s insignificant. So why is it that with systems such as the human brain, where there are massive evolutionary incentives to reduce the cost of intelligence, the best we can do is get it down to the brain consuming 20% of all your calories? Or, as another example, why is it that the digital computers we use, such as your phone and your Mac, use so much energy, despite all these brilliant computer engineers trying to use as little energy as possible? Why is it that cells require so much energy to do the very, very simple kinds of intelligence that they do? It must be because you can’t actually get anywhere close to the tiny theoretical limit. There must be other constraints on how the system can behave that keep it from getting anywhere close to the theoretical minimum energetic cost.
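To see how wide that gap is, here is a quick back-of-envelope sketch. It is a rough illustration rather than a calculation from the interview: the figure of 10¹⁵ elementary bit operations per second is an assumed order-of-magnitude stand-in for the brain’s synaptic activity. Even at that generous rate, computing at the Landauer limit at body temperature would cost only microwatts, while the brain actually runs on roughly 20 watts.

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant, in joules per kelvin
T = 310.0            # roughly body temperature, in kelvin

# Landauer bound: minimum energy needed to erase one bit of information.
landauer_per_bit = k_B * T * math.log(2)
print(f"Landauer limit per bit: {landauer_per_bit:.2e} J")   # ~3e-21 J

# Assume (hypothetically) ~1e15 elementary bit operations per second as a
# stand-in for the brain's synaptic activity.
ops_per_second = 1e15
ideal_power = ops_per_second * landauer_per_bit
print(f"Power at the Landauer limit: {ideal_power:.2e} W")   # ~3e-6 W

# The actual brain consumes roughly 20 W (about 20% of resting metabolism).
brain_power = 20.0
print(f"The brain runs ~{brain_power / ideal_power:.0e} times above the limit")
```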
What might those constraints be?
Your brain is constrained in that whatever intelligent actions you take have to be built out of neurons—with all kinds of noisy stuff going on. That’s a massive constraint. And digital engineers making your phone and your Mac are under the constraint that they have to use what’s called CMOS (complementary metal oxide semiconductor) technology. Those constraints must be what’s causing the energetic needs of intelligent systems to be so large. We know what these constraints are in many real systems. But for much less sophisticated systems, where we’re dealing with much simpler sets of more fundamental constraints, this relationship between energy, constraints and intelligence is still very, very complicated.
Can this help us better understand human intelligence?
We might gain much greater insights into living organisms here on Earth, as well as into the origins of life. If we can understand the relationship between the constraints, the energetics and intelligence, we might be able to answer questions such as "Why the hell can’t we have brains that are much less costly?" There are great incentives for evolution to keep us as dumb as possible. So to understand the evolutionary history of humans, and why evolution decided to keep going with us despite the costs, we need to learn about the thermodynamic characteristics, the energetic constraints and the energetic requirements of intelligent systems.
Could this help build better intelligent systems, such as AI?
Hopefully, it could one day help us build more energetically efficient ones.
What other insights might this line of research lead to?
If we could understand this, and the general physics of it, that would give us a way to understand, for example, what we’re looking at when we send probes to the Jovian atmosphere. If there are living intelligent beings on Jupiter, they’re not going to be like us, made out of organic chemical molecules. They’ll be something else—something very strange. If there are creatures living in the photosphere of the sun—building huge civilizations there—they would not be like us. However, if we can find systems whose physics is the same as the physics that we identified for intelligent systems, then we’ll be in a better position to discover them.