First Things First: The Physics of Causality
Why do we remember the past and not the future? Untangling the connections between cause and effect, choice, and entropy.
by Kate Becker
July 16, 2019
You drop your coffee mug; it shatters on the kitchen tile. The neighbor kid throws a baseball; your back window breaks. Your toddler flushes his rubber ducky; the toilet overflows. First cause, then effect. There’s only one right way to put them in order. It’s so obvious it barely counts as an observation at all. But, sometimes, it’s by questioning the obvious that you turn up the best and most surprising insights.
Our lives are full of experiences that, like cause and effect, only run one way. The irreversibility of time, and of life, is an essential part of the experience of being human. But, incongruously, it is not an essential part of physics. In fact, the laws of physics don’t care at all which way time goes. Spin the clock backward and the equations still work out just fine. "The laws of physics at the fundamental level don’t distinguish between the past and the future," says Sean Carroll, a theoretical physicist at Caltech. "So how do you reconcile the time symmetric laws of physics with the world in which we live?"
Moreover, we are not passively carried along by time’s river: We make decisions every day in an effort to actively cause the effects we want. Your neighbor’s kid chose to throw his ball recklessly close to your window, after all. Your toddler reasoned that his toy duck would enjoy a swim down the toilet.
Deciding, then, is the living fulcrum of cause and effect. Yet physics has no language to describe this essentially human experience. "Physics and math departments, we write equations. And then there are humanities departments, where people talk about emotions and feelings," says Carlo Rovelli, a theorist at the University of Aix-Marseille in France. "These things should not be separated. The world is one, and we should find the way to articulate the relationship between these fields."
Now, with the support of two independent FQXi grants, Carroll and Rovelli are taking different approaches to untangling the messy knot that links the physics of cause and effect to our perceptions. Along the way, they hope to get fresh insight into the nature of time and our very human experience of living in a one-way universe.
To explain why time only flows in one direction, physicists often invoke the one law without a rewind button: the second law of thermodynamics. Developed in the nineteenth century to explain the science of heat and energy transfer in engines, the second law says that the 'entropy' of an isolated system—a box of gas particles, say—can only go up, not down, over time.
Entropy is often colloquially described as a measure of disorder or chaos. Imagine, for instance, a room that starts off containing hot air (made up of fast-moving particles) on one side and cold air (slow-moving particles) on the other. Wait long enough—and don’t introduce any judiciously placed heaters or coolers—and the particles will collide and mix randomly, evening out the temperature throughout the room. The new, "disorderly" space now has higher entropy than it did back when it was neatly divided into hot and cold areas.
How do you reconcile the time symmetric laws of physics with the world in which we live?
- Sean Carroll
By contrast, if you start with a room with a uniform temperature throughout, it is highly unlikely that a pocket of hot or cold air will form on its own: A thermodynamic system will not spontaneously shift to a more ordered, lower entropy state. Without intervention, the progress toward higher entropy is irreversible.
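To see the second law in action, it helps to strip the room down to a toy model. The short Python sketch below is only an illustration of the statistics involved, not anything from Carroll’s or Rovelli’s work: it starts all the particles on one side of the room, lets them wander at random, and tracks a coarse-grained Boltzmann-style entropy, the logarithm of the number of ways the particles could be split between the two halves.

```python
import math
import random

N = 100                  # number of gas particles in the toy "room"
left = [True] * N        # low-entropy start: every particle on the left side

def entropy(n_left: int) -> float:
    """Coarse-grained Boltzmann entropy (in units of k_B): log of the number
    of microstates that put exactly n_left particles in the left half."""
    return math.log(math.comb(N, n_left))

random.seed(0)
for step in range(2001):
    i = random.randrange(N)    # pick a particle at random...
    left[i] = not left[i]      # ...and let it drift to the other half
    if step % 400 == 0:
        n_left = sum(left)
        print(f"step {step:4d}  particles on the left: {n_left:3d}  entropy: {entropy(n_left):6.2f}")
```

Run it and the entropy climbs quickly toward its maximum, near an even split, and then just jitters there; it essentially never marches back down. That, in miniature, is the irreversibility the second law describes.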
But how might this lead to the one-directional flow of time we experience? Many physicists extend these thermodynamic ideas about entropy to the universe as a whole. Combine the second law with the fact that the universe started out with very low entropy—a side effect of the extreme gravity of the early universe—and voila, you have an irreversible "arrow of time" that applies to our entire universe and only points toward the future.
It may seem like this solves the problem of the origin of time’s arrow completely. But not all physicists agree. For instance, theorist Julian Barbour of Oxford University, who has written extensively on the nature of time, argues that notions of entropy originally derived to better understand steam engines cannot be readily applied to cosmic calculations; the steam-confining cylinders of an engine don’t have much in common with an ever-expanding universe. "This makes me very doubtful whether the conventional notion of entropy and the mantra that it must increase can be applied to the universe," Barbour says.
And while the "thermodynamic" arrow of time provides a satisfying explanation for many textbook physics questions, like how particles trapped in a room settle into equilibrium over time, it’s shakier on the more subtle questions: Why do we get older, not younger? Why do we remember what happened yesterday, not what’s going to happen tomorrow? Why do causes always come before effects? The same rules must be working behind the scenes in all these one-way phenomena, but how? "I’d said in talks for years that causes precede effects because the entropy of the universe started out low," says Carroll, "but if anyone said 'prove it,' I couldn’t."
Carroll’s aim is thus to find a concrete mathematical description that illuminates the hand-waving connection between cause-and-effect reasoning and entropy. While most physics problems have a clear cause and effect, many human problems are a snarl of variables, including some that change over time, some that aren’t immediately obvious, and some that hide behind others. Sometimes cause and effect are easy to work out: You spill a glass of red wine on a white dress, you get a big purple stain. But some cause-and-effect relationships are more complicated. Does drinking a glass of red wine with dinner make you live longer? Does it make cancer cells less likely to grow? Or might drinking a glass of wine be correlated with something else—having a more affluent lifestyle, perhaps—that could influence longevity?
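The wine question can even be turned into a toy calculation. The Python sketch below is only an illustration of confounding, not a real epidemiological model: a hidden "affluence" variable nudges both wine drinking and lifespan, wine itself does nothing, and the two still come out correlated in the simulated data.

```python
import math
import random

random.seed(1)

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# Hidden common cause: "affluence" nudges both wine drinking and lifespan.
# Wine itself adds nothing, yet it ends up correlated with longevity.
wine, lifespan = [], []
for _ in range(50_000):
    affluence = random.gauss(0.0, 1.0)
    p_wine = 1.0 / (1.0 + math.exp(-affluence))             # more affluent, more likely to drink wine
    drinks = 1.0 if random.random() < p_wine else 0.0
    life = 78.0 + 3.0 * affluence + random.gauss(0.0, 5.0)  # only affluence adds years
    wine.append(drinks)
    lifespan.append(life)

print("correlation(wine, lifespan) =", round(correlation(wine, lifespan), 3))
# Clearly positive (around 0.2) even though wine has zero causal effect here.
```

The data alone cheerfully report a link between wine and long life; the causal story behind that link is invisible.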
Lost Cause: Can entropy explain our perception of time? Credit: agsandrew
When causation and correlation are all knotted up, statisticians have a tool to help untangle them: Bayesian networks, graphs that show the relationships between multiple variables. In 2016, Carroll and a team of colleagues from Caltech and the University of California, Berkeley showed that it is possible to derive a new version of the second law of thermodynamics using Bayesian principles (A. Bartolotta et al., Phys. Rev. E 94, 022102 (2016)). They call it the "Bayesian Second Law of Thermodynamics."
As helpful as they are, however, Bayesian networks alone can’t always distinguish causation from correlation if the variables are interconnected in complicated ways. In 1995, the computer scientist and philosopher Judea Pearl, who was thinking about how computers could use artificial intelligence to uncover links between variables on their own, came up with a solution. Pearl’s "causal calculus" recasts Bayesian networks as equations that can reveal true cause-and-effect relationships.
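A rough way to see what Pearl’s intervention idea buys, continuing the toy wine model above (this is only an illustration; the full causal calculus also lets you work such things out from observational data plus a causal diagram, without running the experiment): assign wine by coin flip, independent of affluence, the way Pearl’s do-operator models an intervention, and the spurious association disappears.

```python
import math
import random

random.seed(2)

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# Same toy world as before, but now we intervene: wine is assigned by a coin
# flip, independent of affluence (roughly what Pearl writes as do(wine)).
wine, lifespan = [], []
for _ in range(50_000):
    affluence = random.gauss(0.0, 1.0)
    drinks = 1.0 if random.random() < 0.5 else 0.0           # do(wine): assigned at random
    life = 78.0 + 3.0 * affluence + random.gauss(0.0, 5.0)   # still no causal wine effect
    wine.append(drinks)
    lifespan.append(life)

print("correlation under do(wine) =", round(correlation(wine, lifespan), 3))
# Close to zero: once the intervention breaks the link to affluence,
# the data correctly report that wine does nothing in this toy world.
```

In the earlier, purely observational version, wine and lifespan were plainly correlated; under the intervention the correlation collapses, which is exactly the distinction between correlation and causation that the calculus is built to capture.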
Carroll’s idea is now to expand the "Bayesian Second Law" to allow for multiple, interconnected variables and then to incorporate Pearl’s mathematical infrastructure. If successful, this will show exactly how cause-and-effect relationships follow from the second law of thermodynamics. Then Carroll will fold in the fact that entropy was low in the past. He hopes that, when all the pieces come together, the equations will reveal not just that cause and effect are bound together, but that they only run one way in time.
"The huge, wonderful goal at the end of the day is that I get to say that causes happen before effects," Carroll laughs.
Though the ideal conclusion of Carroll’s project would just be to confirm what our intuition tells us is true, the work could have a serious impact on how scientists understand causation. "What Sean Carroll proposes is to make this link really tight and rigorous, connecting a formal understanding of entropy with new formal models of causation," says Craig Callender, a philosopher of science at the University of California, San Diego, who has written a book on the nature of time. "If successful, it will replace something a bit mushy with something very precise."
While Carroll is occupied with understanding the physical origins of time’s direction, Rovelli is bringing together multiple disciplines to understand how we manipulate cause and effect to successfully shape our futures, one choice at a time. Wake up or hit snooze? Cheerios or cornflakes? Caf or decaf? Walk or take the train? We are "agents" who think, decide, and act, and our actions have ripple effects on our world and on the other people in it.
The standard, hand-waving route from the laws of physics to the experience of being human is that, just as simple pieces of thread come together to make a rich tapestry, the human condition emerges, spectacularly but naturally, from the physics of the tiny particles that make us up. That sounds very poetic but, as Rovelli points out, we do not actually know how this happens.
Rovelli has spent much of his career trying to translate philosophical ideas into the mathematical formalisms of physics, and vice versa. Agency presupposes a direction of time: we make choices that affect the future, never the past. Rovelli thus argues that without entropy to set time’s direction, there can be no agency.
Carlo Rovelli, University of Aix-Marseille
But how do you rigorously connect an intangible quality like human agency to the material physical notion of entropy? Rovelli’s tactic is to turn to the concept of information, which is already emerging as a potential Rosetta stone between the language of philosophy and the language of physics.
The American mathematician Claude Shannon founded the discipline of "information theory" in 1948. He defined entropy in terms of communicating encoded data: entropy, in this formulation, sets the limit on how far the data can be compressed while still reliably conveying its information.
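Shannon’s entropy has a compact formula, H = −Σ p(x) log₂ p(x), the average number of bits per symbol that any lossless code needs. The little Python function below is a standard textbook calculation rather than anything specific to Rovelli’s project; it computes that number from symbol frequencies.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """H = -sum over symbols of p * log2(p), in bits per symbol."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A repetitive message carries little information per symbol and compresses well;
# a message of equally likely symbols cannot be compressed at all.
print(shannon_entropy("aaaaaaaab"))   # about 0.50 bits per symbol
print(shannon_entropy("abcdefgh"))    # 3.00 bits per symbol (8 equally likely symbols)
```

A string of nearly identical symbols has entropy close to zero and compresses well; a string in which every symbol is equally likely cannot be compressed at all.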
Rovelli linked this information-theory notion of entropy with Darwin’s theory of evolution in his paper, "Meaning = Information + Evolution," which won first prize in FQXi’s 2016 essay contest. Of all the information bound up in every atom of a living thing, only a tiny fraction is important for that creature’s survival, he pointed out. What does matter? That a flower stem can tilt toward the sun; that a bacterium can propel itself toward food; that a little fish can swim away from bigger fish, fast. By singling out the correlations that keep living things on Darwin’s good side, Rovelli saw a way to separate out "meaningful" information.
Later, Rovelli found out that he wasn’t actually the first person to think about information this way. Philosopher Fred Dretske explored it in a 1981 book, Knowledge and the Flow of Information. (More recently, mathematician David Wolpert of the Santa Fe Institute, in New Mexico, has also been investigating the idea, as he describes in this FQXi talk.) But the exercise injected fresh life into Rovelli’s thinking, especially because it gave him a way to define "meaningful information"—and that even slipperier idea, "meaning" itself. "An agent is something that uses information that is meaningful and uses it for deciding," Rovelli says. "I feel I’m seeing a path."
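One standard way to put a number on the sort of correlations Rovelli is singling out is mutual information, which measures how much knowing one variable tells you about another. The sketch below is a cartoon, not Rovelli’s formalism: it treats the direction of food and a bacterium’s swim direction as two binary variables and computes the bits of correlation between them, the part of the bug’s internal information that actually matters for survival.

```python
from math import log2

# Toy joint distribution over (direction of food, direction the bacterium swims).
# The bacterium senses food imperfectly, so the two variables are correlated.
joint = {
    ("food left", "swim left"): 0.40, ("food left", "swim right"): 0.10,
    ("food right", "swim left"): 0.10, ("food right", "swim right"): 0.40,
}

def marginal(joint, index):
    """Marginal distribution of one of the two variables (index 0 or 1)."""
    m = {}
    for key, p in joint.items():
        m[key[index]] = m.get(key[index], 0.0) + p
    return m

def mutual_information(joint):
    """I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) * p(y)) ), in bits."""
    px, py = marginal(joint, 0), marginal(joint, 1)
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items() if p > 0)

print(round(mutual_information(joint), 3))   # about 0.28 bits of survival-relevant correlation
```

Make the swimming random and that number drops to zero: information that no longer has any bearing on what the organism does, or on whether it survives.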
Over the next year or two, Rovelli anticipates drawing up a new mathematical model for the kind of systems he wrote about in the 2016 paper: systems with insides and outsides, in which some correlations are a matter of life and death, and some don’t count at all. He hopes that the model will show how physical laws can coexist with the experience of agency. "The game will be to see both things: The physical, deterministic equations, and on top of it, to see the deciding aspect," Rovelli says.
It may not work, Rovelli admits. But, he says, the effort is worth it. "If we could better make steps toward understanding what we are as human beings—any step in that direction is going to be useful."