Mind and Machine: What Does It Mean to Be Sentient?
Using neural networks to test definitions of ‘autonomy.’
by Kate Becker
March 23, 2022
More than two thousand years ago, the ancient Greeks told the story of Talos, an enormous bronze robot charged with defending Crete. Three times a day, every day, Talos trooped around the island’s perimeter, keeping lookout for enemy ships and heaving rocks at any that dared approach. Talos was a machine, but a machine with a difference: Talos was alive.
Pinocchio, Pygmalion, the golems of folklore—they all speak to our preoccupation with how the inanimate becomes animate. Whether brought to life by the magic of a fairy’s wand, a secret word, or ichor, the blood of the gods, they straddle the line between living and nonliving, mind and machine.
And they have stayed locked in the pages of storybooks—until now. Computers are the new embodiment of these living machines, and the more powerful they become, the more human they seem. Artificial intelligence can best us at tasks once assumed to require uniquely human intellect. AI computers can defeat humans at Go and chess. They can name everyone in our family photos (even the third cousins twice removed) and translate from Yiddish to Norwegian. With each new flicker of creativity and intuition, they are emboldening researchers and philosophers to ask exactly where the line between human and machine really sits.
"I’ve always been interested in the question of what makes us sentient beings separate from our environment," says
Larissa Albantakis, a computational neuroscientist at the Wisconsin Institute for Sleep and Consciousness at the University of Wisconsin-Madison. But the question gets snarled up from the get-go. What do we mean by "sentient," or "conscious," anyway? Albantakis is hoping to untangle the mystery by pulling at the thread of autonomy.
What makes us sentient beings separate from our environment?
- Larissa Albantakis
Merriam-Webster defines autonomous as "existing or acting separately from other things or people." Sounds simple enough. But the definition starts to break down even when applied to some living things. "Take a slime mold," an organism that can live freely as a single cell but can also swarm with others and fuse to create one coordinated mass, says Albantakis. "It’s not clear if the whole is one thing, or if the individual cells within it are actually individual entities." Is the slime mold autonomous? And if we struggle to answer the question for a living thing, what happens when we ask about something that isn’t even alive? "These are questions that we don’t really have a way to answer yet, and not even a good sense of which qualities are relevant for answering this question."
"Autonomy is like life: We know it when we see it, but defining it is difficult," says
Daniel Polani, an artificial intelligence expert at the University of Hertfordshire, UK. "If we have an ant colony, is the individual ant an autonomous system? Is the colony an autonomous system? If two colonies merge, is the whole system autonomous? Suddenly, autonomy is not a well-defined notion anymore."
Researchers working in information theory, causal theory, and dynamical systems have all come forward with definitions that reflect the attitudes and ways of thinking of their chosen fields. Yet there is still no consensus on how to define or quantify autonomy. To edge closer to one, Albantakis is placing the definitions in head-to-head competition. Which will deliver the most coherent measure of autonomy?
In the popular imagination, the gold standard of human-like computer intelligence is the Turing test. The concept is simple: If a computer can fool a human conversation partner into believing it is a person, then it passes the test. How the computer accomplishes that task, whether it attaches meaning to the words passing in and out of its processors, is beyond the scope of the test. The Turing test is exclusively about behavior that you can observe on the outside.
That is consistent with the "functionalist" perspective that prevails in contemporary neuroscience, says Albantakis. But, she argues, autonomy is something that happens on the inside. After all, the exact same behavior can be performed autonomously or reflexively. A can-can dancer and a patient tapped with a reflex hammer both kick, but only one of them is doing so autonomously. To figure out whether an act is truly autonomous, then, you must ask not just what the action is, but how it is happening.
One reason today’s AI systems can rival—and sometimes best—human intelligence is that their "thinking" happens over networks of connected nodes that are roughly analogous to the neurons inside a living brain. Over time, as the AI learns, the connections between nodes adapt to become weaker or stronger, and the system gets better at its job, whether that’s telling pictures of cats from pictures of dogs, playing Space Invaders, reading lips, or any number of tasks at which neural networks excel.
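For readers who want to see the idea in code, here is a deliberately tiny sketch, in Python, of a single artificial neuron whose connection weights strengthen or weaken as it learns to tell two kinds of inputs apart. The numbers and feature names are invented for the illustration; this is not drawn from any of the systems described in this story.

```python
# Toy illustration only: one artificial "neuron" whose incoming connection
# weights are nudged up or down until it separates two kinds of inputs.

def step(x):
    return 1.0 if x > 0 else 0.0

# Two made-up inputs ("cat-like" and "dog-like" feature vectors) with labels.
examples = [([1.0, 0.2], 1.0), ([0.1, 0.9], 0.0)]

weights = [0.0, 0.0]   # connection strengths, adjusted as the system learns
bias = 0.0
rate = 0.5             # how strongly each mistake nudges the weights

for epoch in range(20):
    for features, target in examples:
        output = step(sum(w * f for w, f in zip(weights, features)) + bias)
        error = target - output
        # Connections that led to a wrong answer are weakened; connections
        # that would have helped are strengthened.
        weights = [w + rate * error * f for w, f in zip(weights, features)]
        bias += rate * error

print("learned weights:", weights, "bias:", bias)
```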
But human brains can do something these neural networks typically can’t: create connections where there were none before. Think of a neural network as a city’s traffic grid: Most artificial systems are limited to adding and subtracting lanes. Living brains, on the other hand, come equipped with the machinery to build entirely new streets, tunnels, and bridges that bring far-flung neurons into close communication, or create new loops.
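The contrast can also be sketched in code. In the toy example below, ordinary learning only nudges the strengths of connections that already exist, while a separate "grow" step wires up a pair of nodes that were not connected before. The node names and random choices are invented for illustration and do not describe how any particular research system is implemented.

```python
# Toy sketch of the "new streets" idea: re-weighting existing connections
# versus adding a connection that did not exist before.
import random

nodes = ["sensor_a", "sensor_b", "hidden", "motor"]
# weights maps (source, target) -> connection strength
weights = {("sensor_a", "hidden"): 0.4, ("hidden", "motor"): 0.7}

def reweight(weights, amount=0.1):
    """Ordinary learning: nudge the strengths of existing connections."""
    return {edge: w + random.uniform(-amount, amount) for edge, w in weights.items()}

def grow_connection(weights, nodes):
    """Structural change: wire up a pair of nodes that were not yet connected."""
    candidates = [(a, b) for a in nodes for b in nodes
                  if a != b and (a, b) not in weights]
    if candidates:
        source, target = random.choice(candidates)
        weights[(source, target)] = random.uniform(-1.0, 1.0)
    return weights

weights = reweight(weights)                # the "add or subtract lanes" step
weights = grow_connection(weights, nodes)  # the "build a new street" step
print(weights)
```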
To probe the differences between various measures of autonomy, Albantakis wanted to apply them to an AI that, like a living brain, has the capacity to evolve new connections. She found the ideal test subjects in artificial organisms called "animats." Albantakis’ animats come from a particular "species" called Markov Brains, developed by computational biologist
Chris Adami of Michigan State University, in East Lansing. They are made up of tiny neural networks consisting of just a few neurons, plus motors and sensors that enable them to move through and sense their environment. (Albantakis’ animats are computer simulations, but a handy engineer could build real ones without much trouble.)
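As a rough cartoon of the idea (not of Markov Brains themselves, which are built from evolvable logic gates rather than the hand-set weights used here), an animat-like agent can be sketched as a couple of sensors, a tiny network, and two motors.

```python
# A cartoon animat: sensors feed a tiny fixed network that drives two motors.
# Purely illustrative; the wiring and thresholds are invented for this sketch.

class ToyAnimat:
    def __init__(self, weights):
        # weights[i][j]: influence of sensor j on motor i
        self.weights = weights

    def sense(self, left_wall, right_wall):
        # Sensors report 1.0 when a wall is detected on that side.
        return [1.0 if left_wall else 0.0, 1.0 if right_wall else 0.0]

    def act(self, sensors):
        # Each motor fires if its weighted sensor input crosses a threshold.
        motors = []
        for motor_weights in self.weights:
            drive = sum(w * s for w, s in zip(motor_weights, sensors))
            motors.append(drive > 0.5)
        return {"turn_left": motors[0], "turn_right": motors[1]}

animat = ToyAnimat(weights=[[0.0, 1.0],   # right wall -> turn left
                            [1.0, 0.0]])  # left wall  -> turn right
print(animat.act(animat.sense(left_wall=True, right_wall=False)))
```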
Albantakis set her animats loose in a series of simulated mazes, giving them a variety of visual signals to cue them to turn right or left. Over repeated rounds of evolution in the maze, many animats came to solve it perfectly.
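The evolutionary loop itself is conceptually simple and can be schematized in a few lines: score each animat, keep the best performers, and refill the population with mutated copies of them. The fitness function below is only a stand-in for the actual maze simulation, and the genome is an invented list of numbers.

```python
# Schematic of an evolutionary loop: evaluate, select the best, mutate copies.
import random

def fitness(genome):
    # Placeholder for "how well did this animat solve the maze?"
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, amount=0.1):
    return [g + random.uniform(-amount, amount) for g in genome]

population = [[random.random() for _ in range(6)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # keep the best maze-solvers
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]   # refill with mutated copies

print("best fitness:", fitness(max(population, key=fitness)))
```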
On the outside, these high-achieving animats were indistinguishable: they all displayed the same perfect maze-solving behavior. But Albantakis discovered that they were actually very different "under the hood." In some, each neuron simply handed its signal off to the next along the processing line, in what computer scientists call a "feedforward" architecture. Other animats had evolved feedback connections, with signals looping through neurons and back again; computer scientists call these "recurrent" networks.
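One way to make the "under the hood" difference concrete is a plain graph check: a feedforward network's connection graph has no cycles, while a recurrent one lets a signal loop back to a neuron it has already passed through. This structural test is not Albantakis's causal measure of autonomy, just a simple illustration of the distinction, with made-up neuron names.

```python
# Distinguish feedforward from recurrent wiring by looking for cycles.

def has_cycle(connections):
    """connections maps each neuron to the neurons it sends signals to."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True          # we looped back: a recurrent connection
        visiting.add(node)
        looped = any(visit(nxt) for nxt in connections.get(node, []))
        visiting.discard(node)
        done.add(node)
        return looped

    return any(visit(node) for node in connections)

feedforward = {"sensor": ["hidden"], "hidden": ["motor"]}
recurrent = {"sensor": ["hidden"], "hidden": ["motor", "hidden2"], "hidden2": ["hidden"]}
print(has_cycle(feedforward), has_cycle(recurrent))   # False True
```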
If feedforward systems resemble the nervous system impulses that trigger a knee-jerk reflex, recurrent systems are more like the brainwork that goes into a can-can dancer’s kick: listening to the music, recalling the steps she’s learned, attending to exactly how her calf extends from her ruffled petticoats. Only the recurrent systems should truly "count" as autonomous, argues Albantakis.
But only some measures of autonomy were able to capture the difference between feedforward and recurrent architectures. "Some measures ultimately capture features of the environment rather than of the agent itself," says Albantakis. "We want to base our measure of autonomy on a causal description of its underlying mechanisms rather than observed correlations."
Insights from these artificial organisms could one day illuminate what happened deep in biological history, when inanimate molecules first combined to become primordial living things.
Biologists don’t typically examine questions like autonomy. "Work on concepts such as autonomy has been largely dismissed by mainstream biologists as ‘untestable speculation about old fashioned ideas that do not really qualify as science,’" says
Keith Farnsworth, a theoretical biologist at Queen’s University Belfast, UK. But Albantakis’ "genuinely quantitative effort" and her willingness to cross traditional boundaries between disciplines may change that, Farnsworth argues.
"If we have some abstract notion of autonomy, we can go back to the origins of life and living systems and we can see whether interacting proteins form a rudimentary autonomous system," says Albantakis. "More generally, if we have a working measure of autonomy, we can then apply it to all sorts of systems, biological or not—individuals or groups of individuals, interacting proteins, neurons, or brain regions—and quantify the degree to which they are independent from the environment."
Much of this work is still speculative, but it’s just the kind of cross-disciplinary thinking that excites Polani. "The interesting stuff happens at the boundary," he says—a thought that rings just as true at borderlands between academic disciplines as it does at the unmapped frontier of autonomy, consciousness, and life.