Infinite space may be mind-blowing, but infinite possibility is far weirder. Stars, galaxies, and gas clouds fill space by the trillions, no two exactly alike. Down on Earth, evolution creates "endless forms most beautiful and most wonderful," as Darwin famously wrote. Human culture is ceaselessly creative. Even when artists tread familiar ground--when pop stars strum the same chord progressions or authors write yet another biography of Abraham Lincoln--they manage to create something never before seen.
Computer systems conspicuously lack this open-endedness. They'll do a task you spell out for them, and they might even surprise you with an out-of-left-field solution. But then they stop. "We want to create new solutions forever," said Olaf Witkowski of Cross Labs, an artificial intelligence (A.I.) skunk works based in Kyoto, Japan. "And that's the problem of A.I.: It plateaus." Witkowski spoke on the issue at the last FQXi conference:
[youtube:WndzlE8Vf0Q, 560, 315]
This has been the central problem in the digital simulation of life ever since John von Neumann pioneered the field in the 1940s. If you take the view that what you cannot create, you do not understand, the closed-endedness of machine systems suggests there's something about the fecundity of the natural world that scientists aren't quite getting. "We don't even know why it's difficult," Witkowski said. "We just notice it is."
On a practical level, the tendency of machines to reach a dead end limits what they can do, ruling out almost any task that demands flexibility, initiative, and lateral thinking. "Many important possibilities for A.I., including seemingly mechanical tasks like self-driving, actually may not be possible without cracking open-endedness," said Ken Stanley, formerly at Uber's A.I. labs, now at OpenAI in San Francisco.
Most machine-learning systems are goal-oriented by design. You describe what the solution should look like by providing sample data or a set of constraints to satisfy. The machine will then find the optimal answer. Although this technology adopts the terminology of biology--"neural" networks and artificial "life"--it is also deeply rooted in physics. For my forthcoming book on physics and the mind, I interviewed John Hopfield and others who pioneered the subject.
But even systems that are built expressly to mimic the natural world, such as genetic algorithms and other evolutionary approaches, converge on an answer and then get stuck. In a sense, this is expected: No finite system can explore indefinitely; it necessarily starts repeating itself. The puzzle is that a machine stagnates well before reaching that point. Size alone can't be the issue: If you make a system bigger or give it more time, it doesn't explode in creativity. "It learns more of the same," Witkowski said.
Divergent Search
To try to overcome these limitations, Stanley and his colleague Joel Lehman developed the concept of "divergent search," a kind of computerized brainstorming that explicitly seeks a diversity of solutions. Witkowski, for his part, is fascinated by the creative power of language. He assembles his A.I. systems into teams that perform better if they talk to one another. They create their own private language, which not only lets them collaborate but also structures their explorations.
A web search is a convergent search. Google ranks web pages by how often they are linked to--it converges on the most popular answer to your query. You enter "grandfather" and typically get a screen full of images of white-haired white men. Divergent search doesn't presume to rank. It instead seeks the broadest possible range of answers. With "grandfather," it would return the full spectrum of human grandfathers as well as grandfather clauses in legal contracts, the grandfather paradox in time travel, Grandfather Mountain in North Carolina, and so on. The distinction between convergent and divergent thinking goes back to the psychologist Joy Paul Guilford in the 1950s.
Stanley and his colleagues adopt this principle for evolutionary computation, a form of digital breeding in which the computer juggles multiple possible solutions to a problem, ranks them by fitness, selects the best, combines or randomly alters them, and repeats. Rather than ruthlessly culling the laggards, Stanley lets a thousand solutions bloom. In one example, his system designed walking gaits for two-legged robots. A convergent search, conducted according to a winner-take-all logic, landed on something akin to human walking. A divergent search returned all sorts of weird stumbling motions. Most were painful to watch, but some contained the germ of a better idea and, with refinement, produced a more efficient gait than one directly modeled on humans.
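To make the contrast concrete, here is a minimal sketch in Python of the two selection rules inside the same evolutionary loop. The toy problem, the two-dimensional "behavior" space, and all function names are invented for illustration, not taken from Stanley's code; novelty search in the style of Lehman and Stanley ranks candidates by how different their behavior is from anything in a growing archive, rather than by how well they score on a fixed objective.

```python
# A toy contrast between convergent (fitness-ranked) and divergent
# (novelty-ranked) evolutionary search. Names and the 2-D "behavior"
# space are illustrative only.
import math
import random

def mutate(genome, sigma=0.1):
    """Randomly perturb each gene -- the 'random alteration' step."""
    return [g + random.gauss(0, sigma) for g in genome]

def behavior(genome):
    """Map a genome to an observable behavior (here, a 2-D point)."""
    return (math.sin(genome[0]), math.cos(genome[1]))

def fitness(genome):
    """A fixed objective: how close the behavior is to a target point."""
    bx, by = behavior(genome)
    return -((bx - 1.0) ** 2 + (by - 0.0) ** 2)

def novelty(genome, archive, k=5):
    """Average distance to the k nearest behaviors seen so far."""
    bx, by = behavior(genome)
    dists = sorted(math.hypot(bx - ax, by - ay) for ax, ay in archive)
    return sum(dists[:k]) / max(1, min(k, len(dists)))

def evolve(select, generations=100, pop_size=50, keep=10):
    population = [[random.uniform(-2, 2) for _ in range(2)]
                  for _ in range(pop_size)]
    archive = []  # grows with every behavior ever expressed
    for _ in range(generations):
        archive.extend(behavior(g) for g in population)
        ranked = sorted(population, key=lambda g: select(g, archive),
                        reverse=True)
        parents = ranked[:keep]  # survivors of this round
        population = [mutate(random.choice(parents))
                      for _ in range(pop_size)]
    return population, archive

# Convergent search: rank by fitness alone, ignoring the archive.
converged, _ = evolve(lambda g, a: fitness(g))
# Divergent search: rank by novelty alone -- reward being different.
diverged, archive = evolve(novelty)
```

Run with fitness-based selection, the population should cluster around the single target behavior; run with novelty-based selection, the archive should spread across the behavior space, much as the divergent gait search spread across all those stumbling motions.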
Stanley sees a life lesson in this and, every time I talk to him, has a new aphorism: "You can achieve more by not trying to achieve anything"; "You can solve the problem better by not trying to solve the problem." Not that we should all sit around getting stoned: It's just that to achieve something, we shouldn't be overly focused on that thing. We should not cease from exploring, and the end of all our exploring will be to solve the original problem better than ever. This principle is familiar to anyone who has done scientific research or watched the Beatles compose songs, but it sometimes gets forgotten in our national obsession with ranking.
In 2019 and 2020, Stanley created a new version of his system, Paired Open-Ended Trailblazer, or POET. Not only do solutions to a problem evolve, but so does the problem itself. He again considered two-legged robots in virtual landscapes, and now it wasn't just their gait that evolved, but also the topography they were adapted to navigate.
[youtube:RX0sKDRq400, 560, 315]
This dual evolution captures an important feature of natural evolution. As species evolve, they alter their fitness landscape; they never converge because the target is moving. In addition, POET transplants robots from one terrain to another. Sometimes they outrun the incumbents, further demonstrating that the best solution to a problem can come from considering some other problem. "You keep trying out agents who have been optimizing in one task on many others," he said.
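The outer loop is simple enough to caricature in a few lines. In the sketch below, the "environment" is collapsed to a single terrain-roughness number and the scoring function is a stub; the real POET evolves whole obstacle courses for a simulated bipedal walker, so treat every name and number here as illustrative.

```python
# A schematic of POET's outer loop: the problems (terrains) evolve
# alongside the solutions (agents), and agents are periodically
# transplanted between terrains.
import random

def score(agent, env):
    """Stub: how well this agent handles this terrain roughness."""
    return -abs(agent - env) + random.gauss(0, 0.05)

def optimize(agent, env, steps=20, sigma=0.1):
    """Local hill climbing: improve an agent on its paired terrain."""
    for _ in range(steps):
        candidate = agent + random.gauss(0, sigma)
        if score(candidate, env) > score(agent, env):
            agent = candidate
    return agent

pairs = [(0.0, 0.0)]  # (environment, agent): flat ground, naive walker
for generation in range(50):
    # 1. The problem itself evolves: spawn a perturbed terrain,
    #    seeded with a copy of an existing agent.
    env, agent = random.choice(pairs)
    if len(pairs) < 10:
        pairs.append((env + random.gauss(0, 0.3), agent))
    # 2. Every agent keeps optimizing within its own environment.
    pairs = [(e, optimize(a, e)) for e, a in pairs]
    # 3. Transfer: audition every agent in every environment; a
    #    transplant that beats the incumbent takes over that niche.
    agents = [a for _, a in pairs]
    pairs = [(e, max(agents, key=lambda a, e=e: score(a, e)))
             for e, _ in pairs]
```

Step 3 is where the cross-pollination Stanley describes happens: an agent honed on one terrain can displace the native of another.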
Stanley noted that even a divergent search, run for long enough, will eventually become a convergent search, since the space of solutions is finite. Natural creativity, in contrast, expands the very space it is searching. Machine systems can't claim to be truly creative until they, too, can do that.
When Two Wrongs Do Make a Right
The importance of language comes naturally to Witkowski. His father is Polish; his mother, Vietnamese. He was born in Brussels; studied in Valencia, Spain, and Tokyo, Japan; worked in Sheffield, England, and Princeton, New Jersey; and now lives in Kyoto. He wrote his master's thesis on translating Incan knotted cords, or quipus (an Incan quipu from the Larco Museum collection, in Lima, Peru, is pictured below, right, by Claus Ableiter). "I developed this passion for languages," Witkowski said. "My father, too. We always language-switch to tell jokes." He has spent much of the pandemic working on a book about how communication shapes thinking.

In a sense, it's obvious that agents might be able to achieve more by communicating. They can pool information, combine efforts, and avoid duplication. What's less obvious is that communication shouldn't be too easy. In their scenarios, Witkowski and his colleagues deliberately constrict the bandwidth or inject noise into the agents' messages and find that the agents get better at their collective task. "We made it really difficult, and actually that made it easier for them to pass on helpful information," he said.
Throwing sand in the gears of communication plays several roles. It prevents the equivalent of groupthink, in which the agents move in lockstep and stop exploring diverse alternatives. It breaks the agents out of ruts and diversifies the solutions they consider. And it forces the agents to be selective: they can't just dump everything they know, but must compress their messages into a format--a language--the others can make use of. That, in turn, gives structure to the agents' individual deliberations. Witkowski thus supports the emerging view that bottlenecks are essential to cognition.
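To see what "constricting the bandwidth" might look like in code, here is a minimal sketch of a lossy channel: messages get squeezed into a four-symbol vocabulary and occasionally flipped. The setup is invented for illustration and is not taken from Cross Labs' experiments.

```python
# A minimal illustration of a constricted channel between agents:
# messages are squeezed to a tiny vocabulary and randomly corrupted.
import random

VOCAB = 4  # bandwidth limit: only 4 distinct symbols fit the channel

def send(value, noise=0.1):
    """Sender must quantize what it knows into one coarse symbol."""
    symbol = min(VOCAB - 1, int(value * VOCAB))  # value in [0, 1)
    if random.random() < noise:                  # channel corruption
        symbol = random.randrange(VOCAB)
    return symbol

def receive(symbol):
    """Receiver reconstructs as best it can from the coarse symbol."""
    return (symbol + 0.5) / VOCAB

# The sender cannot dump everything it knows: a real-valued
# observation arrives as one of four symbols, occasionally flipped.
observation = random.random()
estimate = receive(send(observation))
print(f"sent {observation:.3f}, recovered {estimate:.3f}")
```

Whatever protocol the agents evolve has to survive this squeeze, which is the kind of pressure Witkowski describes pushing his agents toward a compact shared language.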
Witkowski evocatively describes dialogue as "constructive misunderstanding": one person says something, the other person misinterprets it somewhat, the first person reacts to the misinterpretation, and so it goes. In my experience, this is how physics conferences usually go. A speaker gives a talk that condenses a large volume of work but is still largely incomprehensible. Over lunch and coffee breaks, people rephrase what they thought they heard. It is like a children's party game of telephone, except that the message doesn't get mangled along the way--it becomes even sharper. In rephrasing the idea, researchers refine it and suggest new avenues. Communication, then, is not just about transferring knowledge but about actively creating it. "Maybe there is a secret value to miscommunication," Witkowski said.