Schrödinger asked why life resists entropy. I argue the minimal condition is not a substance but a way of updating: local decisions that reference their past, preserving identity through change. In lattices, purely random rules dissolve into noise and global-majority rules freeze; only short-memory (non-Markov) updates sustain long-lived, reentrant patterns with a capacity for inheritance. I propose three tests—(i) a lattice comparing random, majority, local, and two-step-memory rules; (ii) a quantum walk with a history-dependent coin; and (iii) a reaction–diffusion system with optical feedback from recent history. Metrics—survival time, temporal mutual information, compressibility, and reseeding after local erasure—make the claim falsifiable. If memory lengthens pattern life, we gain a common denominator; if not, the question sharpens.
“Life seems to be an organized form of complexity that persistently resists entropy. But what, exactly, is that resistance?”
— inspired by Schrödinger
When Erwin Schrödinger published What Is Life? in 1944, he wasn’t offering biologists yet another definition of an organism. He was after something deeper—a principle that separates what endures from what dissolves in the constant whirl of entropy. His thought was simple and, at the same time, revolutionary: life cannot be fully explained by the laws of chemistry and statistics. There must be another order, subtler than molecular mechanics, that lets biological structures maintain continuity despite the relentless pressure of chaos.
Today, eighty years on, Schrödinger’s question is still open. Molecular biology has become a vast edifice: we know genome sequences, we can edit DNA, we can design synthetic cells—yet the basic question remains: why do some systems persist instead of vanishing at the very moment they come into being? Why, out of uncountable possible molecular combinations, do only a few cohere into stable wholes capable of sustaining their own identity?
Classical answers are operational. Definitions of life list traits: metabolism, replication, adaptation, homeostasis. But every such list assumes the form already exists. It is like analyzing a finished poem without asking where the language itself came from. Schrödinger had the courage to ask something prior: what makes any stable form possible in the first place? And how quantum is life? Only to the extent, I will argue, that quantum dynamics permits short-memory update rules that preserve intermediate complexity after perturbation; the experiments below test exactly this.
This question is more radical than it seems. It is not about boundaries between human and machine, or between the biological and the synthetic. It marks a subtler frontier—between what disintegrates immediately and what, if only for a moment, holds together. That thin line decides whether anything at all can exist as a process rather than a flash.
So in this essay I propose a different perspective: to view life not as a set of functions but as a condition of continuity. I am less interested in how something works than in why it exists at all. In other words—what must happen for the first decision to persist?
This shift moves us beyond biology and chemistry. It forces us to ask: what are the minimal rules that let a form continue itself? Can they be stated in the language of physics and information? Can they be tested in simulation or in the lab? And might they be the common denominator of biological life, artificial life, and—perhaps—of the Universe itself?
For decades, textbooks have repeated almost the same catalogue: life is a system with metabolism, the capacity for self-replication, adaptation, and homeostasis. Sometimes the list is longer, sometimes shorter, but it always rests on one assumption: life is what does certain things. The description seems satisfying—until we ask what happens before those things begin to occur.
Consider a single RNA molecule in a primordial ocean. Biochemistry tells us it can catalyze reactions and, in favorable conditions, even replicate itself. Yet until it engages with its environment in a way that allows it to maintain continuity, it is just a contingent arrangement of atoms. In other words, before metabolism or adaptation can appear, there must be a first act of persistence.
A similar limitation appears in astrobiology, where life is often defined as a “self-sustaining chemical system capable of Darwinian evolution.” The definition is useful for laboratory models and mission design. But it too assumes that something already lasts long enough for “self-sustaining” and “evolving” to make sense. It does not answer the question: why did this reaction not extinguish immediately?
We see the same pattern in the boldest artificial-life projects. We build cellular automata and design digital organisms that evolve in simulated environments. We observe impressive phenomena—pattern formation, replication, differentiation. Yet all of this begins by assuming a system that already possesses a mechanism allowing it to persist across iterations. The very condition of non-vanishing is treated as obvious, when in fact it is the essential point.
Definitions of life are therefore secondary. They are practically useful—they let us distinguish an organism from a mineral, a cell from a crystal. But philosophically and physically they are insufficient. They confine the question of life to a catalogue of functions and avoid the question of how persistence can appear at all. This limitation is like looking at a fully grown tree and saying, “a tree is an organism because it has roots, leaves, and photosynthesis.” True—but none of those criteria explains how the seed became a plant in the first place, rather than dying at the very first step.
If we truly want to answer Schrödinger’s question, we have to shift the center of gravity. Not ask about life’s functions, but about its boundary condition: what ensures that a given form does not vanish at once?
Let us, then, look inside Schrödinger’s wave function and imagine a world without particles, atoms, or cells. There is no time, no space, no energy. There is only a process of updating—the possibility of assuming one of several states. Each point in this abstract structure can, at any moment, choose what it will become next.
It is a world of pure decisions. There is no metabolism, no genetic code, no environment imposing constraints. There are only rules telling how one state should follow another.
In the simplest version, picture a network of points, each of which, at every iteration, takes one of three states: A, B, or C. What determines the choice? That depends on the rule.
Consider four variants (a code sketch follows their descriptions):
System A: Randomness.
Each point chooses a state entirely at random at every step. The result? After a few ticks the whole looks like noise. Fleeting patterns appear and vanish. Nothing lasts; nothing leaves a trace.
System B: Local coherence.
Here a point chooses its new state to be consistent with its previous state and with its immediate neighborhood. In other words, each fragment “remembers” where it came from. Suddenly the scene changes. Where there was noise, rhythms and continuities arise; structures travel across the system and persist for many iterations. Something begins to resemble organization.
System C: Global dominance.
Each point chooses whichever state is most common in the entire system. At first this seems promising: dominant patterns stabilize quickly. But soon everything homogenizes and settles into a single, motionless state. We get durability, but without life—static and incapable of development.
System D: Harmonious memory.
Now each point refers not only to the previous step but also to the one before. Introducing a two-step memory makes structures more intricate. Cycles appear, repetitions, rhythms like a primordial breath. Some forms endure, some decay—but many begin to continue themselves in striking ways.
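To make the four variants concrete, here is a minimal simulation sketch (Python with NumPy). The lattice size, the noise level, and the exact way Systems B and D weigh self, neighborhood, and two-step agreement are illustrative choices of mine, not requirements of the argument; any implementation faithful to the four rules would serve.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, SIZE, STEPS = 3, 64, 200      # states A/B/C encoded as 0/1/2

def neighbor_mode(grid):
    """Most common state among the four von Neumann neighbors of each cell."""
    nbrs = np.stack([np.roll(grid, s, axis=a) for a in (0, 1) for s in (-1, 1)])
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=N_STATES).argmax(), 0, nbrs)

def step_random(prev, older):
    # System A: every cell redrawn uniformly at random; no trace survives.
    return rng.integers(N_STATES, size=prev.shape)

def step_global(prev, older):
    # System C: every cell copies the globally dominant state; rapid freezing.
    return np.full(prev.shape, np.bincount(prev.ravel(), minlength=N_STATES).argmax())

def step_local(prev, older, p_noise=0.05):
    # System B: new state consistent with the cell's own previous state and the
    # local majority, plus a little noise.
    keep_self = rng.random(prev.shape) < 0.5
    out = np.where(keep_self, prev, neighbor_mode(prev))
    noise = rng.random(prev.shape) < p_noise
    return np.where(noise, rng.integers(N_STATES, size=prev.shape), out)

def step_memory(prev, older, p_noise=0.05):
    # System D (one reading): cells whose state agrees with two steps ago keep it;
    # the rest follow the local rule. A two-step, non-Markov memory.
    return np.where(prev == older, prev, step_local(prev, older, p_noise))

def run(rule, steps=STEPS):
    older = rng.integers(N_STATES, size=(SIZE, SIZE))
    prev = rng.integers(N_STATES, size=(SIZE, SIZE))
    history = [older, prev]
    for _ in range(steps):
        older, prev = prev, rule(prev, older)
        history.append(prev)
    return history
```

Running run(step_memory) next to run(step_random) and run(step_global), and watching how long connected domains survive, reproduces the contrast described here in miniature.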
What does this thought experiment show? That persistence does not arise in a world of pure randomness. Nor is it born of dominance alone, which leads to stagnation. It appears only where local decisions take their own past into account—where a minimal principle of coherence is in force.
We are not yet speaking of biological life. We are speaking of a boundary condition: a logic that turns a flash into a trace. In that abstract space the first possibility of life emerges—the capacity of a form to remain itself despite changing conditions.
In the thought experiment we saw something simple: forms built on sheer chance die out at once, and those built on global dominance freeze in place. Endurance appears only where a local choice refers to its own history. It is a surprisingly simple rule: to exist, one must remember oneself.
In this sense, life can be described as a logic of continuity. Not as a list of biological traits, but as a rule by which each update does not sever the tie with what came before. Life does not begin with metabolism or with a genetic code. It begins with the first act that says: remain yourself through change.
Moreover, that continuity is not stagnation. In the memory-based system we did not get a dead, uniform field. We saw rhythms, cycles, spiral trajectories—forms that developed rather than merely persisted in stillness. Persistence, then, is not the opposite of change. On the contrary: it is the skill of changing while preserving identity.
This leads to the next step: inheritance. If a form not only endures but can also pass on the rule of its endurance—by replication or by rhythm—then continuity becomes independent of a single instance. Here, evolution is born, though not yet biological: evolution as the selection of rules that can maintain and reproduce themselves.
One might say that life begins where the capacity for repeatable continuity appears—where a form not only avoids extinction but generates further forms capable of the same act of non-vanishing. That is the minimal criterion of life: earlier than metabolism, earlier than DNA, earlier even than matter’s particles.
From this perspective, life is not something one simply has or does not have by definition. It is a trajectory—a mode of updating that can keep on continuing itself. And it is precisely this trajectory that can link disparate levels of reality: from the simplest informational systems, through biological organisms, to cultural processes and consciousness.
Life as a logic of continuity reframes the fundamental question. Not “Is this alive?” but “Can this persist while respecting its own past?” The answer marks the boundary between what dissolves into chaos and what opens the possibility of existence.
When Schrödinger asked “What is life?”, he noted that biological structures seem to counter the second law of thermodynamics. Organisms maintain order in a world that tends toward disorder. His intuition went deeper than mechanisms; it suggested an underlying principle of selecting for continuity.
In the language of contemporary physics, the natural place to seek it is quantum theory, where possibilities must somehow become facts. The wave function charts a spread of options; collapse selects one and discards the rest. At first glance, this appears purely random—one throw of Nature’s dice. And yet, must it be so? Is every collapse equivalent? Do all updates have the same status? Perhaps not. Perhaps some statistical choices are one-off events, while others carry something more—the possibility of continuation. If so, life would not be one of the outcomes of collapse, but a particular form of collapse that endures because it respects its own history.
Put differently: the wave function models the range of potentialities, but it says nothing about what happens within that range before selection. It assumes statistics but does not examine the structure of the updating act itself. Precisely there—in the hidden interval between possibility and fact—may lie Schrödinger’s sought-for principle: the one that separates a mere event from a continued one. Schrödinger’s “negentropy” becomes concrete here: short-memory updates reduce predictive entropy with respect to a system’s own trajectory and raise temporal mutual information across steps.
Here the logic of the simple thought experiment meets physics. What appeared as patterns, rhythms, and cycles in the A/B/C world could, in the quantum world, show up as a difference between collapses that vanish at once and those that yield persistence. In other words: life is not a substance or a chemical process. It is a form of updating.
This viewpoint does not compete with biology or chemistry; it operates on a different level. Biology asks how organisms function; chemistry asks how molecules react. Schrödinger’s question was earlier: what allows any form to persist at all? That question opens a door for physics to describe life not as a catalogue of traits but as a mechanism of continuity inscribed in the logic of reality itself.
For a hypothesis to matter, it must be testable. If life is continuity forged by decisions that reference their own past, we should see predictable differences in how systems behave. I therefore propose three experiments—two primary and one auxiliary—that probe the same intuition across distinct domains.
The first is computational. Imagine a lattice in which each node takes one of several states at each step. We can define different update rules: purely random, global-majority, locally coherent, or ones that include a two-step history. The metrics are clear: the lifetime of emerging structures; temporal mutual information between successive steps; the compressibility of spatial “frames” over time, as a proxy for intermediate complexity (neither incompressible noise nor trivial uniformity, but signals that remain moderately compressible and thus structured); and the ability of a rule to reseed itself into fresh domains. The prediction is unambiguous: in purely random or globally homogenizing systems, persistence does not arise. Structures endure only when local decisions refer to their own past. It is a simple, falsifiable test: if memory does not improve survival and does not enhance reseeding of the rule, the hypothesis fails. In plain terms: does a short-memory rule create patterns that re-form after we erase a patch?
(By “non-Markov” I mean that the choice of the next state depends not only on the current state but also on at least one earlier state, one or two steps back, so the rule carries a short history of its own.)
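One possible operationalization of these metrics, continuing the lattice sketch above. The overlap threshold, the erased-patch size, and the use of zlib as a compressibility proxy are illustrative assumptions, not fixed by the proposal.

```python
import zlib
import numpy as np

def temporal_mutual_information(x, y, n_states=3):
    """I(X_t; X_{t+lag}) in bits, estimated from the joint histogram of two frames."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=n_states,
                                 range=[[-0.5, n_states - 0.5]] * 2)
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def compressibility(frame):
    """Compressed size / raw size: near 1 for noise, near 0 for a uniform field."""
    raw = frame.astype(np.uint8).tobytes()
    return len(zlib.compress(raw)) / len(raw)

def survival_time(history, threshold=0.8):
    """First step at which frame-to-frame overlap drops below the threshold."""
    for t in range(1, len(history)):
        if np.mean(history[t] == history[t - 1]) < threshold:
            return t
    return len(history)

def reseeding_score(rule, history, patch=16, settle=50):
    """Erase (randomize) a corner patch and measure how much of it is restored."""
    rng = np.random.default_rng(1)
    older, prev = history[-2].copy(), history[-1].copy()
    target = prev[:patch, :patch].copy()
    prev[:patch, :patch] = rng.integers(3, size=(patch, patch))
    for _ in range(settle):
        older, prev = prev, rule(prev, older)
    return float(np.mean(prev[:patch, :patch] == target))
```

The prediction then reads: survival_time and reseeding_score should rise, and temporal_mutual_information at lags of two or more steps should be markedly higher, for the memory rule than for the random or global ones.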
The second experiment carries this logic into quantum physics. Quantum walks are a well-known tool: a particle moves on a lattice, and the step direction is set by the state of a “coin.” In standard implementations the decisions are memoryless, but one can design a variant in which the coin operator takes previous steps into account. Then the questions are clear: after many iterations, does the particle’s spatial distribution become more ordered; do characteristic interference features persist longer; does temporal mutual information between successive distributions increase? Those are the metrics. The prediction is equally clear: adding memory should enhance the durability of patterns and extend the lifetime of interference structures. The falsification criterion is just as crisp: if the memoryful walk is indistinguishable from the standard one, the hypothesis loses support. In plain terms: does a history-dependent coin keep the distribution’s “posture” recognizably longer than a standard coin?
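The essay leaves open how the coin should remember. One simple reading, sketched below, modulates the coin angle by the recent change in the walker's spread, a two-step feedback that can be compared directly against the standard memoryless walk; a fully coherent construction with an extra memory register is another option. The parameters and the tanh form of the feedback are illustrative.

```python
import numpy as np

STEPS = 100
N = 2 * STEPS + 3                                  # wide enough that the walker never hits the edge

def coin(theta):
    """Unitary coin; theta = pi/4 recovers the usual Hadamard coin."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def walk(history_dependent=False, theta0=np.pi / 4, gain=0.5):
    psi = np.zeros((N, 2), dtype=complex)          # psi[x, c]: c = 0 moves left, 1 moves right
    psi[N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric initial coin state
    positions = np.arange(N)
    spreads = []                                   # short history of the distribution's spread
    for _ in range(STEPS):
        theta = theta0
        if history_dependent and len(spreads) >= 2:
            # history-dependent coin: angle nudged by the change in spread over the
            # last two steps (a two-step, non-Markov feedback on the coin operator)
            theta = theta0 + gain * np.tanh(spreads[-1] - spreads[-2])
        psi = psi @ coin(theta).T                  # apply the coin at every site
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]               # left-movers step left
        shifted[1:, 1] = psi[:-1, 1]               # right-movers step right
        psi = shifted
        prob = (np.abs(psi) ** 2).sum(axis=1)
        mean = (prob * positions).sum() / prob.sum()
        spreads.append(np.sqrt((prob * (positions - mean) ** 2).sum() / prob.sum()))
    return (np.abs(psi) ** 2).sum(axis=1), spreads
```

Comparing walk(history_dependent=False) with walk(history_dependent=True), one can ask exactly the questions above: how long interference features persist and how much successive distributions inform one another.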
A third variant—more traditional but visually striking—is a reaction–diffusion system with optical feedback. In Belousov–Zhabotinsky–type reactions, waves and patterns spontaneously appear and usually fade quickly. If, however, the system is coupled to a camera and projector so that the stimulus depends not only on the current state but also on a short history of recent frames, we can run a test analogous to the previous ones. The metrics are the same: lifetime of structures, their repeatability, the ability to reappear in fresh domains. The prediction: memory-based rules should increase durability and regularity. The falsification: no measurable difference between a Markov setup and one with memory. In plain terms: does feedback from recent frames extend pattern lifetime and ease re-emergence after interruption?
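A bench experiment is beyond a sketch, but a numerical analogue conveys the design: a Gray–Scott reaction–diffusion model whose feed term is weakly modulated by the pattern as it stood a short number of frames earlier, standing in for the camera–projector loop. The model choice, the parameter values, and the form of the feedback are my substitutions for the light-sensitive BZ setup proposed here.

```python
import numpy as np
from collections import deque

SIZE, STEPS, LAG = 128, 2000, 20
Du, Dv, F, K, GAIN = 0.16, 0.08, 0.035, 0.060, 0.01

def laplacian(z):
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def run(with_memory=True, seed=0):
    rng = np.random.default_rng(seed)
    u = np.ones((SIZE, SIZE)) + 0.02 * rng.standard_normal((SIZE, SIZE))
    v = np.zeros((SIZE, SIZE))
    v[SIZE//2-5:SIZE//2+5, SIZE//2-5:SIZE//2+5] = 0.5   # small initial spot
    history = deque(maxlen=LAG)                         # short memory of v-frames
    frames = []
    for _ in range(STEPS):
        feed = F
        if with_memory and len(history) == LAG:
            # "optical feedback": only the feed term is nudged toward where the
            # pattern was LAG frames ago
            feed = F + GAIN * (history[0] - history[0].mean())
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + feed * (1 - u)
        v += Dv * laplacian(v) + uvv - (K + F) * v
        history.append(v.copy())
        frames.append(v.copy())
    return frames
```

The same survival, compressibility, and reseeding metrics apply directly to the stored v-frames, run once with and once without the memory term.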
The three proposals form a coherent set: the computational model reveals minimal conditions for persistence, the quantum variant takes the principle to the foundations of physics, and the reaction–diffusion system shows that a memory rule can be observed directly in the lab. In each case the criterion is the same: does a form last longer when it takes its own past into account? If it does, we gain empirical support for Schrödinger’s intuition that life begins not with chemistry but with a principle of continuity.
If the core of life is maintained continuity—not static endurance but the ability to return to itself through change—then boundaries and objections matter as much as the thesis. The most serious is this: have I simply rebranded ordinary temporal correlation as “life”? After all, any system with memory “remembers” something. The answer lies in rigor: the memory I seek is not a trace in a recorder but an update rule that preserves structure in the face of fluctuations. Put otherwise: I am not interested in the fact that the state at time t correlates with the state at t–1; I am interested in whether the rule that chooses the next state contains a condition of consistency with the system's own trajectory. Memory so understood is not an ornament; it is an anti-vanishing mechanism.
This is not wordplay. In pure diffusion, temporal correlation is a by-product of inertia; in the systems I am describing, correlation is chosen. One can crank up external disturbance by changing environment, parameters, or geometry, and the system—if it satisfies the continuity condition—will still return to a recognizable pattern. Survival here is not the result of favorable conditions; it is the consequence of how the system computes its own future.
A second objection concerns thermodynamics. Schrödinger wrote of “negentropy”: life “feeds on” order. Does my proposal ignore that fundamental accounting? On the contrary—it grounds it more deeply. Instead of listing what the system exchanges with its environment, I ask how it decides on exchanges that keep it recognizable. A system that minimizes uncertainty about its own trajectory within a short memory prefers transitions that increase predictability for itself. In familiar physical terms, it reduces predictive entropy relative to its own history, not the world’s global entropy. This matches intuition: a cell does not “defeat” the second law; rather, it chooses local actions that push disorder outward from the mechanism of its continuity.
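To make the accounting explicit in symbols (the notation is mine; X_t stands for the system's own coarse-grained state at step t): conditioning on a short history can only tighten prediction,

$$
H\!\left(X_{t+1}\mid X_t, X_{t-1}\right)\;\le\;H\!\left(X_{t+1}\mid X_t\right),
$$

and the inequality is strict exactly when the earlier state genuinely informs the next step. A memory-bearing rule keeps the left-hand side small along its own trajectory, while the entropy of system plus environment remains free to grow.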
A third objection goes to the heart of the approach: am I smuggling in a new ontology under the cover of elegant words? No need. Everything essential can be stated in contemporary physics: amplitudes, phases, distributions, operators, feedback, non-Markovian processes. In that idiom, the “life-giving” class of rules is simply the one that introduces short-term memory into the update mechanism so as to prefer changes consistent with the existing pattern. At a formal minimum: such rules increase mutual information between states separated by more than one step while maintaining intermediate complexity (neither noise nor dead homogenization).
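Stated as a formal minimum in the same notation (with I the mutual information of Cover and Thomas, and C a normalized compressibility proxy of my choosing, such as compressed length over raw length):

$$
I\!\left(X_t;\,X_{t-k}\right) > 0 \quad\text{for some } k \ge 2,
\qquad
0 < C(X_t) < 1,
$$

where C near one marks incompressible noise and C near zero marks dead homogenization, so the second condition picks out the intermediate band.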
At this point the boundary between a “model of life” and a “model of quantum behavior” begins to blur—and that is the point. Schrödinger asked how it is possible that something does not fall apart at once. Today we can indicate where an answer may live: in the structure of phases and feedbacks that constitute not archival memory but decisional memory. In the simplest variant, it is a choice rule in which the update at time t depends not only on the current input but also on a short history of its own states. In a more ambitious one, it is an operator whose parameters are modulated by an earlier interference imprint—which, in practice, means a quantum coin with memory or a reaction–diffusion system with optical feedback from history. Not a single new entity—only a shift of emphasis.
Where would this principle take us beyond the laboratory? First, to astrobiology. If life is a way of maintaining continuity, we should not look for its signs in sheer complexity but in the time-profile of complexity: in patterns that return to themselves after being perturbed. In data from distant atmospheres, protoplanetary disks, or moon surfaces, more promising than an “exotic composition” may be a dynamic signature: moving-window compressibility of the signal, jumps in mutual information between temporally separated states, characteristic recovery times after disturbance. One does not need biochemistry to see the logic of persistence.
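As an illustration of what such a dynamic signature could look like on a single observational time series (the quantization depth, window length, and zlib proxy are illustrative choices, not a proposed mission pipeline):

```python
import zlib
import numpy as np

def window_compressibility(signal, window=256, step=64, levels=16):
    """Compressed/raw size per moving window; mid-range values suggest structured signals."""
    edges = np.quantile(signal, np.linspace(0, 1, levels + 1)[1:-1])
    q = np.digitize(signal, edges)
    out = []
    for start in range(0, len(q) - window, step):
        raw = q[start:start + window].astype(np.uint8).tobytes()
        out.append(len(zlib.compress(raw)) / len(raw))
    return np.array(out)

def lagged_mutual_information(signal, lag, levels=16):
    """Mutual information between the signal and itself `lag` samples later."""
    edges = np.quantile(signal, np.linspace(0, 1, levels + 1)[1:-1])
    q = np.digitize(signal, edges)
    joint, _, _ = np.histogram2d(q[:-lag], q[lag:], bins=levels,
                                 range=[[-0.5, levels - 0.5]] * 2)
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())
```

A source with the logic of persistence should sit in the intermediate compressibility band, hold nonzero lagged mutual information, and return to that band after a disturbance.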
A second direction leads to engineering. Systems that “remember their shape” can be designed not by stacking layers of control, but by tuning the update rule: coupling to a short history of state is often cheaper and more robust than a precise PID loop. A material that “knows” how to return to its own conductivity pattern; a network that can rebuild its topology after node loss; an algorithm that not only learns distributions but organizes its motion so as to preserve task identity amid drifting data—these are applications where “life” names a class of dynamics.
The third direction concerns physics itself. If, in quantum systems, short-term memory in an operator can be shown to produce distributions that hold intermediate complexity (neither diffusion nor localization) while resisting external perturbations, then we will have more than a metaphor. We will have a signature that a memory rule organizes interference so that the “shapes” of the distribution can survive. It is a plain description of how a system prefers transitions that keep it recognizable; one might be tempted to call that a minimal intentionality. If the word “intentionality” is distracting, read it as directed decisional memory: a rule that prefers transitions preserving recognizability.
Finally, the matter of definition. Am I proposing to call anything with nonzero mutual information between time t and t–2 alive? No. I propose distinguishing two levels of the question. The biological level stays intact: metabolism, replication, adaptation—the splendid machinery of traits. Deeper lies the condition that makes anything describable in time in the first place: a local rule that selects the continuation of its own pattern. This is not a competing definition of life; it is a pre-definitional criterion, a filter any system must pass before we ascribe living properties to it.
Such a framing relieves us of the pressure to decide, at the first step, whether something is “really alive.” We first ask whether it can avoid vanishing—and how it does so. If it does so by a memory rule that maintains intermediate complexity and can replicate in state-space, we have a candidate. If the rule transfers while preserving identity under perturbations, we enter the realm of inheritance. If the system begins to minimize its own predictive entropy on longer horizons by forming models of itself, we may be touching the seed of what we call cognition. Everything else—biochemistry, ecology, culture—is storeys built above a foundation that needs no new entities; it asks only for the courage to look at old equations through the lens of decision.
Schrödinger did not know the word “non-Markov.” He wrote about aperiodic crystals, about negentropy, and about life slipping the net of pure statistics. Today we can take a step he could not: we can show that in modest, measurable quantities of memory and complexity lies an answer to his question. “Resistance to entropy” is not a mysterious gift; it is the consequence of local updates that take into account who the system has been when deciding who it will be. Continuity is not a whim of matter; it is a form of motion.
So if we are to search for life where we do not yet see it, we should begin with the lowest bar: does there appear a rule that chooses the continuation of its own pattern? If yes, the rest becomes possible. If not, even the cleverest machinery will fade like brightness in scattered dust.
This perspective does not solve every puzzle. It does, however, tidy the field of play: on one side, random diffusion that forgets; on the other, global dominance that stalls; in between, a path that returns to itself in order to go on. Somewhere along that path life begins—and, more importantly, so does the very sense of asking why anything endures. Here physics and biology cease to be separate islands; they are joined by a bridge of memory, decision, and time—time not given, but invented by a rule that learned not to disappear.
Either short-memory rules measurably lengthen the life of patterns, or they do not—and in either case we learn exactly how quantum life is.
Let me close where we began: with Schrödinger, who sensed an aperiodic order. In this light, life is not a substance but a way of passing from possibility to fact—an update that does not sever its thread to the past. The proposed trials, from decision lattices to quantum walks, will test whether short memory truly prolongs the life of patterns; if so, they point to a common denominator of vitality; if not, they sharpen the question. The rest will be written by experiment—and if the data ever sing a recognizable return, we may say: life begins where reality learns not to vanish, like a motif that remembers its own rhythm.
Schrödinger, E. What Is Life? Cambridge University Press, 1944.
Cover, T. M., Thomas, J. A. Elements of Information Theory, 2nd ed. Wiley, 2006.
Breuer, H.-P., Laine, E.-M., Piilo, J. “Non-Markovian dynamics in open quantum systems.” Rev. Mod. Phys. 88 (2016): 021002.
Kempe, J. “Quantum random walks: an introductory overview.” Contemp. Phys. 44 (2003): 307–327.
Venegas-Andraca, S. E. “Quantum walks: a comprehensive review.” Quantum Inf. Process. 11 (2012): 1015–1106.
Cross, M. C., Hohenberg, P. C. “Pattern formation outside of equilibrium.” Rev. Mod. Phys. 65 (1993): 851–1112.