This essay proposes a novel mechanism for quantum processing in the brain, suggesting that neurotubules could provide a shielded environment in which quantum states, carried by proton tunneling via the Grotthuss mechanism, can survive. We hypothesize that the classical chaos observed in neural networks, crucial for memory storage and retrieval, is a macroscopic manifestation of underlying quantum chaos. Consequently, we propose using the relative entropy of coherence, a measure built on the von Neumann entropy, as an indirect gauge of this quantum activity. Our theory suggests that neural networks operating near the "edge of chaos" would exhibit a higher relative entropy of coherence and greater resilience to noise. We propose a path for experimental investigation by combining quantum-enhanced artificial neural networks with comparative psychology methods to identify testable signatures of quantum effects in the brain.
Quantum physics is essential for the stability of atoms and molecules, including the macromolecules vital for life. But does it play a more direct role in the unique characteristics of living matter? The answer is yes.
Take human vision, for example. It relies on quantized energy to convert light into electrical signals through a process called phototransduction. This begins when a photon interacts with a chromophore, a light-absorbing chemical, within a photoreceptor. The chromophore absorbs the photon and rapidly changes shape in a process called photoisomerization, which is described using quantum mechanics.
This change in shape then initiates signal transduction pathways that lead to a visual signal. The key molecule in this process is rhodopsin, found in the retina's rod cells. When a photon strikes rhodopsin, the absorbed energy causes an extremely fast (femtosecond timescale) and efficient (around 70% quantum efficiency) change in the retinal molecule's configuration, ultimately enabling us to see [1].
The human eye offers a prime example of how quantum mechanics underpins a characteristic biological function (vision), enabling its remarkable speed and efficiency. While the quantum nature of vision is well-established, new questions arise when we follow the visual signal beyond the eye, into the brain's visual cortex: how does the brain store and retrieve information, and could quantum mechanics be involved?
Addressing these complex questions requires answering several fundamental ones: How can we quantify the correlations between quantum features, complexity, and entropy in living systems? How can we better define the complexity of biological systems? And what novel tools can we develop to explore these correlations effectively?
The answers to these questions will not only illuminate the potential role of quantum mechanics in how the brain stores information but also provide tools to investigate other potential quantum aspects of brain function. This essay aims to propose a candidate quantum mechanism for brain information storage, show how mathematical chaos can act as a proxy for quantum activity in complex systems, and argue that neuroscience and comparative psychology methods can reveal potential quantifiable indicators of quantum activity within the brain.
To begin answering these questions, we must understand what is happening from the point of view of contemporary neuroscience. As is currently understood, information within the human brain is encoded in complex neural networks, distributed networks of neurons that strengthen connections with repeated exposure [2]. The brain stores this information by altering the strength of these connections within a given network. This is known as the Hebbian model of neural networks and has been the foundation for understanding synaptic plasticity, one of the mechanisms behind learning and memory formation [2].
For example, after an image is processed in the visual cortex, the brain sends the information to the posterior parietal cortex and the inferior temporal cortex. The inferior temporal cortex then encodes the image with spatial information and object representations [3].
This processed information is then sent to the prefrontal cortex, which uses a type of neural network called a recurrent neural network to sustain the neural activity representing the image. A recurrent neural network is different from other neural networks because it uses recurrent connections, or loops, to feed the output of a previous moment back into the network as an input [4]. This allows the network to have an internal state, which is necessary to store the image's information. The prefrontal cortex stores this information using a specific kind of recurrent neural network called a Hopfield network.
A Hopfield network is a recurrent neural network where all neurons are interconnected, excluding self-connections [5]. These networks operate on a feedback principle: they take an input pattern and iteratively adjust neuron states until they settle into a stable state, which corresponds to a stored pattern. This process is governed by an energy function that the network minimizes, enabling Hopfield networks to function as content-addressable memory [5]. This means they can retrieve a complete pattern from a partial or noisy input based on its content rather than its location.
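To make this concrete, here is a minimal Hopfield network sketch in Python (all names, sizes, and parameters are illustrative, not drawn from the cited sources): patterns are stored with a Hebbian outer-product rule, and a corrupted cue is updated asynchronously until the network settles into the nearest stored attractor, with the energy function decreasing along the way.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: strengthen weights between co-active neurons."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                     # each p is a vector of +/-1 states
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)                 # no self-connections
    return W / len(patterns)

def energy(W, s):
    """The energy function the network minimizes; memories are its minima."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=10):
    """Asynchronously update neurons until the state settles into an attractor."""
    s = s.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(3, 64))       # three 64-neuron patterns
W = train_hopfield(memories)

cue = memories[0].copy()
cue[rng.choice(64, size=10, replace=False)] *= -1  # corrupt 10 of 64 bits
out = recall(W, cue)
print("energy: cue", energy(W, cue), "-> settled", energy(W, out))
print("pattern recovered:", np.array_equal(out, memories[0]))
```

This is exactly the content-addressable behavior described above: the network retrieves the full pattern from a noisy fragment because the stored pattern is the nearest energy minimum.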
When an image pattern in the prefrontal cortex is deemed significant (i.e., it sufficiently activates other neural networks), it undergoes consolidation into long-term memory. This involves a shift in network activity: the pattern initially fades from the prefrontal cortex and reemerges in the hippocampus for initial memory formation. Over time, as consolidation progresses, activity in the hippocampus diminishes, and the pattern reappears in relevant parts of the neocortex, where it can later be retrieved [5].
Given the complexity of everything we have seen happening in the brain up to this point, it is logical to ask, “How did we ever figure this all out to begin with?” Much of our understanding comes from observing "spike code," the patterns of electrical impulses that neurons use to communicate and encode information [6]. When it comes to forming memories, neurons exhibit spike-frequency adaptation: a neuron's firing rate initially peaks with a new stimulus but then gradually slows to a more stable pace. This self-regulation at the single-neuron level is crucial for memory formation. The exact mathematics behind modeling this phenomenon is beyond the scope of this essay; interested readers should see Spike Frequency Adaptation in Neurons of the Central Nervous System [6] for further details.
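As a toy caricature only (the real biophysical models are in [6]; the rates and time constant below are invented for illustration), spike-frequency adaptation amounts to a firing rate that peaks at stimulus onset and relaxes exponentially toward a steady adapted rate:

```python
import numpy as np

def adapted_rate(t, r_peak=100.0, r_steady=20.0, tau=0.2):
    """Firing rate (Hz): peaks at stimulus onset, then decays
    exponentially toward a stable adapted rate with time constant tau (s).
    All parameter values are illustrative placeholders."""
    return r_steady + (r_peak - r_steady) * np.exp(-t / tau)

for t in [0.0, 0.1, 0.3, 1.0]:
    print(f"t = {t:.1f} s: {adapted_rate(t):5.1f} Hz")
```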
While spike-frequency adaptation provides a basic understanding of short- and long-term memory at the individual neuron level, the brain's true computational power comes from neural networks. As Charles Sherrington eloquently put it, the brain acts like an "enchanted loom" where countless neurons interact to create dynamic, meaningful, yet constantly shifting patterns. This collective activity leads to phenomena that are far more complex than anything seen in a single neuron.
When we look at neural networks together, we see that the brain's capacity for processing information, learning, and adapting relies heavily on complex nonlinear dynamics [7]. This means that things like neuron firing, the strength of their connections (synapses), and the feedback loops within networks (especially those for memory) don't follow simple, predictable rules. Instead, their relationships are non-linear, allowing for incredibly rich and varied activity. Because of these complex interactions, neurons in a network don't just give simple responses; they exhibit intricate patterns sensitive to even tiny changes in input [7].
This is crucial because a "memory" isn't stored in just one neuron. Instead, it emerges from the collective activity and dynamic states of an entire network of neurons. This kind of emergent behavior is a fundamental characteristic of systems governed by nonlinear dynamics [7]. Another key feature is the presence of attractors (stable states that a system tends to settle into). From a mathematical perspective, memories can be thought of as these activity patterns within the network that firing neurons converge toward over time. This high level of complexity in neural networks, particularly those involved in memory, also means they can exhibit mathematical chaos [8].
Here, chaos does not simply mean randomness or disorder but rather refers to how deterministic systems that are extremely sensitive to their initial conditions tend to become less predictable over time, to the point of apparent randomness [8]. This sensitivity allows the brain to distinguish between similar stimuli. For example, the feeling of moving fast could trigger different memories (like finishing a race or running late for an interview) based on subtle sensory differences [5]. These chaotic dynamics also help neural networks explore many possible states, allowing them to find optimal solutions, form new memories, and avoid getting stuck in repetitive patterns [5]. This also helps decorrelate existing memories, making it easier to encode new information without interference, and improves the brain's ability to detect new situations [5].
While short-term memory involves temporary chaotic states, the process of forming long-term memories involves adjusting these chaotic dynamics to create more stable yet still flexible memories. The brain's ability to shift between chaotic and ordered states is crucial, allowing it to operate at the "edge of chaos” [8]. This is an optimal state for complex systems, balancing predictability and unpredictability. If a neural network is too ordered, it struggles to learn and adapt. If it's too chaotic, it loses the ability to form stable memories, and information can be lost [8].
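The hallmark of chaos, extreme sensitivity to initial conditions, is visible in even the simplest nonlinear system. The sketch below uses the textbook logistic map (our own illustrative choice, not a model from the cited works): two trajectories that start almost identically stay together in the ordered regime but diverge completely in the chaotic one.

```python
import numpy as np

def logistic_trajectory(x0, r, steps=40):
    """Iterate the logistic map x -> r*x*(1-x), a classic chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

for r, regime in [(2.9, "ordered"), (3.9, "chaotic")]:
    a = logistic_trajectory(0.400000, r)
    b = logistic_trajectory(0.400001, r)   # perturbed by one part in a million
    print(f"r = {r} ({regime}): final separation = {abs(a[-1] - b[-1]):.4f}")
```

In the ordered regime both runs converge to the same value; in the chaotic regime the microscopic difference in starting points grows until the two trajectories are unrelated, which is the deterministic "apparent randomness" described above.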
Now that we have seen how contemporary neuroscience describes long-term and short-term memory, we are left with the question, “How does quantum mechanics figure into all of this?” The short answer is that we don't know. Indeed, there is good reason to be skeptical that it plays a role at all. Delicate quantum states are typically maintained in isolated systems near absolute zero. The brain, however, is warm (310 K) and wet, filled with constant, chaotic activity [8]. This environment causes decoherence, where quantum systems lose their quantum properties through interactions with their surroundings.
Calculations show that a quantum state in the brain would decohere incredibly quickly (on the order of 10^-13 to 10^-20 seconds), far faster than the millisecond timescales on which neural activity occurs [8]. And since we already have a well-established set of laws governing the behavior of neural networks (nonlinear dynamics), applying Occam's razor (the hypothesis that makes the fewest assumptions should be preferred) suggests it is highly doubtful that quantum mechanics plays a role in either short-term or long-term memory.
So, what leads us to believe that quantum mechanics plays a role in the brain at all? We can look at other biological systems for clues. The extreme sensitivity of the eye was one reason researchers started wondering if quantum mechanics influenced vision [1]. Another example is photosynthesis in plants, which is remarkably efficient: nearly every photon absorbed by a leaf's light-harvesting machinery is successfully funneled toward energy production.
The human brain is similarly impressive. It runs on a mere 20 watts of power [9], while a large language model on a single high-end graphics card can consume between 250 and 400 watts [10]. Beyond its low power consumption, the brain can process and retrieve huge amounts of information almost instantly, seemingly faster than standard neural signals should allow [8]. While some of this speed comes from the brain's parallel processing, its overall processing speed still seems to defy explanation. This is where quantum computing offers a compelling model. Quantum computers use principles like superposition and entanglement to perform calculations in parallel, potentially explaining the brain's extraordinary computational power.
Although the brain's remarkable efficiency and speed hint at quantum phenomena, nonlinear dynamics can also explain these capabilities, and it is far better established. The observant reader will be quick to point out that we still have not resolved the problem of decoherence; however, neurotubules, tiny hollow structures within brain cells, could offer a solution. Neurotubules might form a lattice whose vibrational modes (phonons) create a protective shield, preserving quantum information [11] and allowing protons to synchronize with these vibrations for greater stability [8,11]. However, it's important to note that this is only a theoretical possibility, and there is currently no definitive proof that neurotubules function in this manner [11].
Suppose, however, that we suspend our skepticism for a moment and assume this quantum shield exists. One way the brain might then utilize quantum mechanics is through the Grotthuss mechanism, which describes how protons "jump" from one water molecule to another, forming a "proton wire," assisted by quantum tunneling [8]. Quantum tunneling allows a proton to pass through an energy barrier even when it lacks the classical energy to climb over it.
Because a proton's quantum state can be a superposition of different location states, its position can act as a qubit: a proton on molecule A could represent the 0 state, and a proton on molecule B the 1 state. Enzymes like triosephosphate isomerase, which facilitate proton movement between specific molecules, could theoretically control these quantum operations, performing functions similar to classical logic gates [8,11].
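Schematically (this illustrates only the encoding, not a validated model of any enzyme), the two sites become qubit basis states, and an enzyme-mediated transfer acts like a single-qubit gate:

```python
import numpy as np

# Qubit basis: |0> = proton on molecule A, |1> = proton on molecule B.
proton_on_A = np.array([1.0, 0.0], dtype=complex)

# A schematic "enzyme-mediated transfer" modeled as a single-qubit
# unitary (a Hadamard-like gate) that puts the proton into an equal
# superposition of the two sites.
transfer_gate = np.array([[1, 1],
                          [1, -1]], dtype=complex) / np.sqrt(2)

state = transfer_gate @ proton_on_A
probs = np.abs(state) ** 2
print(f"P(on A) = {probs[0]:.2f}, P(on B) = {probs[1]:.2f}")   # 0.50 / 0.50
```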
Thus far, we have examined known quantum subsystems in biology and the current neuroscientific understanding of the brain; we've also examined some possible indicators of quantum activity within the brain and proposed a hypothetical quantum mechanism. However, it’s crucial to remember that our hypothetical idea is just that, hypothetical. Our next step is to figure out how to experimentally investigate quantum phenomena in the brain, especially since their presence isn't yet confirmed.
The answer to this question comes in the form of artificial neural networks. Neuroscientists use these networks to model brain function, training them on tasks and then comparing the model's activity to real brain data [12]. We can adapt this method by creating "quantum-enhanced" neural network models that incorporate simple quantum circuits (for example, using the Qiskit API). By training these models on tasks the brain performs, such as pattern recognition, and observing their behavior [13], we can identify what markers of quantum activity to look for in the human brain.
This approach lets us test hypothetical quantum mechanisms. However, a significant challenge remains: How can we directly measure the brain's quantum features, and more broadly, quantify the correlations between quantum characteristics? To resolve these challenges, we must first understand what we're measuring: quantum coherence. We can do this using a quantity called the relative entropy of coherence, which measures how different a quantum state is from one with no coherence [14]. Simply put, it is the difference between the von Neumann entropy of the state's dephased, incoherent version (its diagonal part) and that of the full state [14]. Applying this to our earlier example of proton transport through water channels, we can begin to define our specific quantum system.
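Before building that model, it helps to see the measure itself. In symbols, C(rho) = S(rho_dephased) - S(rho), where S is the von Neumann entropy and rho_dephased is the state with its off-diagonal elements removed [14]. A minimal sketch in Python (function names are our own):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def relative_entropy_of_coherence(rho):
    """C(rho) = S(rho_dephased) - S(rho): the entropy gained by deleting
    the off-diagonal (coherent) elements in the chosen basis."""
    rho_dephased = np.diag(np.diag(rho))
    return von_neumann_entropy(rho_dephased) - von_neumann_entropy(rho)

# A maximally coherent qubit state |+><+| carries one full bit of coherence;
# the fully dephased mixture diag(1/2, 1/2) carries none.
plus = np.full((2, 2), 0.5, dtype=complex)
print(relative_entropy_of_coherence(plus))                  # ~1.0
print(relative_entropy_of_coherence(np.diag([0.5, 0.5])))   # 0.0
```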
To model proton transport through a water channel, we first define the system, including the water molecules and any proteins involved. We then define discrete "incoherent basis" states representing the proton's possible location (e.g., on molecule A or B). Next, we construct the Hamiltonian, a mathematical description of the system's total energy, including its kinetic and potential energy, interactions, and tunneling between sites.
Once the Hamiltonian is defined, we determine the proton's quantum state by solving the time-dependent Schrödinger equation. We then write the proton's density matrix in the incoherent basis, where the diagonal elements give the probabilities of finding the proton in specific localized states. From this density matrix, we calculate the relative entropy of coherence to quantify the system's quantum coherence. Scaling this model to the vast number of water channels in the brain is computationally prohibitive using traditional perturbative theories. Therefore, an alternative approach is needed, such as analyzing the spread of chaos within the network to understand the system's behavior on a larger scale [15].
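Before worrying about scale, the single-channel calculation itself is tractable. Here is a minimal two-site version (illustrative parameters with hbar = 1; the site energies and tunneling amplitude J are placeholders, not fitted to any real water channel): the proton starts localized on site A, evolves under the Schrödinger equation, and its relative entropy of coherence rises and falls as it tunnels between sites.

```python
import numpy as np
from scipy.linalg import expm

def coherence(rho):
    """Relative entropy of coherence, as defined above."""
    def S(m):
        ev = np.linalg.eigvalsh(m)
        ev = ev[ev > 1e-12]
        return float(-np.sum(ev * np.log2(ev)))
    return S(np.diag(np.diag(rho))) - S(rho)

# Two-site Hamiltonian (arbitrary units, hbar = 1): diagonal entries are
# site energies; the off-diagonal entry -J is the tunneling amplitude.
E_A, E_B, J = 1.0, 1.0, 0.5
H = np.array([[E_A, -J], [-J, E_B]], dtype=complex)

psi0 = np.array([1.0, 0.0], dtype=complex)   # proton localized on site A

for t in [0.0, 0.8, 1.6, np.pi]:
    U = expm(-1j * H * t)                    # propagator solving the
    psi = U @ psi0                           # time-dependent Schrodinger eq.
    rho = np.outer(psi, psi.conj())          # pure-state density matrix
    print(f"t = {t:.2f}: P(A) = {rho[0, 0].real:.2f}, C = {coherence(rho):.2f}")
```

The output shows the proton delocalizing (coherence peaks while it is "between" sites) and then fully transferring to site B at t = pi, where the coherence returns to zero.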
While classical chaos describes deterministic systems highly sensitive to initial conditions, its direct application to the probabilistic and linear nature of quantum mechanics isn't straightforward. Quantum chaos explores how classically chaotic characteristics manifest in quantum systems, particularly in the semiclassical limit [16]. Unlike the unpredictable trajectories of a classical chaotic system (like a billiard ball on a table with an irregular shape), quantum chaos manifests in the wave patterns and energy levels of a quantum system (like water waves in a pond with the same irregular shape as our billiard table), where statistical properties reflect the underlying classical chaos. Essentially, quantum chaos is correlated with classical chaos rather than being an exact parallel.
In the context of the brain, the quantum aspect of a neuron's function is linked to the quantum coherence of a proton moving along a water channel within a neurotubule. This coherence is measured using the relative entropy of coherence, a quantifier that indicates how "far" a quantum state is from being classical. Notably, other research suggests that measures of quantum chaos, such as delocalization in phase space, can be expressed as quantum coherence measures [16]. This implies that coherence itself can be a diagnostic tool for chaotic behavior in quantum systems; chaotic quantum systems tend to have highly delocalized eigenstates indicating significant coherence, while non-chaotic systems exhibit more localized eigenstates and less coherence [16].
Interestingly, while one might expect decoherence to suppress quantum chaos, some studies suggest that in certain open quantum systems with balanced energy exchange, quantum chaos can be enhanced alongside increased coherence [17]. This leads to a fascinating hypothesis: the classical chaos observed in neural networks, which are crucial for long-term and short-term memory, might be correlated with the emergence of quantum chaos in our proposed quantum mechanisms for these memory processes. Therefore, studying the spread of chaotic behavior in the brain over time could serve as an indirect means to investigate the behavior of any underlying quantum mechanisms within the brain.
Now we have a hypothetical structure within the neuron that could implement quantum gates in the brain. To understand the potential role of such gates in memory, it is necessary to first understand how information is gated within neural networks. In long short-term memory (LSTM) networks, a standard model of memory in recurrent networks, three internal "gates" (the input, forget, and output gates) filter information based on the network's current state and new input [18,19]. In this picture, the gates collectively establish memory patterns in the prefrontal cortex for eventual long-term storage in the neocortex.
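For readers unfamiliar with these gates, here is a minimal single-step LSTM cell in Python (the weights are random placeholders; the gate structure, not the numbers, is the point): the forget gate decides what stored memory to keep, the input gate what new information to write, and the output gate what to expose [18,19].

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM cell update: gates filter information based on the
    previous hidden state h, the cell memory c, and new input x."""
    z = W @ np.concatenate([h, x]) + b               # all gate pre-activations
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)     # forget/input/output gates
    c_new = f * c + i * np.tanh(g)                   # keep old memory, write new
    h_new = o * np.tanh(c_new)                       # expose a filtered view
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_hid = 4, 8
W = rng.normal(scale=0.1, size=(4 * n_hid, n_hid + n_in))  # placeholder weights
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h.shape, c.shape)   # (8,) (8,)
```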
Integrating quantum gates into these networks requires understanding variational quantum circuits. These circuits encode classical data into adjustable quantum gates, which are then optimized to minimize a cost function and produce the desired output. This process essentially breaks down into five steps (a minimal Qiskit sketch follows the list):
Classical Input: Classical data enters the system [19].
Quantum Encoding: The data is encoded into quantum states [19].
Initial State Update: The initial state of the gates is updated [19].
Quantum Gate Operations: The gates perform operations on the data [19].
Classical Decoding: The processed quantum states are decoded back into classical data [19].
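Here is a minimal sketch of steps 1 through 5 using Qiskit's circuit API (the circuit layout, angles, and encoding choice are our own illustrative assumptions; a real quantum-enhanced network would wrap this in a training loop that adjusts the variational angles to minimize a cost function):

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.quantum_info import Statevector

# Step 1: classical input (two features of some pattern, chosen arbitrarily).
x = np.array([0.3, 0.8])

# Steps 2-4: encode the data as rotation angles, entangle, then apply
# trainable gates whose angles a learning loop would optimize.
t0, t1 = Parameter("t0"), Parameter("t1")
qc = QuantumCircuit(2)
qc.ry(np.pi * x[0], 0)     # quantum encoding of the classical input
qc.ry(np.pi * x[1], 1)
qc.cx(0, 1)                # entangling gate
qc.ry(t0, 0)               # adjustable (variational) gates
qc.ry(t1, 1)

# Bind the current parameter values (arbitrary here; training would
# update them to minimize a cost function).
bound = qc.assign_parameters({t0: 0.1, t1: -0.4})

# Step 5: classical decoding -- read out the measurement probabilities.
probs = Statevector(bound).probabilities()
print({format(i, "02b"): round(float(p), 3) for i, p in enumerate(probs)})
```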
Simulations show that quantum-enhanced networks can achieve optimal performance with fewer parameters and iterations than standard neural networks. They also exhibit greater robustness [19].
This robustness provides a testable hypothesis for quantum activity in the brain's memory processes. Recall that the relative entropy of coherence measures quantum coherence. If classical chaos in the neural networks responsible for memory correlates with quantum chaos, as our proposed mechanism implies, and the brain operates at the edge of chaos, then networks closer to this edge should exhibit a higher relative entropy of coherence and thus be more resistant to disruption. Currently, no studies have measured the relative entropy of coherence within the brain's neural networks or correlated network robustness with proximity to the edge of chaos.
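As a purely classical starting point for the robustness half of this prediction (the setup below is our own illustrative choice, not drawn from the cited studies), one can generate random recurrent networks whose gain g places them below, at, or above the edge of chaos (g near 1 for this network family) and measure how a small perturbation to their state grows:

```python
import numpy as np

def perturbation_growth(g, n=200, steps=50, eps=1e-6, seed=0):
    """Growth of a tiny state perturbation in a random tanh network
    with gain g; g near 1 marks the edge of chaos for this family."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=g / np.sqrt(n), size=(n, n))
    x = rng.normal(size=n)
    y = x + eps * rng.normal(size=n)          # perturbed copy of the state
    for _ in range(steps):
        x, y = np.tanh(W @ x), np.tanh(W @ y)
    return np.linalg.norm(x - y) / eps

for g in [0.5, 1.0, 1.5]:
    print(f"g = {g}: perturbation grew {perturbation_growth(g):.2e}x")
```

Extending such simulations with the quantum-enhanced circuits above, and comparing their noise robustness as a function of proximity to the edge of chaos, is one concrete way the proposed experiment could begin.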
Although we haven't definitively shown there is a quantum mechanism at work in the brain's long-term and short-term memory, we have found a physical process that would indicate the presence of one. We've established that the relative entropy of coherence could be used to measure quantum coherence in the brain, given that quantum coherence is an indicator of quantum chaos and quantum chaos is correlated with classical chaos. This suggests that neural networks approaching a chaotic limit might serve as a proxy for quantum activity within the brain.
Furthermore, combining neuroscience methods like artificial neural networks with quantum computing offers a powerful set of tools for exploring how the brain would behave if it had quantum aspects. We have shown that if the brain utilizes a quantum mechanism for long-term and short-term memory, the responsible networks should be more resistant to noise. If chaos is a signature of coherence, and the relative entropy of coherence measures that coherence, then memories formed in neural networks operating near the edge of chaos should be more disruption-resistant than memories formed without a quantum mechanism.
1. Hecht, S., Shlaer, S., & Pirenne, M. H. (1942). Energy, quanta, and vision. The Journal of General Physiology, 25(6), 819–840. https://doi.org/10.1085/jgp.25.6.819
2. Morris R. G. (1999). D.O. Hebb: The Organization of Behavior, Wiley: New York; 1949. Brain research bulletin, 50(5-6), 437. https://doi.org/10.1016/s0361-9230(99)00182-3
3. Zhaoping, L. (2014). Understanding vision: Theory, models, and data. Oxford University Press.
4. Bitzer, S., Kiebel, S.J. Recognizing recurrent neural networks (RNN): Bayesian inference for recurrent neural networks. Biol Cybern 106, 201–217 (2012). https://doi.org/10.1007/s00422-012-0490-x
5. Amit, D. J. (1992). Modeling Brain Function. Cambridge University Press.
6. Ha, G. E., & Cheong, E. (2017). Spike Frequency Adaptation in Neurons of the Central Nervous System. Experimental neurobiology, 26(4), 179–185. https://doi.org/10.5607/en.2017.26.4.179
7. Taylor J. G. (1994). Non-linear dynamics in neural networks. Progress in brain research, 102, 371–382. https://doi.org/10.1016/S0079-6123(08)60553-1
8. Sbitnev, V. (2024). The edge of chaos is where consciousness manifests itself through intermittent dynamics. Academia Biology, 2(1).
9. Raichle, M. E., & Gusnard, D. A. (2002). Appraising the brain's energy budget. Proceedings of the National Academy of Sciences of the United States of America, 99(16), 10237–10239. https://doi.org/10.1073/pnas.172399499
10. Shekhar, S., Dubey, T., Mukherjee, K., Saxena, A., Tyagi, A., & Kotla, N. (2024). Towards optimizing the costs of LLM usage. arXiv preprint arXiv:2402.01742.
11. Tuszyński, J. A. (Ed.). (2006). The Emerging Physics of Consciousness. Springer Berlin Heidelberg.
12. Yang, G. R., & Wang, X. J. (2020). Artificial Neural Networks for Neuroscientists: A Primer. Neuron, 107(6), 1048–1070. https://doi.org/10.1016/j.neuron.2020.09.005
13. Voudouris, K., Cheke, L. & Schulz, E. Bringing comparative cognition approaches to AI systems. Nat Rev Psychol 4, 363–364 (2025). https://doi.org/10.1038/s44159-025-00456-8
14. Bengtsson, I., & Życzkowski, K. (2017). Geometry of Quantum States: An Introduction to Quantum Entanglement. Cambridge University Press.
15. Grabarits, A., Swain, K. R., Heydari, M. S., Chandarana, P., Gómez-Ruiz, F. J., & del Campo, A. (2025). Quantum chaos in random Ising networks. Physical Review Research, 7(1), 013146.
16. Anand, N., Styliaris, G., Kumari, M., & Zanardi, P. (2021). Quantum coherence as a signature of chaos. Physical Review Research, 3(2), 023214.
17. Tudor Patrascu, A. (2025). Quantum Coherence and Chaotic Dynamics: Guiding Molecular Machines Toward Low-Entropy States. arXiv e-prints, arXiv-2505.
18. Andrés, E., Cuéllar, M. P., & Navarro, G. (2025). Brain-Inspired Quantum Neural Architectures for Pattern Recognition: Integrating QSNN and QLSTM. arXiv preprint arXiv:2505.01735.
19. Andrés, E., Cuéllar, M. P., & Navarro, G. (2024). Brain-Inspired Agents for Quantum Reinforcement Learning. Mathematics, 12(8), 1230.