Expanding the Mind (Literally): Q&A with Karim Jerbi and Jordan O’Byrne
Using a brain-computer interface to create a consciousness ’add-on’ to help test Integrated Information Theory.
by Logan Chipkin
August 20, 2022
Consciousness remains one of science’s most perplexing puzzles, in part because it is extremely difficult to test theories of its origin. But with a grant from FQXi of over $88,000, psychologist Karim Jerbi, of the University of Montreal, in Quebec, and his graduate student, Jordan O’Byrne, are developing a novel experiment that uses a brain-computer interface to provide an ’add-on’ piece to a participant’s consciousness. If it works, it will provide support for a leading consciousness model: Integrated Information Theory.
What is Integrated Information Theory, and what problem does it aim to solve?
KJ: Integrated Information Theory, or IIT, aims to solve the problem of the physical basis of consciousness. Specifically, IIT is an answer to the question of which kinds of physical systems are able to produce consciousness, and which are not. It’s a mathematical formalization of the age-old idea that the whole is more than the sum of the individual parts. When a complex network like the brain shuttles activity between neurons, it generates a certain amount of information: the activation state of the network at a given moment constrains the space of possible states that the network can assume in the next moment. IIT proposes an appealing hypothesis about the neural basis of consciousness, specifically the idea of integrated information across neurons or populations of neurons. By having these parts exchange information, the overall output is more than just the sum of the contributions of each module. This is at the core of our proposed experiment.
JOB: IIT is about how the causal interactions in the network constrain the past and future of the network. The network in its current state tells us about what its past state could have been and what its future states could be. And that probability distribution is a kind of information. But in order to produce the unified whole of consciousness, this information also has to be integrated. It’s a nice concept in that it’s intuitive and mathematically well-formalized—but it’s a shame that we can’t measure it so well.
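To make that idea concrete, here is a tiny toy example of our own (not taken from the researchers’ work, and far simpler than IIT’s actual formalism): a two-node network whose deterministic update rule means the current state pins down which past states were possible, and that constraint, measured against a uniform prior, is a quantifiable amount of information.

```python
from itertools import product
from math import log2

# Toy network (our own construction): two binary nodes that simply swap values
# on each time step, so A_next = B and B_next = A.
def update(state):
    a, b = state
    return (b, a)

def cause_repertoire(current):
    """Distribution over past states that could have produced `current`,
    starting from a uniform prior over the four possible past states."""
    pasts = [s for s in product((0, 1), repeat=2) if update(s) == current]
    return {s: 1 / len(pasts) for s in pasts}

prior_entropy = log2(4)                   # 2 bits: four equally likely past states a priori
repertoire = cause_repertoire((1, 0))     # observe the network in state (A=1, B=0)
posterior_entropy = log2(len(repertoire))

print(repertoire)                         # {(0, 1): 1.0}: the past is fully pinned down
print(f"information gained about the past: {prior_entropy - posterior_entropy:.0f} bits")
```

In IIT proper, analogous cause and effect repertoires are computed for every mechanism in the system and then compared across partitions, which leads directly to the measurement problem discussed next.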
Why has IIT been historically so difficult to test?
JOB: There are two big hurdles. The main measure that was developed is called phi, and it’s a measure of the integration of the information. Broadly, if the amount of information created by the whole network is greater than that created by any bipartition or "cut" of the network into two parts, then the network can be thought of as an informationally integrated whole. Phi is the amount of information created by the whole over and above the sum of the parts. As the theory goes, whenever phi is greater than zero, you have consciousness, and the amount of phi tells us the level of consciousness. But to measure phi in a system, you need to record the activity of all of its nodes, and that’s just not possible for something like the brain. Even if we had access to all of them, the second hurdle is computational. In order to calculate phi, you have to make every possible cut to the network in order to see if the whole is indeed greater than the sum of its parts. And that’s a super-exponentially exploding number of cuts when the size of the network gets large. For a network the size of the brain, the computation time required would be astronomical.
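To make both hurdles concrete, here is a heavily simplified sketch of our own. It is not IIT’s phi proper (which is defined over cause-effect repertoires and specific distance metrics, not raw mutual information); it simulates a small noisy Boolean network, compares the past-to-present information carried by the whole against the sum carried by the two halves of each bipartition, and then shows how quickly the number of bipartitions alone grows with network size.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)

def simulate(weights, n_steps=20000, noise=1.0):
    """Noisy majority-rule Boolean network; returns arrays of (past, present) states."""
    n = weights.shape[0]
    x = rng.integers(0, 2, size=n)
    past = np.empty((n_steps, n), dtype=int)
    present = np.empty((n_steps, n), dtype=int)
    for t in range(n_steps):
        drive = weights @ x + noise * rng.standard_normal(n)
        new = (drive > weights.sum(axis=1) / 2).astype(int)
        past[t], present[t] = x, new
        x = new
    return past, present

def encode(states):
    """Collapse each binary state vector into a single integer code."""
    return states @ (2 ** np.arange(states.shape[1]))

def entropy(codes):
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_info(past, present):
    a, b = encode(past), encode(present)
    return entropy(a) + entropy(b) - entropy(a * (b.max() + 1) + b)

def toy_integration(past, present):
    """Crude stand-in for phi: MI(whole) minus the summed MI of the parts,
    minimised over every bipartition of the nodes."""
    n = past.shape[1]
    mi_whole = mutual_info(past, present)
    best = np.inf
    for k in range(1, n // 2 + 1):
        for part in combinations(range(n), k):
            if 2 * k == n and 0 not in part:    # skip mirror images of half/half splits
                continue
            rest = [i for i in range(n) if i not in part]
            mi_parts = (mutual_info(past[:, list(part)], present[:, list(part)])
                        + mutual_info(past[:, rest], present[:, rest]))
            best = min(best, mi_whole - mi_parts)
    return mi_whole, best

n = 4
weights = np.ones((n, n)) - np.eye(n)           # every node listens to every other node
mi, integration = toy_integration(*simulate(weights))
print(f"MI(whole) = {mi:.3f} bits, toy integration = {integration:.3f} bits")

# Hurdle two: even just the number of bipartitions explodes with network size.
for size in (4, 8, 16, 32, 64):
    print(f"{size:3d} nodes -> {2 ** (size - 1) - 1:,} bipartitions")
```

Both hurdles from the interview show up directly: the simulation needs the state of every node at every step, and the search over cuts already blows up long before the network reaches brain scale.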
People have developed alternative measures of phi, with interesting results. One example is the perturbational complexity index, or PCI, developed by Marcello Massimini and colleagues, which measures the complexity of the brain’s response to a transcranial magnetic perturbation. FQXi’s director Max Tegmark also reviewed and refined a collection of other alternative integration measures. Unfortunately, none of these measures strictly measure phi, and so a direct test of IIT is still out of reach.
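For intuition about the compression-based logic behind measures like PCI, here is a deliberately stripped-down sketch (our own illustration, not the published PCI pipeline, which applies a normalized Lempel-Ziv measure to source-localized TMS-evoked EEG responses): a rich, hard-to-compress response produces many distinct phrases, while a stereotyped response produces few.

```python
import numpy as np

def lz_phrase_count(bits):
    """Count the distinct phrases in a simple Lempel-Ziv-style parsing of a binary string.
    A crude proxy for how compressible (and hence how 'complex') the sequence is."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

rng = np.random.default_rng(0)
n = 4000
rich = "".join(map(str, rng.integers(0, 2, n)))   # complex, hard-to-compress response
flat = "01" * (n // 2)                            # stereotyped, highly compressible response

print("rich response:", lz_phrase_count(rich))    # large phrase count
print("flat response:", lz_phrase_count(flat))    # much smaller phrase count
```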
How do you and your colleagues plan to work around this difficulty?
KJ: We want to test predictions that follow from the IIT framework in a creative and audacious way: instead of focusing on information processing in the brain, we will hook up the brain to a computer, creating a brain-computer interface that allows for real-time neurofeedback. Participants in the experiment will be presented with changing visual stimuli on a computer screen. The trick is that the visual stimuli fed to the participant will depend on the ongoing activity in their brain. So it’s real-time manipulation of visual information that is conditioned by the activity of the subject’s brain. When your brain activity changes, the information being presented to you changes. This creates a closed loop. The idea here is to connect the brain to a second component that is outside the brain.
How will you change the stimuli based on the individual’s brain activity?
KJ: It is nowadays possible to use brain signals recorded during visualization of complex visual stimuli to train an algorithm to decode or "guess" what the subject was looking at. We intend to embed this technology into a neurofeedback loop.
In principle, you can imagine a setup where we show you an ambiguous stimulus, like fractals. You might start to see a face in these fractals, just like when you see a face on the moon or in the clouds. Your brain activity will be decoded by our algorithm, and then we will modify the visual stimulus so that you actually see a face. In other words, we aim to create a tight link between your mental imagery and the way the visual stimulus is being changed. This is how we envision creating a coupling between the brain and an external device.
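As a rough sketch of what such a closed loop could look like in code, consider the following. Every name here is a hypothetical placeholder of ours rather than part of the team’s actual pipeline: acquire_brain_window, decode_face_likeness and render_stimulus stand in for a real-time MEG/EEG stream, a trained neural decoder, and a stimulus generator, and the coupling gain is the knob the researchers describe manipulating later in the interview.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def acquire_brain_window(duration_s=0.1, n_channels=272, srate=1200):
    """Placeholder for grabbing ~100 ms of multichannel data from a real-time MEG/EEG stream."""
    return rng.standard_normal((n_channels, int(duration_s * srate)))

def decode_face_likeness(window):
    """Placeholder for a trained decoder scoring how 'face-like' the current brain state is (0-1)."""
    return float(1 / (1 + np.exp(-window.mean() * 50)))

def render_stimulus(face_likeness, coupling):
    """Placeholder for morphing an ambiguous fractal toward a face, scaled by the coupling gain."""
    morph = coupling * face_likeness          # 0 = pure fractal, 1 = unambiguous face
    return f"frame with face-morph level {morph:.2f}"

def run_closed_loop(n_frames=5, coupling=0.8):
    for _ in range(n_frames):
        window = acquire_brain_window()       # 1) record ongoing brain activity
        score = decode_face_likeness(window)  # 2) decode it in (near) real time
        frame = render_stimulus(score, coupling)
        print(frame)                          # 3) feed back a stimulus that depends on the brain
        time.sleep(0.1)                       # keep the loop fast enough to feel continuous

if __name__ == "__main__":
    run_closed_loop()
```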
MEG scanner. Credit: University of Montreal
How does this help you test IIT?
JOB: We’re basically trying to ’add’ a piece to the brain—or more specifically, to consciousness—and IIT gives us a recipe for how to do so. It involves designing an external device (the brain-computer interface) capable of a high-speed, high-bandwidth, two-way exchange of information with the brain. If we succeed in getting these parameters right, then IIT predicts that the brain-computer interface will, as far as consciousness is concerned, become indistinguishable from just another part of the brain, because it is exchanging information with the rest of the brain in an informationally integrated way. This kind of thing doesn’t normally happen in everyday life because our usual closed-loop informational exchanges with external objects or people, like playing a video game or talking to a friend, do not close the loop fast enough or do not exchange information in a complex enough way.
KJ: By manipulating the extent to which the brain and the generated stimuli are coupled, we expect to eventually cross a threshold at which we produce a change in the conscious perception of the subject. That is going to be our measure: through this coupling, can we bring about a change in the individual’s conscious experience?
Can a computer process really be considered part of a person’s consciousness, even if connected through a brain-computer interface?
JOB: Our hypothesis is that the subject won’t know the difference, so to speak, between their brain and the closed loop that we’re exposing them to, because it will be just like any other loop in the brain that integrates information. To make that loop we can’t just record brain signals. We need to present information back to the brain in the language of the brain. For that we need decoding/encoding models, which we will borrow from the active field of neuro-AI.
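As a flavor of the decoding half, a minimal sketch might train a simple classifier to guess the stimulus category from flattened sensor features. This is our illustrative stand-in, using scikit-learn’s logistic regression on synthetic data; the neuro-AI models the team refers to would be far richer and would also cover the encoding direction (turning decoded content back into a stimulus).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: 400 trials of flattened sensor-by-time features.
n_trials, n_features = 400, 272 * 40
y = rng.integers(0, 2, n_trials)                  # 0 = 'fractal only', 1 = 'face seen/imagined'
X = rng.standard_normal((n_trials, n_features))
X[y == 1, :200] += 0.3                            # fake a weak face-related signature

decoder = LogisticRegression(max_iter=1000)
decoder.fit(X[:300], y[:300])                     # train on the first 300 trials
face_prob = decoder.predict_proba(X[300:])[:, 1]  # face probability for held-out trials

accuracy = ((face_prob > 0.5) == y[300:]).mean()
print(f"held-out decoding accuracy: {accuracy:.2f}")
```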
KJ: If you’re engaged in VR or a video game, at some point your conscious experience includes the virtual reality. This may be experienced as a state of what’s called ’flow,’ such that you become one with the game. In this state, which tends to come with expertise, gamers perform very well and with surprisingly little effort. That is a level of conscious experience that is different from when you’re just learning to play the game.
This is roughly what we’re trying to generate by having the visual environment be conditioned by the signals unfolding in real time in the brain. A video game, of course, does not directly depend on your brain activity; it is just an analogy. But in our experiment, we’re actually using real-time decoding of the brain state and brain representations and translating that back to the participants. The expectation is that this will have an impact on the participants’ conscious experience that they will be able to remark on. That would follow from IIT.
Do the participants then report back verbally about what they see?
JOB: Yes, that’s right. Self-report remains one of the most reliable measures of consciousness we have. We expect that turning on the neurofeedback loop will create a noticeable change in the subject’s visual conscious experience, which they can then tell us about. We will thus have participants openly describe their experience, and have them fill out a questionnaire used in psychedelic studies for describing abnormal experiences. We will also replay the video that the subjects created while they were in the loop, and ask them to rate the similarity of the video to what they experienced at the time. If the experiment works, there should be a very noticeable difference between the video as perceived in the loop and outside of it. We will of course monitor changes in brain activity throughout the experiments as a complementary objective measure alongside the participants’ self-reports.
What technologies are you planning to use in the experiment?
KJ: One important feature that makes our project novel and different from many other projects—beyond the fact that we’re using state-of-the-art artificial neural networks to try to generate basic brain activity—is our focus on the use of MEG, or magnetoencephalography. This involves measuring the magnetic fields created by synchronized populations of neurons. As a technology, MEG provides us with very high spatiotemporal resolution. Our lab is one of the leaders in MEG research, and MEG is still underexploited in research looking at the neural basis of consciousness.
So could the proposed test confirm or rule out IIT?
KJ: We used IIT to build a hypothesis, and this hypothesis is something we can empirically test. If we measure changes in consciousness or conscious experience, then that would probably provide strong support for IIT, though not proof. If we do not measure such changes, then there will still be two possible explanations: either the theoretical basis of IIT will come into question, or our implementation was simply too restricted.
I don’t think we’ll be able to rule out IIT or prove it 100%. But depending on the outcome, the experiment we’re proposing might have to be taken into account when discussing and debating IIT or other theories of consciousness.
Do you think you will find support for IIT?
KJ: Although we have published work on consciousness, we consider our lab a newcomer to the field of consciousness neuroscience, in the sense that we are not biased towards one theory or another. We see these theories as opportunities to generate hypotheses that we can test with MEG and the kinds of techniques we develop in the lab.
Our agnosticism allows us to innocently ask, "Well, that sounds exciting. Let’s develop a test and see whether or not it pans out." We are not driven to prove that IIT or any other theory of consciousness is correct. We take a fresh look at the questions. We’re interested in what the theories predict and in testing those predictions. That’s what we’ll be doing over the next few years.