Should we worry if we are weird?

May 30, 2007
by Anthony Aguirre

I've just finished reading an interesting paper by Hartle and Srednicki critiquing the assumption that 'we are typical', used in various cosmological model-testing arguments.

Here is the basic issue. There is an open problem in cosmology as to how to test a theory that entails a 'multiverse', which is to say an ensemble of regions, each member of which appears as a 'universe' to its inhabitants (i.e. appears larger than their horizon), but across which the 'cosmological' observables may vary. As we can measure only one set of such values, how do we test the model that gave rise to the ensemble? There are a number of rather different ways people have proposed to look at this.

My basic take on this is outlined here and here. In brief, I like to think of asking the question: "Suppose I were a randomly chosen X. What would I observe?" Here, 'X' might stand for 'universe' or 'point in space' or 'observer', etc. For each such 'X', we might try to calculate the answer in the theory. Then we would have to make a basic "philosophical" assumption that the probability distribution for a randomly chosen 'X' is actually closely related to the probability that we will in fact see some particular thing when we go out and look. After making this assumption, if we measure something that would be very, very unusual for a typical X, we then conclude that either (a) our observations are not closely related to those of a typical X, or (b) the theory is incorrect.
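To make that logic a bit more concrete, here is a minimal sketch in Python. Everything in it is a placeholder of my own devising -- a made-up predicted distribution for some observable, a made-up measured value, and an arbitrary 'typicality' threshold -- just to show the shape of the reasoning, not any real cosmological calculation.

```python
import numpy as np

# Hypothetical distribution of some observable over randomly chosen X's,
# as predicted by a given theory (placeholder: a standard normal).
rng = np.random.default_rng(0)
predicted_values = rng.normal(loc=0.0, scale=1.0, size=100_000)

observed_value = 4.5  # what we actually measure (made-up number)

# Fraction of randomly chosen X's that would see something at least as
# extreme as what we see -- a rough measure of how 'typical' we are.
p_as_extreme = np.mean(np.abs(predicted_values) >= abs(observed_value))

# If that fraction is tiny, either we are not well described by a typical X,
# or the theory itself is in trouble.
if p_as_extreme < 1e-3:
    print(f"Atypical observation (p ~ {p_as_extreme:.1e}): question the typicality assumption or the theory.")
else:
    print(f"Unremarkable for a typical X (p ~ {p_as_extreme:.1e}).")
```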

I've found that most methods people advocate fall somewhere on what might be seen as a 'spectrum' of conditionalization. On one end of the spectrum -- least conditioning -- we might 'count vacua' (in the string-theory landscape). On the other end we might 'condition' on all possible facts at our disposal, and try to predict something we have not yet seen. So a lot of the disagreement, as I see it, centers around what 'X' to pick, and in my book there are arguments in favor of (and against) just about every choice.

For example, assuming that we are 'typical humans' (out of all that ever have and will exist) leads to the 'doomsday paradox', and even stranger variations on it. Or, we might choose X='Observers just like me who know everything that I know' (which I've seen referred to as 'top-down' reasoning, or 'full non-indexical conditioning'). This sounds compelling: it's what we do in lots of experimental physics -- that is, we do not worry about *how* a particular experimental setup came to be, but rather what will happen *given* everything about that setup. But in a cosmological context, I contend that using this sort of reasoning, we can never rule a theory out, which is bad. I present my argument simply in the form of a dialogue, then in more detail in the attachments below.

But back to Hartle and Srednicki: they are not quite advocating any particular 'X', but rather trying to see exactly what Bayesian reasoning tells us to do. In particular, given two theories A and B with 'prior' probabilities P_A and P_B, we should take all of the data D at our disposal, compute the 'probability of D given A', P(D|A), and the 'probability of D given B', P(D|B), then use Bayes' theorem to find that the relative probability of theory A versus theory B is P(D|A)P_A / [P(D|B)P_B].
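Here is that bookkeeping written out as a tiny Python sketch. The numbers are entirely made up, purely to illustrate how the posterior odds are assembled; nothing here corresponds to a real likelihood anyone has computed.

```python
def posterior_odds(p_D_given_A, p_D_given_B, prior_A, prior_B):
    """Relative probability of theory A versus theory B after seeing data D:
    P(A|D) / P(B|D) = [P(D|A) * P(A)] / [P(D|B) * P(B)]."""
    return (p_D_given_A * prior_A) / (p_D_given_B * prior_B)

# Illustrative placeholder numbers only:
print(posterior_odds(p_D_given_A=0.9, p_D_given_B=0.3,
                     prior_A=0.5, prior_B=0.5))   # 3.0 -- data favor A three to one
```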

Reasoning this way has some nice consequences. For example (and this seems to be a strong motivation behind their paper) if we don't assume that we are a 'typical' anything, then I think we do indeed remove the 'Boltzmann brain' problem, along with the closely related 'doomsday' problem. A 'side effect' is that we should not in any way prefer a theory in which we are 'typical': if the data D occurs in two theories, we should not accord any preference to the one that creates many more instances of D.

To me, this sort of reasoning does have some compelling qualities, but it also seems to me that it would be extremely weak in discriminating between cosmological theories. For example, suppose A is 'the big bang theory', and B is 'a 500 kg ball of gas in a box that exists forever.' It seems as though theory A does, indeed, give rise to our observations D. But the box, if it lasts long enough, would *also* give rise to them, in the form of, yes, a 'Boltzmann brain' that fluctuates thermally into existence in just such a way as to think it has measured all of the data D. On the basis of Bayesian inference, if we were to accord equal prior probability to these, we could not discriminate between them any further, since P(D|A) ~ P(D|B) ~ 1. This seems rather displeasing, as the data D arises very naturally in A, but in an exceedingly strange way in theory B. (I wonder if Hartle & Srednicki may have been grappling with this, as late in the paper they mention 'counting' and similar ideas as things that we might want to take into account in our prior probabilities. But then this is just burying the crucial point in a different place.)
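Plugging this into the same odds calculation as above (again with purely illustrative numbers) makes the weakness explicit: as long as each theory produces the data D *somewhere*, both likelihoods sit near 1 and the posterior odds simply reproduce whatever priors we started with.

```python
# Both theories make the data D essentially certain to occur somewhere,
# so the likelihoods no longer discriminate and only the priors matter.
prior_A, prior_B = 0.5, 0.5   # equal priors, as in the example
p_D_given_A = 1.0             # big bang theory: D arises naturally
p_D_given_B = 1.0             # eternal box of gas: D arises as a freak fluctuation

odds = (p_D_given_A * prior_A) / (p_D_given_B * prior_B)
print(odds)   # 1.0 -- the data alone give no preference either way
```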

So in my mind, the question of how we can reason in 'multiverse' cosmology in a way that (a) actually allows us to effectively discriminate between models, but (b) does not lead to any weird paradoxes, is still very much open.