The Sleeping Beauty problem runs as follows. You are going to be put to sleep for a week, and woken up in a specific room either once or twice during that week, depending on the result of a coin toss. If it lands heads you will be woken twice; if it lands tails you will be woken once. After each time you wake up, your memory will be erased when you are put back to sleep.
You find yourself woken up. What probability should you assign that the coin landed heads?
On one hand, it might seem like the probability should still be 1/2, since whichever way the coin landed you would, at some point, be woken up with no memory of having been woken up previously. You were going to have this observation regardless.
On the other hand, it might seem like the probability should be 2/3 that the coin landed heads. Why? Because you will have twice as many observations of yourself woken up without memory of the past if the coin landed heads, hence the observation is twice as likely conditional on heads as conditional on tails. An elementary application of Bayes' theorem would seem to indicate that we should update our probability of heads to 2/3 upon finding ourselves woken up with no memory.
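The thirder arithmetic can be sketched in a few lines, with the key (and contested) assumption made explicit: that the likelihood of "I am awake now with no memory" is proportional to the number of awakenings each outcome produces. This is a sketch of that assumption, not a settled resolution:

```python
# Thirder-style Bayes update: treat "I am awake now, no memory" as evidence,
# with likelihood proportional to the number of awakenings per outcome.
# (The halfer denies exactly this likelihood assumption.)
prior_heads = 0.5
prior_tails = 0.5
awakenings_heads = 2  # heads -> woken twice, per the setup above
awakenings_tails = 1  # tails -> woken once

posterior_heads = (prior_heads * awakenings_heads) / (
    prior_heads * awakenings_heads + prior_tails * awakenings_tails
)
print(posterior_heads)  # 2/3
```

Everything in the dispute is packed into the two `awakenings_*` weights; set both to 1 (one "experiment" per outcome, regardless of wakings) and the same formula returns the halfer's 1/2.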
I knew of the problem vaguely but had never looked into it. At a conference this week someone explained the problem to me, and I immediately had the thought that the many-worlds interpretation of quantum physics gives us the clue we need to solve it. In excitement, I looked it up on my phone planning to publish a paper on it, and… found someone else had already figured that out.
Here’s how the many-worlds interpretation of quantum mechanics solves the problem, or rather, makes the solution obvious. Instead of deciding via a coin flip, let’s imagine we decide via some observation of a particle that has a 50/50 chance of doing something. Assume this means, per the many-worlds interpretation (very loosely understood), that two worlds branch off; in one world you wake up twice, and in the other world you wake up once.
Now let’s say you’re having the observation of yourself waking up. Is it more likely you’re in the two-waking branch or the one-waking branch? Well, since there are twice as many “making the observation I’ve just woken up with no memory” moments in the two-waking branch as in the one-waking branch, it’s now intuitively much more obvious that you’re probably in the two-waking branch.
Why is this? Well, I’m not sure, but I have a reasonable guess: it has previously been demonstrated that humans are much better at thinking about probability problems when they’re turned into problems of frequencies (this is probably why the false view of probability, frequentism, was so popular for so long). By turning this into a frequency problem through quantum mechanics, we’ve made the situation easier to understand.
Here’s another way to make the problem about frequency. Imagine that you’re at the Institute for Sleeping Beauty Studies, and thousands of beauties, beaus and gender non-specific beautiful people are put through this experiment every week. There are two types of people: people who always say 2/3 and people who always say 1/2. Note that, overall, the people who say 2/3 every time they are woken up will be in the heads condition upon that waking event… two-thirds of the time.
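The Institute’s tally is easy to check with a quick simulation (my own sketch, not part of the thought experiment): flip many coins, generate two waking events per heads and one per tails, and ask what fraction of all waking events occur in the heads condition.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):  # 100,000 runs of the experiment
    heads = random.random() < 0.5   # fair coin
    wakings = 2 if heads else 1     # heads -> two wakings, tails -> one
    total_awakenings += wakings
    if heads:
        heads_awakenings += wakings

# Fraction of waking events that happen in the heads condition: ~2/3
print(heads_awakenings / total_awakenings)
```

Note what the simulation does and doesn’t settle: two-thirds of *waking events* are heads-wakings, but only half of *experiments* are heads-experiments, which is exactly the ambiguity the halfer and thirder are arguing over.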
I want to chuck another conjecture out there. I suspect that individuals who have a very strong experience of self will be more likely to buy into the idea that you shouldn’t update your probability to two-thirds. On the other hand, individuals who have a relatively weak experience of self, who see themselves as just a bundle of experiences and memories, will be more likely to update to two-thirds. Why is this? Because the latter can look at things from the point of view of the probability of being one of three effectively separate beings having the experience, whereas the former will be overwhelmed by the sense that *I* will have this experience regardless.
Here’s a series of posts going into detail on anthropic problems and analyzing different approaches to them: https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you
This series presents lots of interesting anthropic situations, and (if you like thinking about this kind of thing) should be quite fun to read.
Hello Ursus Philisophicus,
I just wanted to say I really appreciate your newsletter/blog. It's always interesting, and almost always intelligible even to an idiot like myself.
Also, I love that picture of the sleeping bear! AI I assume?
Thanks