In Newcomb’s Problem, what information does the predictor get if you flip a coin? Here are some options:

1. "Bob has 50:50 odds of one-boxing or two-boxing"
2. "Bob will one-box" (chosen randomly with the same odds as your decision)
3. "Bob will one-box" (guaranteed to be the same as your decision)

I think (1) is a poor formalization, because the game tree becomes unreasonably huge, and some strategies of the predictor (like "fill the box unless the probability of two-boxing is exactly 1") leave no optimal strategy for the player. And (3) seems like a poor formalization because it makes the predictor work too hard: now it must predict all possible sources of randomness you might use, not just your internal decision-making.

That leaves us with (2). Basically, we allow the predictor to "sample" your decision at any information set in the game. That means we can add some extra nodes to the player’s information sets and get rid of the predictor entirely, ending up with a single-player game. Newcomb’s Problem and Counterfactual Mugging can easily be analyzed in this way, leading to the same answers we had before (one-box, pay up). It also gives a crisp formalization of the transparent-boxes Newcomb problem, where we sample the player’s decision at the information set of "seeing both boxes filled".

I think this might end up being useful for bounded versions of Counterfactual Mugging, which are confusing to everyone right now. But it also feels good to nail down my understanding of the simple version.
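Here is a minimal sketch of why option (2) still recommends one-boxing. The payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) are the standard ones; the `expected_value` function is my own illustration, not something from the post. The predictor draws one independent sample from the player's mixed strategy and fills the opaque box iff the sample says "one-box":

```python
# Illustration (assumed payoffs: $1,000,000 opaque box, $1,000 transparent box).
# Under option (2), the predictor samples the player's mixed strategy once,
# independently, and fills the opaque box iff the sample says "one-box".

def expected_value(p_onebox: float) -> float:
    """EV when the player one-boxes with probability p_onebox and the
    predictor's fill decision comes from an independent sample drawn
    with the same probability."""
    filled = p_onebox               # chance the predictor's sample says "one-box"
    ev_if_onebox = filled * 1_000_000
    ev_if_twobox = filled * 1_001_000 + (1 - filled) * 1_000
    return p_onebox * ev_if_onebox + (1 - p_onebox) * ev_if_twobox

# Algebraically EV(p) = 1_000_000*p + 1_000*(1 - p): strictly increasing in p,
# so pure one-boxing (p = 1) is the unique optimum.
evs = [(p / 10, expected_value(p / 10)) for p in range(11)]
best_p = max(evs, key=lambda pair: pair[1])[0]
assert best_p == 1.0
```

Because the sample is independent of the player's actual draw, the EV is linear in the one-boxing probability, and the maximum sits at a pure strategy, matching the "one-box" answer in the post.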
To me the answer depends on what kind of predictor it is. Some options below, in order of increasing power:

a) Mind reader: can it predict only Bob’s mind and not the rest of the world? Then it’s (1) before the flip and (2) after, since in this case the predictor’s power is the same as Bob’s. That’s the classic way to draw the game of rock-paper-scissors against someone who can read your mind.

b) Laplace’s demon: can it accurately predict the movements of Bob’s coin-flipping hand, the currents in the air, and the other local classical factors that determine how the coin lands? Then it’s (3). Unless Bob’s coin has quantum randomness, in which case it’s (2).

c) Demiurge: can it accurately predict the whole universe because it has seen it run from the Big Bang to heat death, and is now simply replaying the tape? Then it’s (3).

Also, any good links to the current state of research into "bounded versions of Counterfactual Mugging, which are confusing to everyone right now"?
Comment
Yeah, see this post and the links from it for examples. I don’t understand most of it, to be honest. For this post I was just trying to figure out the cleanest theory.
Comment
Had a brief look and gave up. Maybe my math skills are not up to snuff, or the inferential distance is too large, or both.
I just came up with another argument for why (2) is nicer than (1): the predictor can invoke (2) multiple times and obtain knowledge similar to (1), but without the infinite precision that makes (1) problematic.
As for (3), it can also be translated into single-player game theory by a different kind of modification of the game tree, and it gives the same answers as (2) in NP and CM. But (2) is more in line with the rest of game theory, where players can have private randomness, and it’s what you get when you replace predictors with amnesia or Nash-equilibrium prediction.
Like Shminux, I say: Why not all three? You’ve just very helpfully pointed out that there are different ways in which someone can be predictable. So going forward, when I contemplate decision theory hypotheticals in which someone is predictable, I’ll make sure to specify which of these three kinds is in effect. Usually I’ll consider all three kinds, to see if it changes the results.
As I mentioned elsewhere, I don’t really understand...
Comment
I guess I just like game theory. "Alice chooses a box and Bob predicts her action" can be viewed as a game with Alice and Bob as players, or with only Alice as player and Bob as the shape of the game tree, but in any case it seems that option (2) from the post leads to games where solutions/equilibria always exist, while (1) doesn’t. Also see my other comment about amnesia, it’s basically the same argument. It’s fine if it’s not a strong argument for you.
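The non-existence claim for (1) can be shown with the predictor strategy named in the post ("fill the box unless the probability of two-boxing is exactly 1"). This is my own illustration with the standard payoffs: the player's expected value climbs toward a supremum that is never attained, so no optimal strategy exists:

```python
# Illustration (standard payoffs assumed): under option (1) the predictor sees
# the player's exact mixing probability. Against "fill the opaque box unless
# P(two-box) is exactly 1", the player's EV has an unattained supremum.

def ev_against_threshold_predictor(p_twobox: float) -> float:
    filled = p_twobox < 1.0          # box filled unless the player surely two-boxes
    opaque = 1_000_000 if filled else 0
    # one-boxer takes only the opaque box; two-boxer takes both
    return (1 - p_twobox) * opaque + p_twobox * (opaque + 1_000)

# EV approaches 1_001_000 as p_twobox -> 1, then drops to 1_000 at p_twobox == 1,
# so the supremum is never achieved and no best response exists.
assert ev_against_threshold_predictor(0.999) > ev_against_threshold_predictor(1.0)
```

Under option (2) this discontinuity disappears, because the predictor only ever sees a finite sample of the strategy rather than the exact probability.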