Discussion about this post

BTernaryTau

The problem with choice dominance is that it implicitly assumes that the CDT method of assessing counterfactuals is correct. If you instead use the FDT method of assessing counterfactuals, for example, then choice dominance would favor one-boxing in both Newcomb's problem and the Transparent Newcomb Problem. So really, a better name for choice dominance would be CDT choice dominance, or causal choice dominance. Since FDT/functional choice dominance also exists, you do not need to give up on choice dominance in order to support one-boxing; instead, you can simply change which version of choice dominance you use. Thus, this post's defense of two-boxers is much weaker than it first appears.

Note that the above is essentially just a restatement/summary of an argument from the paper Functional Decision Theory: A New Theory of Instrumental Rationality rather than something original to me.

https://arxiv.org/pdf/1710.05060#page=19
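
To make the contrast concrete, here is a rough sketch of the two notions of dominance, using the standard $1,000,000 / $1,000 payoffs and a predictor assumed to be perfectly accurate (my own illustration, not the paper's code):

```python
# Payoff table for Newcomb's problem, indexed by (prediction, action).
# If the predictor expects one-boxing, the opaque box holds $1,000,000.
PAYOFFS = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,
    ("two-box", "one-box"): 0,
    ("two-box", "two-box"): 1_000,
}

ACTIONS = ("one-box", "two-box")

def causal_dominance(a, b):
    """CDT-style dominance: hold the prediction fixed and vary only the action."""
    return all(PAYOFFS[(p, a)] >= PAYOFFS[(p, b)] for p in ACTIONS) and \
           any(PAYOFFS[(p, a)] > PAYOFFS[(p, b)] for p in ACTIONS)

def functional_dominance(a, b):
    """FDT-style dominance: the prediction co-varies with the decision procedure,
    so with a perfect predictor only the diagonal outcomes are attainable."""
    return PAYOFFS[(a, a)] > PAYOFFS[(b, b)]

print(causal_dominance("two-box", "one-box"))      # True: two-boxing dominates causally
print(functional_dominance("one-box", "two-box"))  # True: one-boxing dominates functionally
```

Same dominance principle, applied to two different sets of counterfactuals, endorsing opposite actions.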

H.

Maybe I'm honest, or maybe I'm simple, but I think people *correctly* predict each other all the time in everyday life. Label the boxes "excellent employment" and "*not* embezzling everything in front of you" and I think the common-sense case for one-boxing in *transparent* Newcomb's becomes much clearer.

The big prize box is visibly full because you were predicted to be the kind of person who could consistently leave the small prize alone. The big prize is visibly *empty* because -- well, as one of the MIRI folks put it (Soares, I think?), decisions are for making bad outcomes inconsistent. It *isn't* empty, not for me. And when it is, despite that? I'll take the 1% failure rate and go home empty-handed, over turning around and becoming a two-boxer *in every situation like this*.
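
A quick back-of-the-envelope makes the trade explicit, with assumed numbers since the setup above doesn't pin them down (a 99%-accurate predictor and the usual $1,000,000 / $1,000 prizes):

```python
# Expected winnings in a transparent Newcomb setup, assuming a 99%-accurate
# predictor and the standard prize values. Numbers are illustrative only.
ACCURACY = 0.99
BIG, SMALL = 1_000_000, 1_000

# Consistent one-boxer: 99% of the time the big box is full and you take it;
# 1% of the time the predictor errs, the box is empty, and you get nothing.
ev_one_boxer = ACCURACY * BIG + (1 - ACCURACY) * 0

# Consistent two-boxer: 99% of the time the predictor saw it coming and left
# the big box empty; 1% of the time it errs and you collect both boxes.
ev_two_boxer = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(f"one-boxer: ${ev_one_boxer:,.0f}")  # $990,000
print(f"two-boxer: ${ev_two_boxer:,.0f}")  # $11,000
```

Ninety-nine times out of a hundred, that policy is what fills the box in the first place.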

Evidence and causation are right there; you described the entire setup in a single paragraph. It just is not that complicated. You do need something that can look spooky to sophists -- a *logical* (not causal) connection between your actions and other people's models of your actions -- but it's not hard to define. If CDT ignores that kind of relation and therefore predictably loses money, so much the worse for CDT. I think this is only complicated to philosophers who've tied themselves up in knots over it. Given the setup, there is an action that reliably and predictably wins, and if you can't philosophically justify taking it, that is a failure in your philosophy. *Justifying* the tendency to lose, or waffling between the two -- that's just sad. You can do better. Your is/ought intervention just denies the setup to sidestep the problem, and the setup can be patched until that move no longer works (assume you want money, assume linear utility in dollars, assume prizes other than money, etc., etc., until you *have to* engage with the question, Least Convenient Possible World style).

I dunno, you're right that the dominance principle is not to be discarded lightly. But I think I can safely do so, given that maximizing the outcome for whatever situation I'm in *affects which situations I end up in* when other people can *see me doing it* and contribute to my situation. I don't think it has to be complex.

("But there's no way people could" go play smash or street fighter or competitive pokémon with your local ranked champions, you will get your *soul read* like your innermost plans are a cheap billboard ad. Everyone who believes they have a libertarian / unpredictable free will should try this some time. Mixing it up is a skill that takes time and practice and can't be perfect.)

("But I don't *want* to embezzle/steal/exploit" yes evolution and culture have had you internalize this, in everyday situations far away from abstract philosophy.)
