Discussion about this post

BTernaryTau

The problem with choice dominance is that it implicitly assumes that the CDT method of assessing counterfactuals is correct. If you instead use the FDT method of assessing counterfactuals, for example, then choice dominance would favor one-boxing in both Newcomb's Problem and the Transparent Newcomb Problem. So really, a better name for choice dominance would be CDT choice dominance, or causal choice dominance. Since FDT/functional choice dominance also exists, you do not need to give up on choice dominance in order to support one-boxing; instead, you can simply change which version of choice dominance you use. Thus, this post's defense of two-boxers is much weaker than it first appears.

Note that the above is essentially just a restatement/summary of an argument from the paper Functional Decision Theory: A New Theory of Instrumental Rationality rather than something original to me.

https://arxiv.org/pdf/1710.05060#page=19
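
To make the contrast concrete, here is a minimal sketch in Python (my own illustration using the standard Newcomb payoffs; the helper name payoff is just for exposition) of how the two readings of the counterfactual change which option dominates:

# Standard Newcomb payoffs: box A always holds $1,000; box B holds $1,000,000
# if and only if the predictor expected one-boxing.
BOX_A = 1_000
BOX_B_FULL = 1_000_000

def payoff(choice, prediction):
    """Payout given the agent's choice and the predictor's earlier prediction."""
    box_b = BOX_B_FULL if prediction == "one-box" else 0
    return box_b + (BOX_A if choice == "two-box" else 0)

# CDT counterfactual: hold the prediction fixed and vary only the choice.
# Two-boxing gains $1,000 in every row, so causal choice dominance favors it.
for prediction in ("one-box", "two-box"):
    print(prediction, payoff("one-box", prediction), payoff("two-box", prediction))

# FDT counterfactual: the predictor models your decision procedure, so changing
# the procedure's output also changes the prediction. Comparing the consistent
# rows, one-boxing wins, so functional choice dominance favors one-boxing.
for choice in ("one-box", "two-box"):
    print(choice, payoff(choice, prediction=choice))

Which of these two comparisons counts as "dominance" is exactly what the choice between CDT and FDT counterfactuals settles.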

NotPeerReviewed

I'm fairly certain that polarization on this issue is not actually related to disagreements about decision theory, but rather to different intuitions about whether the idea of an "oracle who can see the future" is coherent and how one should engage with thought experiments that seem unrealistic. The hypothetical situation is usually formulated in such a way that one-boxing is "correct" by definition. But it's extremely difficult for me to engage with the problem in that way, because it runs counter to the ways I think causality and human psychology work.

If I find myself in a situation where I feel like I have a psychologically "live" choice as to whether to one-box or two-box, my sense of human psychology is that it probably means the factors influencing the choice are too chaotic for anyone to predict what I will choose. So if I am asked to imagine myself in that situation, how do I do that? Do I imagine a scenario where someone has successfully convinced me that the oracle is infallible, which also means they have convinced me to abandon my basic intuitions about causality and psychology? It seems like I have to do that, but how could I possibly imagine what I would do in such an alien scenario?

I realize that, the way the scenario is typically described, one-boxing is the "correct" answer. But I feel more emotional affinity for the two-box answer, because my temptation is to respond to the scenario itself by thinking "screw the assumptions of this stupid rigged scenario."

edit: I just noticed Kaiser Basileus's comment below. Yeah, that.
