Introduction
While there are many versions of Newcomb’s Paradox, a typical bare formulation is along the following lines: there are two boxes in front of you. One, a transparent box, has $10,000 in it that you can see, and the other, an opaque box, has either $0 or $1,000,000 in it. You can choose to take both boxes or you can choose to take only the opaque box. However, a very successful Predictor has made a prediction as to whether you will take the opaque box only (one-box) or both boxes (two-box). In accordance with its prediction, it has placed $0 in the opaque box if it has predicted that you will two-box, or it has placed $1,000,000 in the opaque box if it has predicted that you will one-box. You know all of this information. Do you one-box or two-box, that is, do you take the opaque box only, or do you take both boxes? [1]
Initially, it may seem obvious that you should one-box. If the Predictor is highly successful, it will accurately predict what you do. If you one-box, the Predictor will have placed $1,000,000 in the opaque box and you will receive $1,000,000. If you two-box, the Predictor will have predicted this and you will only get $10,000 because the opaque box will be empty. Knowing that $1,000,000 is more than $10,000, it seems the rational thing to do is to one-box. However, further reflection leads us to doubt this conclusion. After all, the Predictor has already made its decision and placed (or not placed) money in the opaque box accordingly. Whatever choice you make, it appears that you cannot now influence what the Predictor has already done. If the Predictor placed $0 in the opaque box, you should take both boxes and so gain the visible $10,000. If the Predictor placed $1,000,000 in the opaque box, you should still take both boxes, and so gain $1,000,000 in addition to the visible $10,000. So either way, you should take both boxes, and therefore, the rational thing to do is to two-box.
Thus, we appear to have a conflict of rational principles. On the one hand, we can calculate the expected utility of one-boxing versus two-boxing, and one-boxing yields more expected utility. [2] Since it is rational to “act so as to maximize the benefit you can expect from your action,” it is rational to pick only the opaque box (Sainsbury 70). Following this principle of Maximum Expected Utility (MEU), we should one-box. On the other hand, two-boxing dominates one-boxing. No matter what the Predictor has done, you will always have $10,000 more if you choose both boxes instead of just the opaque box. [3] So you should simply two-box, in accordance with the dominance principle: “whenever there is one option that is better than all others regardless of how the relevant variables turn out [you should choose that option]. … [T]hat dominant option is the rational option” (Burgess, “Conditional” 320). It is thus rational to pick both boxes. Consequently, it appears to be rational to do opposing things.
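To make the conflict concrete, here is a worked version of the two calculations. The accuracy figure is a hypothetical assumption introduced for illustration only; the bare formulation says merely that the Predictor is highly successful. Suppose you credit the Predictor with being right 99% of the time:

$$
\begin{aligned}
EU(\text{one-box}) &= 0.99 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$990{,}000\\
EU(\text{two-box}) &= 0.99 \times \$10{,}000 + 0.01 \times \$1{,}010{,}000 = \$20{,}000
\end{aligned}
$$

On these figures (indeed, on these payoffs, for any assumed accuracy above 50.5%), MEU recommends one-boxing. The dominance comparison, by contrast, holds the Predictor’s action fixed: whether the opaque box contains $0 or $1,000,000, two-boxing yields exactly $10,000 more than one-boxing.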
In response to this conflict, some philosophers have rejected the problem as a legitimate paradox. These philosophers advocate a third alternative: no-box. The no-box solution claims that the problem is ill-formed, being either underdetermined or overdetermined, leading to either ambiguity or internal contradiction. Consequently, “the problem is… merely a pseudo-problem[,]… needing not to be solved but to be dissolved” (Slezak 279). As such, no solution is possible and so neither one-boxing nor two-boxing can be supported. In this paper, I outline these no-boxing arguments and consider possible responses. I conclude that Newcomb’s Paradox is not underdetermined or overdetermined in its most plausible specification, and that, consequently, the ‘no-box’ solution fails. That is, the paradox is a legitimate, though still puzzling, problem.
Methodology
No-boxers claim that the paradox is either underspecified or overspecified, leading either to indeterminacy or to incoherence. In order to assess these claims, we must settle on a methodology for approaching the problem, particularly with respect to the underspecification claim. There are two things to consider here. First, the paradox’s underspecification is a fault only if it is supposed to be fully specified. However, perhaps the problem is asking us to consider what it is rational to do in the face of uncertainty. As Burgess observes, “we all commonly make both decisions and predictions without anything like a complete specification or description of the circumstances involved” (Burgess, “Conditional” 324). So perhaps we are supposed to make a decision without further specification. For example, Kavka suggests the possibility that
a generalization could be directly confirmed by a large number of observations, yet not fit into any known theory. Such a generalization might assert the existence of relationships that are impossible or enormously improbable according to present theory. Or it might entail specific predictions contradicting those of established theory… The strong confirming evidence indicates we should accept the generalization as a genuine causal one on which we can rely for predictions. (Kavka 272-3)
Supposing that we were actually faced with a Newcomb choice in real life, it might be rational to abandon present scientific theory and assume that there is a causal relationship between the chooser’s choice and the Predictor’s prediction. We would not know what sort of causal relationship this is or how this is possible, but maybe this is precisely the situation that Newcomb’s Paradox is asking us to consider. Consequently, we might agree with Bar-Hillel and Margalit that “though we do not assume a causal relationship, there is no better alternative strategy than to behave as if the relationship was, in fact, causal” (Bar-Hillel and Margalit 302-3). On this basis, Bar-Hillel and Margalit opt for one-boxing.
Bach offers a similar approach that disregards the need for specifying the mechanism by which the Predictor makes its prediction. He notices that any argument for one-boxing, whatever its logical merits, will likely lead you to $1,000,000. Similarly, any argument for two-boxing, whatever its logical merits, will likely lead you to $10,000. These claims are based on uncontroversial facts: those who one-box almost always get $1,000,000 while those who two-box almost always get $10,000. And “if there is any argument for [two-boxing] that has reliably led to a payoff of $1M + $[10]K, you have no idea what it is” (Bach 414). Consequently, one should one-box. So there is no need to further specify the problem, for we can answer the problem as it stands. In fact, specifying the problem further may change the problem to be solved. So we must first ask: are we permitted to specify the problem further, where necessary?
I do think that we are permitted to further specify the problem. Bach may be right that we do not need to further specify the problem in order to make a rational decision. However, the problem does not explicitly forbid us from further specifying the problem or pondering the nature of this situation. [4] In fact, the problem, as a philosophical paradox, seems to invite us to consider how this situation is possible. We are supposed to engage in deep reflection upon it, and such reflection is unavoidable. We cannot help ruling out certain possibilities (e.g., magic) if we take this problem seriously, so implicitly we are already further specifying the problem. Thus, we must be permitted to do so.
Also, I think we must further specify the problem. Perhaps the only reasonable specifications will lead us to one-box, in which case, no harm has been done by further specifying the scenario; the specification will have been superfluous. However, legitimate specifications may lead us to differing conclusions, in which case, how we specify the problem will be extremely important. Furthermore, no-boxers take the problem to contain or entail an internal inconsistency or an unreasonable assumption that is hidden by a refusal to further specify the problem. In order to show that the problem does not contain an internal inconsistency and is not merely a science-fictional fantasy, one must attempt to specify the problem in a consistent way. [5] Doing so will either prove that the problem is a pseudo-problem, or it will vindicate those who defend the paradox’s legitimacy.
Having established that we are permitted to and must further specify the problem where necessary, we must consider the second issue: when and how are we permitted to further specify the problem? What constitutes a legitimate or charitable specification? Some specifications are clearly not legitimate or charitable. For example, one could say that it is irrational to two-box because the paradox does not say that the transparent box is not hooked up to a nuclear device that will destroy a major U.S. city. Without ruling out that possibility, we cannot say that it is rational to two-box. Similarly, if you were to take only the opaque box, perhaps a 300-pound anvil would fall on your head and kill you, so it is irrational to one-box as well because the problem does not say that this is not a possibility. Thus, one can easily hypothesize wacky scenarios that yield one result, the other, or perhaps neither.
In order to rule out these and other illegitimate specifications, we must provide criteria for legitimate or charitable specifications. I believe that the relevant alternatives approach to skepticism in epistemology is a fruitful starting place. We may take a specification to be legitimate and charitable only if it presents a possibility that is relevant. A relevant possibility is one that we have a reason for believing to be true. This means that the possible specification should only be considered if it is explicitly suggested or necessarily (or perhaps reasonably) implied by the context or bare formulation of the problem. Another way of saying this is that the specification must represent a possible world that could charitably be taken to be the actual world (or a world very similar to the actual world). Thus, only charitable and reasonable specifications should be considered as potentially posing a problem for (or solution to) the paradox. If only one reasonable specification survives our discussion, then the solution it proposes should be the solution to the problem. If none or many survive (and these lead to different conclusions), then the paradox has no general solution.
The Bare Formulation
Before turning to the arguments for and against no-boxing, we must understand the essential bare conditions that constitute the paradox in its most general form. These are the following:
1. There are two boxes: the transparent box will have $10,000 in it that will be visible to you; the opaque box will have $0 or $1,000,000 in it that will not be visible to you.
1*. There are two boxes: the transparent box has $10,000 in it that is visible to you; the opaque box has $0 or $1,000,000 in it that is not visible to you.
2. You can choose both boxes or only the opaque box.
3. If the Predictor predicts that you will take only the opaque box, it will place $1,000,000 in the opaque box; if it predicts that you will take both boxes, it will place $0 in the opaque box.
3*. If the Predictor predicted that you would take only the opaque box, it placed $1,000,000 in the opaque box; if it predicted that you would take both boxes, it placed $0 in the opaque box.
4. The Predictor is highly successful in making predictions.
Notice that the bare formulation already contains an ambiguity, implicit in the literature, highlighted by the use of 1 and 1*, and 3 and 3*. The ambiguity pertains to when the Predictor makes its prediction: does it make its prediction before or after you are presented with the choice? 1 and 3 correspond to the situation in which you are confronted with the choice before the Predictor has made its prediction. 1* and 3* correspond to the situation in which you are confronted with the choice after the Predictor has made its prediction.
Why does this matter? Most versions of the problem in the literature explicitly assume that the Predictor has already made its prediction when you are presented with the choice. So the contents of the boxes are already fixed. However, many one-boxing arguments implicitly assume to the contrary that the prediction will take place after you have been presented with the choice. This leads to disagreement, because whether you should one-box or two-box depends greatly on whether you face the choice before or after the Predictor has made its prediction.
Burgess, to his credit, is one of the first to notice the two-stage nature of the problem. He writes that “the first stage covers the period before the Predictor has gained the information required for his prediction, the second stage covers the period beyond” (Burgess, “Unqualified” 262). In accordance with the two-stage nature of the problem, Burgess argues that you should two-box in the second stage but one-box in the first stage. In the second stage, the prediction has already been made; you cannot now influence the contents of the boxes, so you should take both boxes. However, in the first stage, the prediction has not been made yet, and you can influence the Predictor, the prediction, and the contents of the boxes. [6] For example, if you can make the Predictor believe that you will one-box, then the Predictor will put $1,000,000 in the opaque box (Burgess, “Unqualified” 280). Since the Predictor is unlikely to be fooled, you should fully commit to one-boxing and therefore one-box when you get to the second stage.
Burgess believes we are all in the first stage. He writes that “we regular earthlings can all consider ourselves to be in the first stage already. Sad though it is to say, it seems doubtful that many of us will be fortunate enough to … proceed to the second stage” (Burgess, “Unqualified” 280). However, he does not argue for this conclusion, and in fact, most formulations of the problem suggest that you are actually in the second stage when you are confronted with the choice. [7] If this is true, then you probably cannot influence the prediction or the contents of the box, as many no-boxers and two-boxers typically claim, and so it is probably rational to two-box. [8] Indeed, Burgess himself claims that “if you are in the second stage you should two-box” (Burgess, “Unqualified” 280). He writes more fully:
When in the second stage,… there is nothing that you can do to affect your [brainscan]. Deciding to one-box would not make your [brainscan] that of a one-boxer, and nor would it make the $[1,000,000] somehow materialize in the opaque box. In that stage, two-boxing is a dominant option and also therefore the rational option. The rationality of two-boxing is also reflected in the fact that when the conditional expected payoff values are calculated correctly, that for two-boxing is $[10,000] greater than that for one-boxing throughout the deliberation process. So again: the dominance principle and the conditional expected payoff values are in perfect harmony; both confirm that two-boxing is the rational option. (Burgess, “Conditional” 336)
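To illustrate Burgess’s point about the conditional expected payoffs (a sketch in my own notation, not his), note that in the second stage the contents of the opaque box are fixed, so let q be your credence, whatever its value, that the $1,000,000 is already there. Then:

$$
\begin{aligned}
EU(\text{one-box}) &= q \times \$1{,}000{,}000\\
EU(\text{two-box}) &= q \times \$1{,}010{,}000 + (1-q) \times \$10{,}000 = EU(\text{one-box}) + \$10{,}000
\end{aligned}
$$

Since the $10,000 advantage holds for every value of q, an expected payoff calculation conditioned on the already-fixed contents rather than on your choice agrees with the dominance principle throughout deliberation, just as Burgess says.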
Thus, the problem as usually formulated uses 1* and 3* instead of 1 and 3, and so I will use 1* and 3* and assume that your choice is made in the second stage.
The bare essentials of the paradox involve 1*, 2, 3*, and 4. Yet this information does not seem to be enough to make an informed and rational decision to one-box or two-box. Importantly, we do not know (a) how accurate the Predictor is, (b) how often the Predictor has made similar predictions, or (c) how the Predictor makes its predictions. We shall take each of these in turn.