Continued From: Part 1
The Predictor: (a) How accurate is the Predictor?
We know that the Predictor is “highly successful” or “reliable,” but just how successful or reliable the Predictor is may make a difference in what we choose to do. For those who are fairly risk-averse, only a Predictor with extremely high reliability will convince them to choose the opaque box and leave the visible $10,000 on the table. In fact, if someone is desperate enough and must have $10,000, only a 100 percent reliable Predictor (and perhaps not even that) could convince this person to one-box. [9] Thus, a risk-averse person would likely two-box. Conversely, for those who do not mind taking a risk, a lower reliability may be tolerable enough to make one-boxing attractive. Thus, there appears to be no single rational answer as to what someone should do when confronted with this choice. Unless we further specify the problem to fix the motivations and desires of the person choosing (i.e., you), and perhaps fix the reliability of the Predictor to a specific degree, we cannot determine which choice is strictly the rational one. We would be left with a conditional solution at best: you should one-box if you are in circumstances x, and you should two-box if you are in circumstances not-x.
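To see how reliability interacts with these attitudes, it helps to put numbers on the choice. The sketch below computes expected payouts as a function of the Predictor’s reliability p, read (in the evidential spirit of the problem) as the probability that the prediction matches whatever you choose; the payoff amounts come from the formulation above, and everything else is illustrative:

```python
# Expected payout of each choice, given Predictor reliability p,
# where p is the probability that the prediction matches the choice.

def ev_one_box(p: float) -> float:
    # With probability p the Predictor foresaw one-boxing: $1,000,000.
    # Otherwise the opaque box is empty: $0.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # With probability p the Predictor foresaw two-boxing: $10,000 only.
    # Otherwise both boxes pay out: $1,010,000.
    return p * 10_000 + (1 - p) * 1_010_000

for p in (0.505, 0.51, 0.90, 0.99):
    print(f"p={p}: one-box ${ev_one_box(p):,.0f}  two-box ${ev_two_box(p):,.0f}")
# Break-even at p = 1,010,000 / 2,000,000 = 0.505; above it, one-boxing
# has the greater expected payout.
```

Even so, expected payout is not the whole story: a sufficiently risk-averse or desperate chooser may rationally prefer the guaranteed $10,000 at any p short of 1, which is just the conditionality argued for above.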
Interestingly, Burgess notices this problem and, in effect, concedes it. Taking the Predictor to be fallible, he writes that
By committing yourself to one-boxing you deny yourself the chance, however small, of walking away with $1,001,000. Given that committing yourself to one-boxing practically guarantees that you will gain $1[million], this fact is not likely to concern many people. Still, there may be some strange souls who would be concerned. Moreover, because of the possibility of such people, it should be acknowledged that this strategy for the first stage will not necessarily be rational for everyone. Consider the case of someone who, however perverse it may seem, is extremely keen to gain $[1,010,000] rather than 'merely' $1[million]. If sufficiently keen, he would presumably be willing to put the potential $1[million] at risk in order to gain the $[1,010,000]. For such a character, a different sort of strategy in the first stage would be rational… Again, such characters are presumably very rare. The strategy of committing yourself to one-boxing while in the first stage may still therefore be said to be rational for the vast majority of people. (Burgess, “Unqualified” 282)

By writing this, Burgess concedes that there is no single rational solution that will apply to everyone, for what is rational will depend on the chooser’s desires and goals. Perhaps this is not such a problem for Burgess if such people are rare and “perverse.” But such people may be fairly common. For example, many people are in debt, and a guaranteed $10,000 would cover their debt completely. In such a case, it seems rational to take the $10,000 instead of risking the possibility of getting $0. Or perhaps someone has a debt of $1,010,000. If anything less than this amount will lead to financial ruin, then it is rational to two-box and hope that the Predictor has made a mistake. Thus, it may not be as commonly rational to one-box as Burgess thinks, and if not, his argument is not as strong as he needs it to be.
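A toy model makes these debtor cases vivid. Suppose the chooser cares only about clearing a debt, so that any outcome below that threshold is worthless; this all-or-nothing utility function is my illustration, not Burgess’s:

```python
# Probability of clearing a debt under each choice, given Predictor
# reliability p. Outcomes: one-box pays $1,000,000 w.p. p, else $0;
# two-box pays $1,010,000 w.p. (1 - p), else $10,000.

def p_clears_debt(choice: str, debt: int, p: float) -> float:
    outcomes = {
        "one-box": [(p, 1_000_000), (1 - p, 0)],
        "two-box": [(1 - p, 1_010_000), (p, 10_000)],
    }[choice]
    return sum(prob for prob, cash in outcomes if cash >= debt)

p = 0.99
for debt in (10_000, 1_010_000):
    print(f"debt ${debt:,}:",
          f"one-box {p_clears_debt('one-box', debt, p):.2f},",
          f"two-box {p_clears_debt('two-box', debt, p):.2f}")
# debt $10,000:    one-box 0.99, two-box 1.00
# debt $1,010,000: one-box 0.00, two-box 0.01
```

By this chooser’s own lights, two-boxing wins in both cases, even against a 99 percent reliable Predictor — which is why the formulation below must fix the chooser’s aim explicitly.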
In order to address these issues, we need to revise condition 4 to specify the Predictor’s reliability. For simplicity’s sake, we should choose between the following:
4*. The Predictor is highly, but not perfectly, reliable.

4**. The Predictor is perfectly reliable.

I think that 4* is the more interesting scenario. If the Predictor is perfectly reliable, then most people agree that you should one-box. In this case, there are only two actual possibilities [10]: either you one-box and gain $1,000,000 or you two-box and gain $10,000, and so you should one-box. Furthermore, a scenario in which the Predictor is perfectly accurate is highly implausible and undermines the believability that such a situation is actually possible. So 4* [11] seems to be a more charitable interpretation of the paradox.
However, 4* itself contains an ambiguity that needs to be discussed. The Predictor can be highly reliable in different ways, and some of these are not legitimate specifications of the problem. Suppose that 90 percent of the population will choose two-boxing. The Predictor can then simply predict two-boxing every time and maintain 90 percent reliability. But this would mean that the Predictor is 100 percent reliable when a person chooses two-boxing and 0 percent reliable when a person chooses one-boxing (the simulation after 4*** below makes this concrete). This is not very impressive, and it is also doubtful that the population of choosers would favor one choice over the other to the degree that would make this possible. Instead, a better interpretation of the problem is that the Predictor is highly reliable with respect to both choices, that is, it makes a very reliable prediction when choosers choose to one-box and a very reliable prediction when choosers choose to two-box. So 4* should be modified in the following way:
4***. The Predictor is highly, but not perfectly, reliable, with respect to both one-boxing and two-boxing choices.

As such, the Predictor’s prediction is sensitive to facts about you individually, and not just facts about choosers in general.
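Here is the simulation promised above: a sketch, assuming the example’s 90/10 split, of how a base-rate “predictor” that always says “two-box” scores an impressive overall hit rate while being worthless on one-boxers:

```python
import random

random.seed(0)
# Assumed population: 90% two-boxers, 10% one-boxers.
choices = ["two-box" if random.random() < 0.9 else "one-box"
           for _ in range(100_000)]
predictions = ["two-box"] * len(choices)  # base-rate strategy: never varies

def accuracy(pred, actual, among=None):
    # Hit rate, optionally restricted to people who actually chose `among`.
    pairs = [(p, a) for p, a in zip(pred, actual)
             if among is None or a == among]
    return sum(p == a for p, a in pairs) / len(pairs)

print("overall:      ", accuracy(predictions, choices))             # ~0.90
print("on two-boxers:", accuracy(predictions, choices, "two-box"))  # 1.0
print("on one-boxers:", accuracy(predictions, choices, "one-box"))  # 0.0
```

The overall number looks like high reliability, but the conditional numbers expose the trick; 4*** demands that both conditional reliabilities be high.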
We also need to add another condition to the basic formulation in order to address your aim in making your decision:
5. Your choice to one-box or two-box is based on your desire to maximize the money you acquire. [12]

This will rule out taking the $10,000 simply because it is guaranteed. Instead, you are making a choice with the purpose of achieving the maximum payout you think is possible.
However, 5 is not an adequate specification of your desires and motivations. We are assuming that such desires and motivations will determine the choice that you make. But there are two problems with this. First, your desires and motivations may not determine you to act. Supposing that you have libertarian freedom, you are free to act contrary to your desires. However, for the purposes of the problem, I do not think we need to take a stance on whether you are determined to act or whether there is indeterminacy in your action. We have already assumed that the Predictor is not perfectly reliable. The error in its prediction can be attributed either to a flaw in the Predictor (e.g., a lack of comprehensive information about you) or to indeterminacy in your choice. Either way, it is still the case that the Predictor is highly reliable; it rarely makes a mistake. Whether this rare mistake occurs through the Predictor’s own fault or because you are only heavily influenced, but not determined, by your desires and motivations is something we need not take a stance on. Second, the time frame in which you make the decision is unspecified. Do you make a decision based on your gut reaction, or are you permitted to deliberate about the answer? Because gut reactions are not likely to be well considered, permitting you to deliberate about your choice increases the likelihood that you reason rationally. Thus, 5 can be revised as follows:
5*. Your choice to one-box or two-box is based on (i) your desire to maximize the money you acquire and (ii) a long process of reflective deliberation.
With 1*, 2, 3*, 4***, and 5*, we can turn to the next issue.
The Predictor: (b) How often has the Predictor made a prediction in the past?
If the Predictor is reliable but has only made a few predictions, it may be a matter of luck that the Predictor has usually been correct. As the number of previous predictions increases, the possibility that the Predictor is simply lucky decreases, such that when hundreds and thousands of previous predictions have been made with great reliability, one can be confident that the prediction is not based on luck. It seems that the original intention of the problem is that the prediction is not lucky; a lucky correct answer is a guess, not a prediction. Furthermore, if it were lucky, then it seems that one should obviously two-box, because the Predictor has only a 50-50 chance of guessing correctly (however lucky it has been so far), and in this circumstance two-boxing not only dominates but yields more expected utility: 0.5 × $1,010,000 + 0.5 × $10,000 = $510,000, as against 0.5 × $1,000,000 = $500,000 for one-boxing. Thus, it seems we should specify the problem such that a sufficient number of trials have previously been run so as to convince you that the Predictor is not merely lucky. So we add this condition:
6. You know that the Predictor has made a sufficient number of previous predictions so as to convince you that its predictions are not a matter of luck.

But if the prediction is not a matter of luck, then how is it made? We turn now to the final and most controversial and crucial aspect of the paradox.
The Predictor: (c) What is the method by which the Predictor makes a prediction?
The original formulation of the problem does not say how the Predictor makes its prediction, and yet, many people take the “apparent link between one’s choice and the previously determined contents of the second box” to be the “central feature” and cause of the paradox (Slezak 281). Many possibilities have been offered, of which I consider the main three: (i) backward causation, (ii) trickery, and (iii) informed prediction.
Backward causation, while a logical (and perhaps physical) possibility, does not seem to be an appropriate specification of the predictive method. If your decision backwardly caused the Predictor’s prediction, then the Predictor’s prediction is not really a prediction but a report of what actually occurred (though in the future). [13] Furthermore, since we have already specified that the Predictor is not infallible, backward causation cannot explain the Predictor’s success, for if your choice backwardly caused the prediction, it would infallibly produce the correct prediction every time. [14] Finally, backward causation is not a reasonable or relevant possibility in the actual world. The natural laws of the actual world do not include exceptions for backward causation. Thus, backward causation is not a reasonable specification of the predictive method.
Mackie suggests that trickery may be involved (Mackie 217). The scenario is rigged, perhaps so that when you take both boxes, a trap door on the bottom of the opaque box is released and the $1,000,000 disappears before you can open the box. But this and other similar scenarios involving trickery do not seem to be reasonable interpretations of the problem. If there were trickery, then this would not really be a philosophical paradox at all, but a carnival game, and not worthy of philosophical discussion. [15] We also have no reason to assume that there is any trickery going on, for it is not suggested by the bare formulation of the problem and it is contrary to the spirit of the problem. Similar criticisms can be made against Mackie’s suggestion that a hypnotist hypnotizes you to make one choice or the other (Mackie 218). This, too, is not a reasonable interpretation of the problem: there is no real prediction here, nor is there a real choice, for you are being controlled to do one thing or another. Also unreasonable is Mackie’s suggestion that the Predictor might be clairvoyant (Mackie 221). It is doubtful that such a power exists in the actual world, and even if it did, it would be so unusual that you would not be justified in taking the Predictor to have it. Thus, trickery and anything resembling trickery must be excluded as a genuine interpretation of the problem, because they are not reasonable, relevant, or charitable specifications of the problem.
This leads us to the third, and most plausible, explanation of the Predictor’s success: an informed prediction or common cause approach. Mackie suggests that the Predictor could be a psychologist who can tell how you will reason and so can predict what you will choose. [16] Along similar lines, Bach supposes that the Predictor “gathers detailed information about you and plugs it into a high-powered psychological theory” (Bach 410). Sainsbury claims that “the Predictor bases his decision on general laws, together with particular past facts. These might all be physical, or they might be psychological” (Sainsbury 75). Finally, Burgess believes that the Predictor draws on information gained through a brain scan: “the Predictor uses this information as a basis for his prediction, and then uses this prediction to decide whether to place the $1[million] in the opaque box. Your [brainstate]… can therefore be regarded as a common cause of both your decision and the [Predictor’s] decision” (Burgess 326). In all of these explanations, the Predictor gathers information about you (e.g., your desires, motivations, tendencies, beliefs) and makes a prediction based on this information. Since your decision will also be (largely) based on your desires, motivations, tendencies, beliefs, etc., you will likely make the choice that the Predictor predicted you would make.
I agree with Burgess that the common cause or informed prediction explanation is the only “realistic alternative” that is true to the intentions of the paradox (Burgess, “Conditional” 329). First, as we have already seen, there does not seem to be another realistic way to explain how the Predictor makes its predictions. Second, such a method does appear to be realistic in that it is possible in the actual world as we know it. While psychological theories are not yet accurate enough to make extremely reliable and informed predictions about individuals, it is not implausible that, in the future, computers with vast amounts of personal data about you will be able to make such predictions. Even now, psychologists and economists can make modest predictions about individual behavior. [17] Thus, we should take the Predictor to make its predictions on this basis (a toy illustration follows condition 7 below). We can therefore add a final condition:
7. The Predictor made its prediction using extremely detailed information about you (e.g., your desires, motivations, tendencies, beliefs) as a basis.
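To make the common-cause picture concrete, here is a minimal sketch. Everything in it is hypothetical — the profile features, the weights, and the noise term are my illustration, not anything from Burgess, Bach, or Sainsbury. The point is structural: prediction and choice both depend on the same dispositional profile, so they correlate without either causing the other, and the noise in the chooser keeps the Predictor short of perfection, as 4*** requires:

```python
import random

random.seed(1)

FEATURES = ("trusts_evidence", "risk_aversion", "patience")

def disposition_score(profile: dict) -> float:
    # The common cause: facts about you that both parties "read off."
    # Positive scores lean toward one-boxing; the weights are made up.
    return (1.5 * profile["trusts_evidence"]
            - 1.0 * profile["risk_aversion"]
            + 0.5 * profile["patience"])

def predictor(profile: dict) -> str:
    # The Predictor applies its psychological theory to your profile.
    return "one-box" if disposition_score(profile) > 0 else "two-box"

def chooser(profile: dict) -> str:
    # You deliberate from the very same dispositions, plus a little
    # noise (whim, indeterminacy) -- the source of the rare mistakes.
    noisy = disposition_score(profile) + random.gauss(0, 0.3)
    return "one-box" if noisy > 0 else "two-box"

people = [{f: random.uniform(-1, 1) for f in FEATURES}
          for _ in range(100_000)]
hits = sum(predictor(pr) == chooser(pr) for pr in people)
print(f"agreement: {hits / len(people):.3f}")  # high, but below 1.0
```

No backward causation and no trickery here: the prediction is ordinary inference from a common cause, and the noise term is exactly the room for error that 4*** leaves open.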
Summary so far:
This concludes my specification of the problem. [18] To recap, I believe Newcomb’s Paradox to involve the following seven conditions:
1*. There are two boxes: the transparent box has $10,000 in it that is visible to you; the opaque box has $0 or $1,000,000 in it that is not visible to you.
2. You can choose both boxes or only the opaque box.
3*. If the Predictor predicted that you will take the opaque box, it placed $1,000,000 in the opaque box; if it predicted that you will take both boxes, it placed $0 in the opaque box.
4***. The Predictor is highly, but not perfectly, reliable, with respect to both one-boxing and two-boxing choices.
5*. Your choice to one-box or two-box is based on (i) your desire to maximize the money you acquire and (ii) a long process of reflective deliberation.
6. You know that the Predictor has made a sufficient number of previous predictions so as to convince you that its predictions are not a matter of luck.
7. The Predictor made its prediction using extremely detailed information about you (e.g., your desires, motivations, tendencies, beliefs) as a basis.
Is this set of conditions sufficient to determine a general solution? Is it consistent so as to avoid contradiction or impossibility? We turn now to the no-boxing responses to the paradox.