Continued From: Part 2
No-Box Solutions: (i) Underdetermined
No-boxers claim that the paradox is either (i) underdetermined or (ii) overdetermined. We shall start with the first claim. (1) Kavka raises the worry that, since the problem does not explicitly state that the Predictor bases its prediction on a causal relationship, the relationship may be merely accidental (Kavka 273). But whether the relationship is accidental or causal will affect which possible worlds count as closest to the actual world, and hence, which possibilities are relevant to our assessment of outcomes. [19] For example, “likeness of causal laws generally takes precedence over likeness of particular events. But it is highly doubtful that likeness of coincidental correlations takes precedence over likeness of particular events” (Kavka 274). Thus, we will use one set of possible worlds to evaluate modality if the relationship is causal and another set if the relationship is accidental. Since the problem does not tell us the nature of the relationship, we cannot choose either set, and so the problem is underdetermined. However, for reasons already stated, the problem suggests that we should regard the relationship as non-accidental, and hence as causal, along the lines of the informed prediction. So Kavka’s skeptical worry is initially met.
Kavka raises a further objection, arguing that, even if the relationship is causal, the same background conditions that obtained in prior predictions must continue to be present in order to support this causal relationship in our current decision. “Not knowing how causation operates in the game, can we safely make this assumption?” (Kavka 273). Kavka clearly believes that we cannot, and so the problem remains underdetermined. However, we have already specified the causal relationship between your choice and the Predictor’s prediction, namely that both are caused by your desires, motivations, tendencies, and beliefs. Furthermore, the problem does not invite us to assume that you are in a relevantly different situation from previous trials. Thus, every necessary background condition and causal relationship may reasonably be held to be fixed from previous trials. Of course, there is the skeptical possibility that the causal laws or background conditions may suddenly change or be very different in some possible worlds. But given that we are assessing this problem in the actual world (or in possible worlds very close to the actual world), and since the specification does not depend on any unusual background conditions or causal laws, we should take for granted that the laws of nature and required background conditions will continue to hold. Thus, I do not think that Kavka’s worries are reasonable or charitable.
(2) Maitzen and Wilson believe that the problem is “ill-formed” and underdetermined in such a way that the underdetermination “blocks the very set-up of the problem[,…] regardless of variations in such things as the Predictor’s degree of reliability, the basis on which the prediction is made, or the amount of money in each box” (Maitzen and Wilson 151). The problem involves a “hidden” and “vicious” [20] regress that makes it impossible for anyone to even understand the problem, much less solve it. The regress is as follows. When presented with the problem, you can be asked: “how many boxes will you take?” You may respond that it depends on the circumstances. Which circumstances? “The answer, of course, is ‘circumstances in which you believe that the opaque box’s contents depend on the Predictor’s prediction of how many boxes you will take.’” (Maitzen and Wilson 153). But what circumstances are those? Well, ‘how many boxes you will take’ itself depends on what you believe the Predictor to have predicted about how many boxes you will take. Thus, the circumstances that determine how many boxes you will take are those ‘circumstances in which you believe that the opaque box’s contents depend on the Predictor’s prediction of how many boxes you will take in circumstances in which…’ and so on ad infinitum. Consequently, we have an endless regress.
One can trivially respond that the contents of the box do not depend on the Predictor’s prediction, and so choose two-boxing. But then “Newcomb’s problem has a trivial two-box solution which deprives the problem of any interest” (Maitzen and Wilson 154-5). Yet the contents of the box are not fixed randomly, as we have already argued. Indeed, the problem explicitly declares that the contents are fixed by the Predictor in accordance with its prediction, so the trivial two-box response does not work. And so we are left with a regress. Thus, “the circumstances of a Newcomb’s choice turn out to be impossible to describe in finitely many words. Since none of us can understand an infinitely long description, none of us can understand the circumstances which allegedly define a Newcomb’s choice” (Maitzen and Wilson 154-5).
I believe that one can respond by refusing to accept their infinite and self-referential description of the circumstances as legitimate. So long as there is a finite and non-self-referential way of describing and specifying the circumstances, their objection fails. I believe that the problem as specified here meets this criterion: the specification is finite and involves no self-reference, and so it is fully understandable. When asked about the circumstances of your choice, you can simply point to the criteria already given as specifying those circumstances. On the basis of this specification, you can appeal to either the dominance principle or calculations of expected utility [21] to justify your decision one way or the other and thereby make a (debatably) rational choice. Maitzen and Wilson can choose to describe the circumstances in a way that leads to a regress, but the problem need not be so described, and so it need not be incomprehensible or underdetermined in this way. Consequently, Maitzen and Wilson’s argument fails.
No-Box Solutions: (ii) Overdetermined
Having reviewed and rejected the underdetermined no-boxing solutions, I turn now to the overdetermined responses. (1) Slezak takes the paradox to be “perfectly clear, fully specified but formally paradoxical” (Slezak 285). However, he believes that the logical features of the paradox are overspecified and lead to a contradiction. He writes, “it is not that we can not understand the circumstances presupposed in the problem; rather, when we do understand them properly we recognize the logical incoherence of the problem and the pointlessness of the choice” (Slezak 295). Slezak locates the source of the paradox in the self-referential nature of the choice. As such, it is similar in structure to the liar paradox and leads to a contradiction (Slezak 295, 296). When faced with the choice, Slezak believes we have the propositions
(x) I choose (a)
(y) The Predictor predicts (b)
where (a) is my decision to one-box or two-box and (b) is the Predictor’s prediction of one-boxing or two-boxing. Since I hope to outsmart the Predictor, I two-box and hope that the Predictor predicted one-boxing. This means that I am choosing the opposite of what the Predictor predicted, or
(x) I choose ~(y)
However, since the Predictor is not wrong in its predictions, the Predictor predicts whatever I choose, or
(y) The Predictor predicts (x)
So we can substitute and get
(x) I choose ~(The Predictor predicts (x))
Since the Predictor is going to get the correct prediction, we can substitute (The Predictor predicts (x)) for (x) and get
(x) I choose ~(x)
which means “I choose the opposite of whatever I choose” (Slezak 296). Since this is impossible, Slezak believes that the problem involves an internal contradiction. As he concludes, “the [Predictor] acts as an intermediary serving to externalize what is, in fact, a loop in one’s attempt to second-guess one’s self… [T]he [Predictor]… only extends the loop and does not essentially alter the self-contradictory nature of the decision problem” (Slezak 297).
I believe that Slezak’s argument can be resisted in several places. First, the argument relies on the claim that the Predictor is always right, and therefore, its prediction is equivalent to your choice. However, the Predictor’s prediction is not equivalent to your choice. The Predictor can get the prediction wrong. Slezak’s argument requires the Predictor to be perfectly reliable, and we have already argued that this is not a charitable interpretation of the problem. Second, suppose that the Predictor is perfectly reliable. The argument only shows that it is impossible to choose the opposite of what the Predictor predicts, and so you cannot outsmart the Predictor. There is no contradiction if the Predictor and the chooser are in sync. The argument leads to the simple conclusion that “I choose whatever the Predictor predicts” or the tautology that “I choose whatever I choose.” These are not problematic at all.
Third, the argument itself contains an internal inconsistency. Slezak assumes in one step that you choose the opposite of whatever the Predictor predicted, meaning that you could make the Predictor’s prediction false. But in the next step, he assumes that you did not (indeed, cannot) make the Predictor’s prediction false. Now if the Predictor is not perfectly reliable, then both situations are possible, but not at the same time. They are mutually exclusive premises and so must be conditional if they are used in the same argument. However, in Slezak’s argument they are not conditional and so it is no wonder that his argument leads to contradiction, for the argument is structurally contradictory. Thus I conclude that Slezak’s argument fails.
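To make the structure of Slezak’s argument and the replies above concrete, here is a minimal sketch on my own formalization (not Slezak’s), assuming for the sake of argument a perfectly reliable Predictor. One-boxing is encoded as True and two-boxing as False, and the helper simply enumerates which combinations of choice and prediction satisfy a given set of constraints.

```python
# A minimal sketch (my own formalization, not Slezak's), assuming a perfectly
# reliable Predictor. One-boxing is True, two-boxing is False.
from itertools import product

def consistent_pairs(constraints):
    """Return all (choice, prediction) pairs satisfying every constraint."""
    return [(c, p) for c, p in product([True, False], repeat=2)
            if all(constraint(c, p) for constraint in constraints)]

# Slezak's setup: I try to choose the opposite of the prediction, while the
# Predictor predicts whatever I choose. No pair satisfies both constraints.
outsmart = [lambda c, p: c == (not p),  # (x) I choose ~(y)
            lambda c, p: p == c]        # (y) The Predictor predicts (x)
print(consistent_pairs(outsmart))       # [] -- Slezak's contradiction

# Drop the attempt to outsmart the Predictor and the contradiction disappears:
# "I choose whatever the Predictor predicts" is satisfied by being in sync.
print(consistent_pairs([lambda c, p: p == c]))  # [(True, True), (False, False)]
```

The constraint set that includes the attempt to outsmart the Predictor has no consistent assignment, which is Slezak’s contradiction; dropping that constraint leaves two perfectly consistent outcomes, which is the point of the second response above.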
(2) Priest takes the Newcomb problem to be a rational dilemma in which one is rationally required to do incompatible things. In this case, this means that “you ought to choose one box, and that you ought to choose both boxes” (Priest 13). He argues that if you choose just the opaque box, then you get whatever is now in the opaque box. If you choose both boxes, then you get whatever is in the opaque box and the extra $10,000. Since choosing both boxes dominates choosing just the opaque box, it is rational to two-box. However, if you choose both boxes, then the Predictor knew that you were going to choose both boxes. So there is $10,000 in the clear box and nothing in the opaque box, and you will get $10,000. If instead you choose just the opaque box, then the Predictor knew that you were going to one-box. So there is $1,000,000 in the opaque box, which is what you will get. Since $1,000,000 > $10,000, you should one-box (Priest 13). Consequently, “one way or the other, one is going to be rationally damned. Ex hypothesi, rationality gives no guidance on the matter - or rather, it gives too much, which comes to the same thing” (Priest 15).
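For concreteness, here is a small sketch (my illustration, not Priest’s) of the dominance half of the dilemma, using the payoffs assumed throughout this paper: $10,000 in the clear box, and $1,000,000 or $0 in the opaque box depending on the prediction.

```python
# Payoff table: (your choice, the prediction) -> what you walk away with.
payoffs = {
    ("one-box", "predicted one-box"): 1_000_000,
    ("one-box", "predicted two-box"): 0,
    ("two-box", "predicted one-box"): 1_010_000,
    ("two-box", "predicted two-box"): 10_000,
}

# Dominance reasoning: holding the prediction fixed, two-boxing is better by
# exactly the $10,000 in the clear box.
for prediction in ("predicted one-box", "predicted two-box"):
    advantage = payoffs[("two-box", prediction)] - payoffs[("one-box", prediction)]
    print(prediction, advantage)  # each line prints 10000
```

The expected-utility reasoning that instead favors one-boxing is the arithmetic of footnote [2]; the dilemma is that both lines of reasoning start from this same payoff table.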
One can respond to Priest in several ways. First, we might agree with Priest that rationality initially recommends two contradictory strategies. But this may be due to the fact that rationality is a cluster concept that is “equally associated with both the evidential and the causal criteria, since, in the formation of that notion, circumstances in which their dictates would diverge were not anticipated. Thus, what to say about such circumstances is not determined by our present idea and involves some extension of it” (Horwich 443). Horwich believes that one can resolve the conflict within the concept of rationality and choose one principle over the other as being the most rational principle to follow. He argues that evidential theory (MEU) is the “more plausible candidate” compared to causal decision theory (and the dominance principle) (Horwich 443-4). [22] Whether he is correct to prefer evidential theory to causal decision theory is a matter for another paper. The point here is that one need not take rationality to be inherently contradictory. Instead, while both principles are rational to some degree, one principle may be more rational than the other, and hence, it is the principle one rationally ought to follow. Thus, there is no rational dilemma.
Second, what we should do may depend on which of these conditionals are true. But Priest’s argument relies on taking them all to be true at the same time. As Burgess notices, one of Priest’s conditionals implies that your psychological state was that of a one-boxer, while the other implies that your psychological state was that of a two-boxer (Burgess, “Conditional” 333). But one cannot have both a one-boxing and a two-boxing psychological state at the same time. Consequently, “it must not be imagined that if you one-box you will become rich (because your [brainscan] was that of a one-boxer), while also imagining that if you two-box you will remain poor (because your [brainscan] was that of a two-boxer)” (Burgess, “Conditional” 333). [23] Since not all of these conditionals can be true at the same time, and since this is necessary to generate the rational dilemma, Priest’s argument fails.
(3) Mackie’s article surveys many possible specifications and determines that some lead to one-boxing while others lead to two-boxing, yet all diverge “in one way or another from what it is natural to take as the intended specifications” (Mackie 222). The one-boxing solutions rely on trickery, backward causation, repeated plays, or “a choice not about what to take on a particular occasion but about what sort of character to cultivate in advance” (Mackie 222). We have already rejected trickery and backward causation as uncharitable specifications of the problem. Since you only have one opportunity to make this choice, the repeated-plays scenario can also be rejected. And because you are in the second stage of the problem, it is too late to cultivate a one-boxing psychology in order to influence the Predictor’s prediction. Thus, “these situations are all off-colour in some respect” (Mackie 222). However, the two-boxing solutions are also “off-colour.” Mackie claims that solutions that recommend two-boxing are cases “where the player does not really have an open choice…, or where the seer does not really have predictive powers, and his past successes must be set aside as coincidences” (Mackie 222). Consequently,
[t]here is no conceivable kind of situation that satisfies at once the whole of what it is natural to take as the intended specification of the paradox… While the bare bones of the formulation of the paradox are conceivably satisfiable, what they are intended to suggest is not. The paradoxical situation, in its intended interpretation, is not merely of a kind that we are most unlikely to encounter; it is of a kind that simply cannot occur. (Mackie 223)
Thus, for Mackie, the paradox is overdetermined and cannot be accepted as legitimate.
I agree with Mackie that taking the method of prediction to be purely coincidental is illegitimate, and so the Predictor must have genuinely predictive powers. However, I disagree with Mackie that in this situation the chooser does not really have an open choice, for reasons already stated. In fact, I find it very plausible that a Predictor, using extremely detailed information about you, could predict whether you one-box or two-box with a high degree of reliability and without relying on luck, even if your actions are not determined. We already know, through economics, psychology, and sociology, that people’s actions are predictable to a large degree using, among other things, facts about their beliefs, desires, family history, and socioeconomic status. Libertarian freedom is only in danger if people’s actions are completely predictable. However, we have already rejected the specification that the Predictor is perfectly reliable and hence, that your choice is perfectly predictable. All that needs to be true is that your choice is highly predictable based on facts about you, and this seems to be a very plausible possibility. Thus, I conclude to the contrary that this paradox, in its intended interpretation, is of a kind that can occur, and so Mackie’s argument fails.
Conclusion
Having shown that every major attempt to dissolve the paradox by arguing that it is either underdetermined or overdetermined has failed, I conclude that the paradox is legitimate. Its intended specification is clear and understandable and involves no contradiction or serious implausibility. However, though I believe the paradox to be legitimate, I am still unsure as to the choice one should make. There is a sense in which both choices are rational and irrational. The two-boxing choice is rational because one cannot now actually change the contents of the box, and so one might as well two-box. And yet two-boxers must squarely face the fact that if they two-box, they will likely only get $10,000. The Predictor will likely anticipate whatever reasoning they use to make their decision, and so their decision to two-box will just confirm the Predictor’s prediction. Consequently, they will not maximize the money they obtain. Similarly, one-boxers must believe that their choice, though not a cause of the prediction, is correlated with it because both derive from the same common cause (e.g., their beliefs, desires, motivations, and tendencies). Your best evidence as to what is now in the opaque box is your decision to one-box or two-box, and a one-boxing decision will be vindicated by finding the $1,000,000 that was there all along, confirming the highly reliable Predictor’s ability to predict your choice. So in a sense, you do now have some influence over the contents of the box.
Is this irrational? Two-boxers will think so. As Gibbard and Harper claim, “the moral of the paradox [is that if] someone is very good at predicting behavior and rewards predicted irrationality richly, then irrationality will be richly rewarded.” [24] However, one-boxers will counter that
one can only be amused by those advocates of [two-boxing] who… realize that takers of [both boxes] almost always get but $[10,000] whereas takers of [one box] almost always get $[1,000,000], and proceed to bemoan the fact that rational people do so much worse than irrational ones. Despite their logical scruples, they seem to have a curiously low standard of what constitutes a good argument, at least in the context of Newcomb’s Problem. Evidently they would rather be right than rich. (Bach 412)
Which of the two strategies is the most rational I will not argue here, for I am still unsure myself. However, I am sure that the paradox cannot simply be dismissed as a pseudo-problem on the grounds that it is underdetermined or overdetermined. When the problem is specified in the most charitable and reasonable way, Newcomb’s Paradox is a legitimate paradox, though it remains deeply paradoxical. Thus, the ‘no-box’ solution is no solution at all.
Footnotes
[1] (Clark 142)
[2] For example, if the Predictor is accurate 95 percent of the time, then your expected utility for one-boxing is .95*$1,000,000 + .05*$0 = $950,000, which is greater than the expected utility for two-boxing, which is .05*$1,010,000 + .95*$10,000 = $60,000.
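The arithmetic can be checked with a small sketch (mine, not from any of the cited papers), parameterized by the Predictor’s reliability and using the same figures of $1,000,000 in the opaque box and $10,000 in the clear box.

```python
def expected_utilities(reliability):
    """Expected payoff of one-boxing and two-boxing, given the Predictor's reliability."""
    one_box = reliability * 1_000_000 + (1 - reliability) * 0
    two_box = reliability * 10_000 + (1 - reliability) * 1_010_000
    return one_box, two_box

print(expected_utilities(0.95))  # (950000.0, 60000.0), matching the figures above
```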
[3] That is, you will have $10,000 instead of $0 if the Predictor predicted one-box, and $1,010,000 instead of $1,000,000 if the Predictor predicted two-box.
[4] A variation of the problem in which one is explicitly forbidden to further specify the problem may be interesting to consider in its own right, but I will not pursue that variation here.
[5] Consider Burgess’ response on this issue: “To the extent that we are scientifically minded we tend to take little interest in a problem if we are simply told to accept that certain aspects of it are essentially inexplicable. On the one hand, if it is acknowledged that the supposedly inexplicable problem is one that could never exist, then questions about that problem are likely to be as interesting and scientific as questions about unicorns, goblins and fairies. And on the other hand, if it is acknowledged that the supposedly inexplicable problem is one that could exist, then we are essentially being told to reject the very assumption that makes the scientific outlook so interesting. For we are being required to renounce the idea that if something can happen, then that something is explicable. In other words, we are being forced to reject the assumption that if a scenario is possible (even if not with today’s technology), then that scenario could, with sufficient knowledge, be explained.” (Burgess, “Conditional” 330-1). I do not assume, like Burgess, that if something is possible, then it can be understood by us. The mind-body problem seems to be, as McGinn argues, an example of an actuality that is beyond comprehensibility. However, I would not forbid attempts to explain how the mind and the body relate. Similarly, I do not think that one should be forbidden from exploring the nature of Newcomb’s paradox.
[6] As Burgess explains it, “Because the first stage precedes the brainscan, it spans a time during which you have an opportunity to influence the nature of your [brainscan]. The significance of this can hardly be understated. By influencing the nature of your [brainscan], you can influence the alien's prediction and, in turn, influence whether or not the $1[million] is placed in the opaque box. After you've been brainscanned and have thus entered the second stage, you are no longer in a position to influence whether or not the $1[million] is placed in the opaque box” (Burgess, “Unqualified” 280).
[7] Consider the SEP entry in the article on causal decision theory: “In Newcomb's Problem an agent may choose either to take an opaque box or to take both the opaque box and a transparent box. The transparent box contains one thousand dollars that the agent plainly sees. The opaque box contains either nothing or one million dollars, depending on a prediction already made. The prediction was about the agent's choice. If the prediction was that the agent will take both boxes, then the opaque box is empty. On the other hand, if the prediction was that the agent will take just the opaque box, then the opaque box contains a million dollars. The prediction is reliable. The agent knows all these features of his decision problem” (Weirich). Notice the past tense and the explicit assertion that the prediction has already been made when you are confronted with the choice. Therefore, contrary to Burgess, you are already in the second stage.
[8] McKay writes that “the deposit in the boxes has already happened, and can no longer be affected by you - or by anyone at all… you cannot affect the prior actions of the Predictor” (McKay 187-8). Maitzen and Wilson also agree (Maitzen and Wilson 157), as does Burgess, who writes that “after you have been brainscanned and have thus entered the second stage, you are no longer in a position to influence whether or not the $1[million] is placed in the opaque box” (Burgess, “Conditional” 336).
[9] Thanks to Tanya Kostochka for pointing this out.
[10] (Ahern 486)
[11] Someone may wish to discuss a solution involving 4**. Again, this may be an interesting variation, but I think it is less true to the original intentions of the paradox.
[12] Many philosophers assume this explicitly in the problem. For example, Maitzen and Wilson write that “the crucial assumption of Newcomb’s problem [is] that you do wish to maximize your winnings” (Maitzen and Wilson 154). Priest also claims that your aim in choosing is “to maximize your financial gain” (Priest 12). As such, I do not take this to be an unfounded addition to the original problem. If someone asserts that the original formulation is underdetermined precisely because the chooser’s desires and attitude toward risk are unstated, then I think one must concede that the original problem is underdetermined. However, it is not underdetermined in a very interesting way, since the paradox no longer arises from the formal features of the problem but from an indeterminacy in what “your” desires and attitudes are. Thus, the “rational” answer will vary from person to person (e.g., if one needs a guaranteed $10,000, it is obviously rational to two-box). However, such a solution trivializes the problem when it seems that no solution should be trivial. This further specification prevents a trivial solution and adheres to what I take to be the original, though unstated, intentions of the problem.
[13] This objection also applies to the foreknowledge possibility. If the Predictor foreknows what will happen or is outside of time and so has already “seen” what will occur in some sense, then the Predictor is not really predicting your choice, but reporting your choice through its distribution of money in the boxes.
[14] Someone may suggest that perhaps the Predictor is infallible, but makes false predictions on purpose, perhaps to give the impression of fallibility. However, the problem intends us to treat the Predictor as trying its best to predict what you will decide to do, and not as engaging in some elaborate form of trickery, so this is an irrelevant and uncharitable response.
[15] As Burgess writes, “if Newcomb’s problem is presented in the supposedly inexplicable manner of the side-show charlatan it is worth discussing only to the extent that such discussion exposes the fraud” (Burgess, “Conditional” 332).
[16] Mackie worries that this interpretation makes “the question ‘What is it reasonable for the player to do?’ […] idle… Each player will rigidly follow his own characteristic style of reasoning and so do whatever the psychologist-seer has predicted, which may or may not be in accordance with our pseudo-recommendation” (Mackie 219). That is, determinism makes the very notion of a ‘rational choice’ incoherent, for the agent will simply do whatever he or she is determined to do, and the psychologist will predict this. However, a libertarian interpretation of free will is open to us, and a suitable notion of choice can be developed by compatibilist or soft determinist approaches to free will. Furthermore, if the Predictor is not infallible, it is not simply describing what will happen in the future; it is making a genuine prediction that could turn out to be wrong.
[17] Even modest predictions are enough to generate the paradox. Suppose that the Predictor is only a measly 60 percent reliable. Then one-boxing still yields more money than two-boxing: .6*$1,000,000 + .4*$0 = $600,000 > $410,000 = .6*$10,000 + .4*$1,010,000. Such predictive capacities are extremely plausible, and even currently actual in some fields with respect to some questions.
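As an illustrative calculation of my own (the 0.505 figure is not in the cited literature, just derived from the payoffs used here): setting the two expected payoffs equal, p * $1,000,000 = p * $10,000 + (1 - p) * $1,010,000, gives p = 0.505, so any reliability above roughly 50.5 percent already favors one-boxing on expected utility, well below the modest 60 percent considered here.

```python
# Break-even reliability for the payoffs used in this paper (my calculation).
break_even = 1_010_000 / (1_000_000 - 10_000 + 1_010_000)
print(break_even)  # 0.505
```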
[18] Others may find further specifications necessary, but these specifications are those I take to be legitimate and necessary in order to assess the claims of no-boxers and to determine that the no-boxing solution is not the correct response to the problem.
[19] Phyllis McKay’s objection is similar to Kavka’s and can be answered in the same way. She claims that since the problem does not specify whether the relationship between your choice and the prediction is causal or not, one cannot choose: “If you still think there must be no causal connection since the action of the Predictor really is in the past, you should two-box. Alternatively, if you think there probably is some cheating going on undetected by you, then you think there probably is a causal connection, and you should one-box” (McKay 188). Having specified the relationship as involving a common cause, McKay’s criticism no longer applies.
[20] (Burgess, “Conditional” 321)
[21] As Burgess writes, “you simply do not need to predict your own choice… In fact you can calculate conditional expected payoff values for each of your two options even when you have no idea what you will decide to do” (Burgess, “Conditional” 328).
[22] He claims that, first, the causal criterion is not “uniform,” requiring us to divide probabilistic states into parts that are “causally independent of the choice, and those parts… that are not” (Horwich 444). The evidential criterion is comparatively simple and uniform. Second, “there are circumstances in which every single one of an agent’s choices will be branded by the causal rule as irrational” (Horwich 445). Third, the causal theory “embodies an arbitrary time-bias,” insisting that causes temporally precede effects (Horwich 446). Fourth, and finally, causal theorists are committed to conflicting desires and hopes: “they recommend taking the $[10,000] in Newcomb’s situation, they also recommend attempting to make oneself into the sort of person who will decline it… This means that you simultaneously have a pro-attitude towards the agent’s hoping to X rather than Y, and a pro-attitude towards Y rather than X actually being done” (Horwich 448).
[23] More fully, “When it is said that the conditional expected payoff value for one-boxing is high, it is implicitly assumed (for reasons of causal coherence and plausibility) that your brainstate at the time of the brainscan was that of a one-boxer. But when it is then said that the conditional expected payoff value for two-boxing is low, it is implicitly assumed (again for reasons of causal coherence and plausibility) that your brainstate at the time of the brainscan was that of a two-boxer. Either of these two assumptions could be adopted, but no one can consistently adopt both” (Burgess, “Conditional” 333).
[24] Quoted in (Slezak 281).
Works Cited and Consulted
Bach, Kent. “Newcomb’s Problem: The $1,000,000 Solution.” Canadian Journal of Philosophy
17.2 (1987): 409-425. JSTOR. Web. 19 Jan. 2012.
Bar-Hillel, Maya, and Avishai Margalit. “Newcomb’s Paradox Revisited.” The British Journal
for the Philosophy of Science 23.4 (1972): 295-304. JSTOR. Web. 19 Jan. 2012.
Burgess, Simon. “Newcomb’s Problem and Its Conditional Evidence: A Common Cause of
Confusion.” Synthese 184.1 (2010): 319-339. Springer Link. Web. 19 Jan. 2012.
---. “The Newcomb Problem: An Unqualified Resolution.” Synthese 138.2 (2004): 261-287.
JSTOR. Web. 19 Jan. 2012.
Clark, Michael. Paradoxes from A to Z. 2nd ed. New York: Routledge, 2007. Print.
Horwich, Paul. “Decision Theory in Light of Newcomb's Problem.” Philosophy of Science 52.3
(1985): 431-450. JSTOR. Web. 19 Jan. 2012.
Kavka, Gregory. “What is Newcomb’s Problem About?” American Philosophical Quarterly
17.4 (1980): 271-280. JSTOR. Web. 19 Jan. 2012.
Mackie, J.L. “Newcomb’s Paradox and the Direction of Causation.” Canadian Journal of
Philosophy 7.2 (1977): 213-225. JSTOR. Web. 19 Jan. 2012.
Maitzen, Stephen, and Garnett Wilson. “Newcomb’s Hidden Regress.” Theory and Decision
54.1 (2003): 151-162. JSTOR. Web. 19 Jan. 2012.
McKay, Phyllis. “Newcomb’s Problem: The Causalists Get Rich.” Analysis 64.2 (2004): 187-
189. JSTOR. Web. 20 March 2012.
Priest, Graham. “Rational Dilemmas.” Analysis 62.1 (2002): 11-16. JSTOR. Web. 20 March
2012.
Sainsbury, R.M. Paradoxes. 3rd ed. New York: Cambridge, 2009. Print.
Slezak, Peter. “Demons, Deceivers and Liars: Newcomb’s Malin Génie.” Theory and Decision
61.1 (2006): 277-303. Springer Link. Web. 19 Jan. 2012.
Weirich, Paul. “Causal Decision Theory.” Stanford Encyclopedia of Philosophy. Stanford
University, 25 Oct. 2008. Web. 23 April 2012.