Monday, February 29, 2016

The Conditions and Significance of Having a Proper Name (1)

The paper below was written during a Philosophy of Language course I took while completing my MA in Philosophy at Northern Illinois University.  It was submitted on May 11, 2011 and remains as I submitted it (apart from necessary formatting changes).
-------------------------------------------------------------------------------------------------------------------------

Proper names are singular terms that refer uniquely to their referents in virtue of being bestowed upon and attached to these referents.  Consequently, they are available to use over an extended length of time because they continue to refer to an individual even when that individual undergoes a descriptive change.  Common names, on the other hand, denote a class of individuals and are applied to specific individuals on the basis of descriptions. If an individual’s description changes, many common names that were once attributed to the individual may no longer apply to that individual.  As such, proper names are intimately tied to the identity of an individual in a way that a common name is not, which is why in many cultures the proper name is “considered to be part of, or identical with, the soul, self, or personality of its bearer” (Bean 310-1).  What then does having a proper name suggest about the individual that bears it?  To answer, I will focus on the conditions under which an individual has a proper name by looking at two broad accounts: one that focuses on the utility of proper names and another that uses proper names to convey significance.  Using examples and empirical studies, I will conclude that a version of the significance account is correct.  In particular, I will argue that an individual has a proper name iff (i) a social community has bestowed that individual with a name and accepted that name as referring to that individual, (ii) there is a continued interest in the identity of the individual over time, (iii) the individual has intrinsic or sentimental value and has had that value affirmed by the community, and (iv) the individual has (or is treated as having) a personality. 

In his Essay Concerning Human Understanding, John Locke addresses two questions: first, why do we not give proper names to all individuals, and second, why do we give proper names to the individuals that we do name?  To answer the first, Locke proposes that giving proper names to all individuals is psychologically impossible because it is beyond the capacity of the human mind to remember so many uniquely referring terms (Locke, chp. 3, sec. 2).  As such, we must limit the number of individuals to which we assign proper names.  Second, Locke argues that a unique reference for all individuals would be “useless” since it wouldn’t serve the chief end of language, which is the communication of thoughts to others (Locke, chp. 3, sec. 3).  There would be too many proper names to keep track of, not all of which would be known to all members of a conversation.  One would have to resort to common names to continue the conversation, and so relying completely on proper names would be useless and inefficient.

Already in Locke, we can see two purposes in using proper names.  In accordance with the first response, we use proper names only for individuals that are special in some way, whose cognitive priority rises above that of other, unnamed individuals and who therefore deserve some of our limited cognitive capacity.  In accordance with the second response, proper names provide an efficient means of communication in circumstances in which all members of a conversation know of the individual to which the proper name refers.  Instead of having to repeat a description using common names, one can simply use a proper name to pick out the unique individual relevant to the conversation, thus making the conversation more efficient and more precise.

Though related, these two responses are different.  The second response asserts that proper names are used to “single out an individual from all other individuals” while the first response asserts that proper names are used “to mark off an individual’s individuality” (Jeshion 372).  In other words, the second response claims that proper names simply help us to pick out and refer to a particular individual.  By contrast, the first response states that proper names stress the importance or significance of the individual so named.  The second proposed understanding of proper names is what Jeshion calls the Semantic Utility Account of Proper Names whereas the first proposed view is called the Significance of Names Account.

Which of these two accounts is correct?  The Semantic Utility account of proper names claims that we give proper names to certain individuals in situations where: (1) there is a wide circle of communicators that wishes to refer to a particular individual, (2) these communicators have “an interest in the continuing identity of the particular across time”, and (3) no short description exists that the group of communicators can use to pick out the individual uniquely (Jeshion 372).  In sum, a proper name is given for the fundamental reason that it “will be useful for the purposes of efficient, economical communication” (Jeshion 376).

This view does have an initial plausibility.  People, for example, meet each of the three criteria.  An individual person must move through a wide circle of communicators that all have an interest in that person’s identity over time.  Furthermore, given the complexity of what persons are, there is usually no short unique description that can pick out one person from another.  Most artifacts, on the other hand, fail at least one of the three criteria, if not all three.  For example, a particular basketball only needs to be referred to by a small circle of communicators (e.g., a family or group of friends), there is little interest in the continuing identity of the ball (since one basketball is generally as good as any other), and there is a short description available to pick out the ball (e.g., Fred’s basketball).

However, several objections show that the Semantic Utility account of proper names does not provide necessary and sufficient conditions for when a proper name is given.  More specifically, neither condition (1) nor condition (3) is necessary.  We can easily imagine cases in which there is a very small circle of communicators that wishes to refer to a particular individual and does so using a proper name even though a short description is available for use (Jeshion 377).  For example, our family pets are given names even though a short description would suffice (e.g., “the dog”, “the brown dog”, “Tom’s dog”).  Similarly, there may be a very wide circle of communicators that wishes to refer to a particular individual but a proper name is not used even though a short description does not exist (Jeshion 378).  For example, collector’s items are discussed amongst a wide variety of collectors but they are not given proper names even though their descriptions can be quite long and complex.

Condition (2), however, is met in these examples and so it does appear to be necessary.  This is plausible since we would not assign a proper name to an individual whose identity held no long-term interest for us.  However, this condition is not sufficient on its own.  There are many individuals whose identity we have a long-term interest in maintaining over time but that we do not properly name.  For instance, most people do not name their cars or their houses even though their continued identity is of great interest to their owners. 

The difficulty with (2), and the Semantic Utility account as a whole, is that it does not specify what kind of interest we have in the individual’s continued identity.  This is precisely what the Significance of Names account supplies.  According to Jeshion, our interest in the properly named individual is concerned with that individual’s individuality.  We give proper names to individuals that “we construe as possessing individuality [and which] we regard as having intrinsic or relational value, beyond their value as an instance of a certain kind” (Jeshion 373).  Bean similarly states that individuals are given names when they are “considered so unique or significant in their own right as to be distinguished and individuated” (Bean 309).



These and similar statements can be fleshed out into a fuller account.  First, notice that whether or not an individual has a proper name depends on the social and linguistic community.  According to Kripke, an individual bears a proper name in virtue of being directly assigned that name through an initial “baptism” that rigidly fixes the reference (Bean 307).  However, the “baptism” must be initiated and accepted by the community.  Bean more fully states that “firstly a proper name works, i.e. denotes an entity, because it has been rightfully bestowed and consequently ‘belongs to’ its bearer.  Secondly, for the name to be correctly used, members of the community must concur on the identity… of what it is to which the name belongs” (Bean 307).  Thus, it is the decision of members of the community to give a proper name to an individual in accordance with their attitudes towards the individual.

This theory is borne out in practice.  Consider the naming ritual of the Ibo of Nigeria.  A newborn child is not given a name for 28 days, but is instead referred to as the “new child” (Wieschoff 212).  After 28 days the name is determined by social conditions: the social standing of the parents, the socially perceived character of the child, and the community’s acceptance of any name given.  Even the meanings of names reflect social conditions and events surrounding the birth of the child (e.g., the place of birth, the financial status of the parents, future hopes for the child) (Wieschoff 214).  Bean notices that a similar pattern is followed in other cultures: the “bestowal of a child’s name is often the duty of the parents, but is as likely to be the duty of a senior kinsman or of a ritual specialist and the participation of members of the larger community is usually required” (Bean 309).

Consider our own experiences.  In our culture, a baby is given a name by his or her parents and this is accepted by the community.  Even when an adult changes her name, it is not until her name has gone through a legal process and been accepted by the legal system (which acts on behalf of the social community) that her new name is properly her name.  One can also imagine a person attempting to give herself a nickname that nobody ever uses.  We would not say that this is her name.  Her social community must first accept the nickname as referring to her before it becomes hers.  Thus, a first condition on an individual’s having a name is that (i) a social community has bestowed that individual with a name and accepted that name as referring to that individual.

For reasons already given, we should include condition (2).  So if an individual has a proper name, (ii) there is a continued interest in the identity of the individual over time.  However, now we can specify what kind of interest there is in the individual.  Jeshion gives what she calls the Significance Guides Naming Principle: “an agent can name an individual only if she accords intrinsic or relational significance to that individual” (Jeshion 374).  In other words, one names an individual in order to recognize that individual’s intrinsic or relational worth.  Intrinsic worth is worth that an individual has of its own and for its own sake.  It cannot be bestowed but only affirmed.  Thus, in naming a being that has intrinsic worth, one recognizes and affirms that intrinsic worth rather than conferring it.

Relational worth can be either instrumental worth or sentimental worth.  An individual is instrumentally valuable if it is useful in some way.  However, we do not name individuals that are merely instrumentally valuable.  An individual that is merely instrumentally valuable will have the same value as any other member of that kind and will therefore not be singled out as an individual with special significance.  Our cars, tools, and kitchen appliances are instrumentally useful, but we do not tend to name them because we tend to regard them as replaceable.  On the other hand, individuals that have sentimental worth are not replaceable.  They may have the same instrumental value as another member of their kind, but they have more worth to us because we have attributed worth to them through our sentiments and thereby singled them out as individuals.  For example, a teddy bear may be just as good as any other teddy bear in terms of functionality, but a child’s teddy bear is his or her teddy bear and as such is bestowed with worth.

So it seems that we only name individuals when they have intrinsic value (not attributed by us) or when they have sentimental value (attributed by us).  It is important to note, however, that naming the individual does not add worth to the individual (for naming presupposes the worth). There is no internal change with respect to the individual so named.  Instead, the naming constitutes an extrinsic change in relation to the community.  As Jeshion states, naming “underscores or enhances the name’s referent’s significance for those that think of that individual through the name” (Jeshion 374).  Similarly, Bean notices that in most cultures, “the bestowal ritual usually coincides with or constitutes the child’s acceptance as a member of his group, his recognition as a social person” (Bean 310).  The giving of a name affirms that person’s value and place within the community.  Consequently, our third condition is that an individual has a proper name only if (iii) the individual has intrinsic or sentimental value and has had that value affirmed by the community.

However, our conditions are not yet sufficient.  We do not name all individuals that we believe and affirm to have intrinsic or sentimental value.  Consider some of Jeshion’s own examples, which can be used against her position since she seems content with conditions like (i)-(iii).  For instance, while it is true that we do not name most of our clothes even though we have a long term interest in their identity, Jeshion explains this by saying that we do not regard our clothes as significant individuals.  However, there are items of clothing that are taken to be significant as individuals but are still not named (e.g., a wedding dress, a deceased relative’s jacket).  Similarly, while some plants remain unnamed because we only regard them as instrumentally valuable (e.g., Jeshion’s tomato plants), other plants do have sentimental value and yet we still do not name them (e.g., an old oak tree on one’s property).

Animacy might be proposed as a fourth condition, but this fails.  In one study, children took the words associated with pet animals (e.g., dogs, cats) to be proper names, while the words associated with non-pet animals (e.g., bees, snails, caterpillars) were not taken to be proper names until the animals were introduced as belonging to the experimenter (Jeshion 384).  Jeshion rejects animacy on this basis.  But this study does not show that animacy is not a necessary condition, for the children could have believed that animacy was simply not sufficient, since the bees, snails, and caterpillars did not have any intrinsic or sentimental value.  Nevertheless, animacy is not a necessary condition on naming individuals.  For example, a pet rock is given sentimental value by its owner, who takes an interest in its identity over time.  The rock is given a name even though it has no animacy.

What then is lacking in our conditions?  I propose that those individuals we name have (or are treated as having) a personality.  By personality, I mean that the individual may be taken to have (or may be artificially described as having) a certain character, intentional states, or unique personal qualities.  This is similar to but importantly different from animacy, since animacy tends to imply that a being has personality, but the converse is not the case.  Consider some evidence for this claim.  In child development research, “children exhibit a strong tendency to interpret a novel word as a proper name if it is applied to a person or person surrogate (a doll), but withhold the proper name interpretation for artifacts – blocks, shoes, toy cars and planes” (Jeshion 384).  The person or person surrogate is regarded as having intentional states while the artifacts are not.  However, when the artifacts are treated as having personalities, there is a different result.  In another study, “children readily allowed the application of a proper name to foam geometrical objects so long as they were merely described in intentional terms – as having certain mental states” (Jeshion 384).  Similarly, pet rocks are given faces and are described as though they had mental states, thus qualifying them as significant individuals deserving of names.

In a different study, Legerstee concludes that “[infants] recognize people as social stimuli (they vocalize, smile, alternate their gazes and imitate their actions) and objects as inanimate stimuli” (Legerstee 63). This can be combined with Katz’s observation that “within certain classes of objects (e.g., people), the children first discriminate individuals and then learn their names, whereas among other classes of objects (e.g., spoons) they do not discriminate individuals, and learn names only for the class” (Katz 469).  Together, these studies suggest that names are properly ascribed to social entities, and social entities have personalities.  As such, I conclude that an individual has a proper name only if (iv) the individual has (or is treated as having) a personality.  Joined with the other three conditions, it seems that an individual has a proper name iff (i) a social community has bestowed that individual with a name and accepted that name as referring to that individual, (ii) there is a continued interest in the identity of the individual over time, (iii) the individual has intrinsic or sentimental value and has had that value affirmed by the community, and (iv) the individual has (or is treated as having) a personality.

Now consider some possible counterexamples.  Yachts and racecars are often named, but we know that they do not have mental states even though their owners do have a long-term interest in their continued identity and their owners have sentimental feelings for them.  Jeshion suggests in a footnote that we “anthropomorphize” both yachts and racecars due to a societal convention (Jeshion 379).  Although Jeshion does not say why, I believe that yachts and racecars are anthropomorphized because we attribute personality to them.  We speak as though they had temperaments and wills; we talk to them as though they can hear us.  Such anthropomorphizing explains why “adults may assign proper names to artifacts of some kind (e.g., boats, cars), [but] young children do not expect such objects to receive proper names” (Hall “Semantic” 1316).  Adults have been exposed to societal conventions and have developed sentimental attachments for their yachts and cars whereas children have not.  However, as one grows and is exposed to such conventions, one takes more and more entities to be “social.”

What about places?  Cities, states, islands, parks, and even certain houses (e.g., large European estates) have proper names.  We have a long-term interest in them and we do regard them as having sentimental value as unique individuals.  But do they have personality?  Again, we often talk about them as though they had a personality or character by using terms that are intentional.  Each place has a certain feel to it, a life of its own, or an atmosphere that distinguishes it from other places.  Similarly, natural objects like mountains and trees are often described as having personalities and intentional thoughts.  For example, a significant tree is said to have seen and heard much in its long life.  A mountain is treated as having a will that is to be overcome by its climbers.

Perhaps these cases are a bit of a stretch for the fourth condition.  However, this tension can be relieved by recognizing that while each condition is necessary, each need not be doing equal work.  For example, there is a very great long-term interest in maintaining the identity of a mountain that is shared by many people.  This can counteract the deficiency in sentimental value or lack of personality that the mountain has.  Similarly, a named tree may have great sentimental value but lack much distinguishing character.  Nevertheless, it seems to me that these four conditions (or something very much like them) are necessary and sufficient for an individual’s having a name.

Finally, consider a few implications.  If having a name signifies social bestowal and acceptance, a long-term interest in one’s identity, intrinsic or sentimental value, and a personality, then denying that an individual has a name (or refusing to give or to use a name) is (or can be taken as) a denial that the individual meets one of these conditions.  Jeshion points out that when we fail to name a pet animal, this is “normally interpreted as regarding the animal as insignificant, or replaceable, or otherwise somehow valued only as an instance of its kind, not, in the first instance, as an individual” (Jeshion 379).  She also gives the example of dog breeders that discourage their children from naming the puppies that will be sold in order to prevent sentimental attachments.  The same goes for chickens and cows that will be eaten: it is better not to name them, thereby denying that they have any intrinsic value and preventing sentimental value from forming.

Mary Phillips notices that laboratory animals are rarely given proper names.  The dog or cat in the lab is regarded as “ontologically different” from the pets at home (Phillips 119).  There is a long-term interest in the continuing identity of the lab animal because the experiment involving it is done over a long period of time.  However, the interest in the animal is related only to its instrumental worth in performing the experiment.  Compare this to animals in the pound or in pet shelters.  Even without owners, they are immediately treated with affection and are thus given temporary names to emphasize their individuality and significance.

Consider also the example of stillborn babies.  In some cases, the baby is named “as a way of dignifying and underscoring importance” (Jeshion 379).  However, other couples refrain from naming their stillborn baby.  This is done, “not because they’d not think or refer to the baby.  It has rather to do with somehow not enhancing its individuality to them or shielding themselves from the psychological effects of thinking of the baby by name” (Jeshion 379).  The couple is attempting to forestall any sentimental value.  By refusing to name, one prevents oneself from becoming attached to the individual because one denies its significance as an individual.  This explains why using a demonstrative or description to refer to an individual that has a name is taken to be insulting.  By not using the name that that individual was given, one is refusing to acknowledge their significance as an individual and (implicitly) denying that they have intrinsic or sentimental worth, that they are worthy of long-term interest, or that they have a personality.

In conclusion, this paper has shown that a proper name is much more than a device used to facilitate communication.  Instead, a proper name is used to pick out a unique individual because of that individual’s worth and personality and because of the long-term interest a social community takes in its identity.  One might say, as the Saami people do, that by acquiring a proper name a particular individual moves from merely “being labeled” to “being” (Anderson 186).  While common names merely label individuals, proper names recognize an individual’s individuality.  Thus, we should conclude that a proper name, while often semantically useful, primarily marks and underscores an individual’s significance, and that is why it is used.



Works Cited and Consulted


Anderson, Myrdene. “Proper Names, Naming, and Labeling in Saami.” Anthropological
Linguistics 26.2. (1984): 186-201.  JSTOR. Web.  29 March 2011.

Bean, Susan.  “Ethnology and the Study of Proper Names.” Anthropological Linguistics 22.7
(1980): 305-316.  JSTOR. Web.  29 March 2011.

Gelman, Susan, and Marjorie Taylor.  “How Two-Year-Old Children Interpret Proper and
Common Names for Unfamiliar Objects.”  Child Development 55.4 (1984): 1535-1540. JSTOR. Web.  29 March 2011.

Hall, Geoffrey.  “Acquiring Proper Nouns for Familiar and Unfamiliar Animate Objects: Two-
Year-Olds' Word-Learning Biases.” Child Development 62.5 (1991): 1142-1154. JSTOR.
Web.  29 March 2011.

Hall, Geoffrey. “Semantic Constraints on Word Learning: Proper Names and Adjectives.”  Child
Development 65.5 (1994): 1299-1317.  JSTOR. Web.  29 March 2011.

Jeshion, Robin.  “The Significance of Names.”   Mind & Language 24.4 (2009): 370-403.  Wiley
Online Library.  Web. 29 March 2011.

Katz, Nancy, Erica Baker, and John Macnamara.  “What’s in a Name?  A Study of How
Children Learn Common and Proper Names.” Child Development 45.2 (1974): 469-473.  JSTOR. Web. 29 March 2011.

Legerstee, Maria. “A Review of the Animate-Inanimate Distinction in Infancy: Implications for
Models of Social and Cognitive Knowing.” Early Development and Parenting 1.2 (1992): 59-67. Wiley Online Library.  Web. 29 March 2011.

Locke, John. Essay Concerning Human Understanding: Book III. Oregon State University, n.d.
Web. 8 May 2011.

Phillips, Mary.  “Proper Names and the Social Construction of Biography: The Negative Case of
Laboratory Animals.” Qualitative Sociology 17.2 (1994): 119-142.  Springer Link.  Web.  29 March 2011.

Strawson, P.F.  Subject and Predicate in Logic and Grammar.  Burlington, VT: Ashgate, 1974.
Print.

Wieschoff, H.A. “The Social Significance of Names among the Ibo of Nigeria.” American
Anthropologist 43.2 (1941): 212-222.  Wiley Online Library. Web. 29 March 2011.  

Thursday, February 25, 2016

Philosophy of Analytics, Lesson One: Begin with the End

Introduction

I have a Master's in Philosophy.  More specifically, I have a Master's in Analytic Philosophy, which is the approach to philosophy most popular in the United States and the United Kingdom (as opposed to Continental/Existential/Phenomenological Philosophy).  When I began this blog, I titled it "Philosophical Analytics" to emphasize my interest in both philosophy and analytics (i.e., data-related analysis and visualization) and my desire to continue to engage in both disciplines.

Recently, after reading the title of my blog, someone asked me what my philosophy of analytics was.  I had to think a while about this, as I had not really formulated my impressions of doing analytics into well-formed thoughts, principles, or assertions.  But this is a good question, a perfect question, for me.  This post is the first in a series that attempts to rectify that and to put forth my analytics philosophy thus far.

A Philosophy of Analytics

This blog is not about how to build a predictive model in R, a chart in Excel, or a query in SQL.  Its aim is more philosophical and methodological.  We could even call it "meta-analytics".  It will address the why or the why not behind the specific how.  Why should we create this dashboard?  Why is this predictive model being used?  We are reflecting on the purpose of using various analytical tools and products.

But we can take a step back even further.  Why do analytics at all?  What is the purpose of analytics? How do we do analytics well?  The answers may be obvious to some, but others may have never asked these questions before.  We know that some people like shiny, flashy, and brightly-colored charts, graphs, slides, and dashboards.  They are impressed by regression lines and cool statistics.  But apart from the job security and budgetary victories that may be scored by throwing analytics left and right, what is the real point of analytics?

This blog post (and others like it) will focus on the nature of analytics: who uses or does it, what it is, when it is appropriate to use, where it should be used, why it is used, and how best to use it.

Lesson One: Begin with the End

Aristotle specified four "causes" to account for or describe any object:
  • the material cause: what the object is materially made of.
  • the formal cause: how the object's material is arranged or shaped.
  • the efficient cause: what brings about, creates, or changes the object.
  • the final cause: the object's purpose, its end.
Our object is analytics.  It is materially made of Excel sheets, Tableau dashboards, SQL queries, big data stores, R regressions, and Azure predictive models.  Its form consists of colors, lines, plots, and tables placed on billboards, websites, and desktops.  It is efficiently caused by developers from backend to frontend and by users that consume and provide feedback.  But its purpose?  What is it all about?  In short:
The end, the purpose, the telos of analytics is this: data-driven decision making.
Why do we make charts or build models?  Why do we collect terabytes of data?  Surely not just because it's fun.  We do these things in order to gain insight into our domain of interest, to understand what is going on. Just because?  No!  We do this because we have to make decisions in our domain of interest, and we want to make good decisions. 

A business needs to understand what its customers want through sales trends.  It needs to know how to allocate resources, where budgets need to be cut or expanded.  It does so by looking at its sales and financial data appropriately summarized and charted.  A politician needs to know where differing demographics stand on multiple issues so that he or she can most favorably present himself or herself to each demographic.  This is done through collections of voting records that have been grouped according to these demographics and along the lines of key issues.

Perhaps you are simply interested in learning more about a subject (as I often am), and perhaps your explorations have no immediate practical application.  Ok, fine.  Then we can describe the purpose of analytics in a more sophisticated way:
The purpose of analytics is to (1) justify or change beliefs with the use of evidence in the form of data, (2) in order to generate true (or truer) beliefs that can then be used to make decisions related to one's goals, decisions that, (3) because they are more in accord with reality, are more effective in bringing about one's goals.
Let's break this down.  First, we are gathering data and presenting it to ourselves or to others to change or strengthen our beliefs about a given domain.  We are trying to understand what really is going on in the world, changing our beliefs about it if necessary.  If the data and subsequent analysis are good, then they will accurately represent reality in a useful way.  For example, we collect weather data and build models that predict what the weather will be like tomorrow so that we can have a true belief about the weather tomorrow.

Now why might that be important?  This leads to the second part.  We take our justified beliefs, supported by the evidence, and use them to make decisions with regard to our goals.  If my goal is to have a good time hiking outside, then what I decide to wear will have a direct impact on whether I do have a good time: if, contrary to my belief that it will be sunny, it in fact rains while I am hiking in a t-shirt and shorts, my hike will be miserable.  If the evidence had instead changed my belief, showing that it would in fact rain, then I could have made the decision to wear a jacket and had a more enjoyable time.

And this leads to the third part, that data-supported decisions are more effective in bringing about our goals.  We use data to build a model of reality in a specific domain, and then, using that model to represent reality, we make a decision that is lived out in reality.  In the case above, I would look at the data related to weather patterns and historical trends along with the predicted outcomes for what the weather will be like for my hike.  Because the data and models suggest that it will rain, I make the decision to wear a rain jacket.  Consequently, when it does rain, I am not soaked and I still have a good time hiking (which I would not have had if I had gotten soaked).

The goal of my hike was to have a good time.  And by using the analytics related to weather to inform my choice of clothing, I was able to have a good time.  The data and models enabled my hike to be successful, that is, effective, in bringing about my goal.  I was empowered to make a decision more in accord with reality precisely because I had evidence to support my belief that it was going to rain.
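To make the structure of that decision explicit, here is a minimal sketch in Python.  The rain probability and the utility numbers are invented for illustration; in practice, a real model would supply the probability and your own preferences would supply the utilities.

    # Hypothetical model output: the predicted probability of rain tomorrow.
    p_rain = 0.7

    # Invented utilities for how much I enjoy the hike in each scenario.
    utility = {
        ("jacket", "rain"): 7,    # dry, if a bit warm
        ("jacket", "sun"): 8,     # slightly overdressed
        ("t-shirt", "rain"): 1,   # soaked and miserable
        ("t-shirt", "sun"): 10,   # ideal
    }

    def expected_utility(clothing):
        """Weight each outcome's utility by the model's probability."""
        return (p_rain * utility[(clothing, "rain")]
                + (1 - p_rain) * utility[(clothing, "sun")])

    best = max(("jacket", "t-shirt"), key=expected_utility)
    print(best, round(expected_utility(best), 2))  # -> jacket 7.3

The calculation itself is trivial; the point is that the model's output (p_rain) is what connects the data to the decision.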

Putting It Into Practice: What is Your End?

Ok, so the purpose of analytics is data-driven decision making, that is, to justify/change beliefs that lead to decisions that accomplish one's goals.  So what?  Well, thinking about this purpose of analytics can radically change how you approach your analytics projects.  Most importantly, if the point of analytics is to help one to accomplish one's goals, then before any analytics project can be undertaken, this question must be answered: what are the goals?

Have any of you been told to look at the data, build a dashboard, create a model, and then report back with what you find?  That's like being put into a closet filled with junk and told to find something useful.  What is useful?  It all depends on what one's goals or aims are.  Any number of items may be useful, but they will be useful with respect to certain goals (and not others).

In a business, we can always tie it back to money: increasing profit/revenue/sales or decreasing expenses.  But let's be more specific.  In fact, be as specific as possible.  How, specifically, does your organization make money?  What role does it play in the larger context of the company?  What are the specific goals of your organization?  What decisions need to be made that could use better data to drive them?

Perhaps your organization saves money by improving code efficiency.  Perhaps your team increases sales by reducing transaction times.  Maybe you improve revenue by increasing the conversion rate of sales through the company website.  Whatever it is, think about the specific goals of your team/organization/company, and then plan your analytics solution to capture, measure, and display the progress in meeting those goals.

Suppose that your organization's goal is to improve click-through rates on the website.  Once you know that, you know that you need to gather data related to clicks and visits by user, location, and date.  And you know that you need to calculate the click-through rate and display it over time (perhaps as a line graph) to determine if the organization is improving in meeting its goal.  You gather the data with sufficient granularity so that locations that are lagging in click-through rates can receive targeted investment.  Other important decisions can be informed by the data that you have gathered.
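Here is a minimal sketch of what that solution might look like in Python, using pandas and matplotlib.  The file name and the column names (date, location, clicks, visits) are hypothetical stand-ins for whatever your organization actually collects.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical raw event data: one row per location per day.
    df = pd.read_csv("web_events.csv", parse_dates=["date"])

    # Aggregate clicks and visits by month and location, then compute
    # the click-through rate for each group.
    monthly = (
        df.assign(month=df["date"].dt.to_period("M"))
          .groupby(["month", "location"])[["clicks", "visits"]]
          .sum()
    )
    monthly["ctr"] = monthly["clicks"] / monthly["visits"]

    # One line per location over time: lagging locations stand out,
    # which is exactly the decision this data is meant to inform.
    ax = monthly["ctr"].unstack("location").plot(marker="o")
    ax.set_ylabel("click-through rate")
    ax.set_title("Monthly click-through rate by location")
    plt.show()

The details will vary, but notice that every step (what to collect, the grain of the aggregation, the choice of chart) falls out of the stated goal.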

Without having this specific goal in mind, you would have no idea what data to gather, what measures to calculate, or what visualizations to create.  And your project wouldn't go anywhere or provide any business value to anyone.

Conclusion

You wouldn't begin a journey without a destination, right?  So don't begin an analytics project without a destination, a specific goal, in mind.  Otherwise, you will wander about aimlessly for a time in the sea of data before your project is cancelled.  Instead, figure out what the end, the purpose, of your analytics will be, and then figure out how to concretely get there.

Before you start, begin with the end.


Thursday, February 18, 2016

Newcomb’s Paradox: The ‘No-Box’ Solution (1)

Below is my final paper for a graduate philosophy course on paradoxes.  Rather than let it sit on my hard drive for no one to read, I have placed it here (for a few people to read, perhaps, maybe). It was originally submitted May 7, 2012.  I have left the paper as I submitted it (adjusted as needed for a blog format).
--------------------------------------------------------------------------------------------------------------------------

Introduction


While there are many versions of Newcomb’s Paradox, a typical bare formulation is along the following lines: there are two boxes in front of you.  One, a transparent box, has $10,000 in it that you can see, and the other, an opaque box, has either $0 or $1,000,000 in it.  You can choose to take both boxes or you can choose to take only the opaque box.  However, a very successful Predictor has made a prediction as to whether you will take the opaque box only (one-box) or both boxes (two-box).  In accordance with its prediction, it has placed $0 in the opaque box if it has predicted that you will two-box, or it has placed $1,000,000 in the opaque box if it has predicted that you will one-box.  You know all of this information.  Do you one-box or two-box, that is, do you take the opaque box only, or do you take both boxes?  [1]

Initially, it may seem obvious that you should one-box.  If the Predictor is highly successful, it will accurately predict what you do.  If you one-box, the Predictor will have placed $1,000,000 in the opaque box and you will receive $1,000,000.  If you two-box, the Predictor will have predicted this and you will only get $10,000 because the opaque box will be empty.  Knowing that $1,000,000 is more than $10,000, it seems the rational thing to do is to one-box.  However, further reflection leads us to doubt this conclusion.  After all, the Predictor has already made its decision and placed (or not placed) money in the opaque box accordingly.  Whatever choice you make, it appears that you cannot now influence whatever the Predictor did.  If the Predictor placed $0 in the opaque box, you should take both boxes and so gain the visible $10,000.  If the Predictor placed $1,000,000 in the opaque box, you should still take both boxes, and so gain $1,000,000 in addition to the visible $10,000.  So either way, you should take both boxes, and therefore, the rational thing to do is to two-box.

Thus, we appear to have a conflict of rational principles.  On the one hand, we can calculate the expected utility of one-boxing versus two-boxing, and one-boxing yields more expected utility. [2]   Since it is rational to “act so as to maximize the benefit you can expect from your action,” it is rational to pick only the opaque box (Sainsbury 70).  Following this principle of Maximum Expected Utility (MEU), we should one-box.  On the other hand, two-boxing dominates one-boxing.  No matter what the Predictor has done, you will always have $10,000 more if you choose both boxes instead of just the opaque box. [3]   So you should simply two-box, in accordance with the dominance principle: “whenever there is one option that is better than all others regardless of how the relevant variables turn out [you should choose that option]. … [T]hat dominant option is the rational option” (Burgess, “Conditional” 320).  It is thus rational to pick both boxes.  Consequently, it appears to be rational to do opposing things.
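To put illustrative numbers on the conflict (assuming, purely for the example, that the Predictor is 99 percent reliable for both kinds of chooser), the expected utility calculation runs:

\[ EU(\text{one-box}) = 0.99 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$990{,}000 \]
\[ EU(\text{two-box}) = 0.99 \times \$10{,}000 + 0.01 \times \$1{,}010{,}000 = \$20{,}000 \]

So MEU heavily favors one-boxing.  Yet for either fixed content of the opaque box, two-boxing pays exactly $10,000 more than one-boxing, which is just the dominance reasoning above.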

In response to this conflict, some philosophers have rejected the problem as a legitimate paradox.  These philosophers advocate a third alternative: no-box.  The no-box solution claims that the problem is ill-formed, being either underdetermined or overdetermined, leading to either ambiguity or internal contradiction.  Consequently, “the problem is… merely a pseudo-problem[,]… needing not to be solved but to be dissolved” (Slezak 279).  As such, no solution is possible and so neither one-boxing nor two-boxing can be supported.  In this paper, I outline these no-boxing arguments and consider possible responses. I conclude that Newcomb’s Paradox is not underdetermined or overdetermined in its most plausible specification, and that, consequently, the ‘no-box’ solution fails.  That is, the paradox is a legitimate, though still puzzling, problem.

Methodology


No-boxers claim that the paradox is either underspecified or overspecified, leading to either indeterminism or incoherency.  In order to assess these claims, we must settle on a methodology for approaching the problem, particularly with respect to the underspecification claim.  There are two things to consider here.  First, the paradox’s underspecification is a fault only if it is supposed to be fully specified.  However, perhaps the problem is asking us to consider what it is rational to do in the face of uncertainty.  As Burgess observes, “we all commonly make both decisions and predictions without anything like a complete specification or description of the circumstances involved” (Burgess, “Conditional” 324).  So perhaps we are supposed to make a decision without further specification.  For example, Kavka suggests the possibility that
a generalization could be directly confirmed by a large number of observations, yet not fit into any known theory.  Such a generalization might assert the existence of relationships that are impossible or enormously improbable according to present theory. Or it might entail specific predictions contradicting those of established theory… The strong confirming evidence indicates we should accept the generalization as a genuine causal one on which we can rely for predictions. (Kavka 272-3)
Supposing that we were actually faced with a Newcomb choice in real life, it might be rational to abandon present scientific theory and assume that there is a causal relationship between the chooser’s choice and the Predictor’s prediction.  We would not know what sort of causal relationship this is or how this is possible, but maybe this is precisely the situation that Newcomb’s paradox is asking us to consider.  Consequently, we might agree with Bar-Hillel and Margalit that “though we do not assume a causal relationship, there is no better alternative strategy than to behave as if the relationship was, in fact, causal” (Bar-Hillel and Margalit 302-3).  On this basis, Bar-Hillel and Margalit opt for one-boxing. 

Bach offers a similar approach that disregards the need for specifying the mechanism by which the Predictor makes its prediction.  He notices that any argument for one-boxing, whatever its logical merits, will likely lead you to $1,000,000.  Similarly, any argument for two-boxing, whatever its logical merits, will likely lead you to $10,000.  These are based on uncontroversial facts: those who one-box almost always get $1,000,000 while those who two-box almost always get $10,000.  And “if there is any argument for [two-boxing] that has reliably led to a payoff of $1M + $[10]K, you have no idea what it is” (Bach 414).  Consequently, one should one-box.  So there is no need to further specify the problem, for we can answer the problem as it stands.  In fact, specifying the problem further may change the problem to be solved.  So we must first ask: are we permitted to specify the problem further, where necessary?

I do think that we are permitted to further specify the problem.  Bach may be right that we do not need to further specify the problem in order to make a rational decision.  However, the problem does not explicitly forbid us from further specifying the problem or pondering the nature of this situation. [4]  In fact, the problem, as a philosophical paradox, seems to invite us to consider how this situation is possible.  We are supposed to engage in deep reflection upon it, and such reflection is unavoidable.  We cannot help ruling out certain possibilities (e.g., magic) if we take this problem seriously, so implicitly we are already further specifying the problem.  Thus, we must be permitted to do so.

Also, I think we must further specify the problem.  Perhaps the only reasonable specifications will lead us to one-box, in which case, no harm has been done by further specifying the scenario; the specification will have been superfluous.  However, legitimate specifications may lead us to differing conclusions, in which case, how we specify the problem will be extremely important.  Furthermore, no-boxers take the problem to contain or entail an internal inconsistency or an unreasonable assumption that is hidden by a refusal to further specify the problem.  In order to show that the problem does not contain an internal inconsistency and is not merely a science-fictional fantasy, one must attempt to specify the problem in a consistent way. [5]   Doing so will either prove that the problem is a pseudo problem, or it will vindicate those who defend the paradox’s legitimacy.

Having established that we are permitted to and must further specify the problem where necessary, we must consider the second issue of how to specify the problem.  That is, we must ask: when and how are we permitted to further specify the problem?  What constitutes a legitimate or charitable specification?  Some specifications are clearly not legitimate or charitable.  For example, one could say that it is irrational to two-box because the paradox does not say that the transparent box is not hooked up to a nuclear device that will destroy a major U.S. city.  Without ruling out that possibility, we cannot say that it is rational to two-box.  Similarly, if one were to only take the opaque box, perhaps a 300 pound anvil would fall on your head and kill you, so it is irrational to one-box as well because the problem does not say that this is not a possibility.  Thus, one can easily hypothesize wacky scenarios that yield one result or the other and perhaps neither.

In order to rule out these and other illegitimate specifications, we must provide criteria for legitimate or charitable specifications.  I believe that the relevant alternatives approach to skepticism in epistemology is a fruitful starting place.  We may take a specification to be legitimate and charitable only if it presents a possibility that is relevant.  A relevant possibility is one that we have a reason for believing to be true.  This means that the possible specification should only be considered if it is explicitly suggested or necessarily (or perhaps reasonably) implied by the context or bare formulation of the problem.  Another way of saying this is that the specification must represent a possible world that could charitably be taken to be the actual world (or a world very similar to the actual world).  Thus, only charitable and reasonable specifications should be considered as potentially posing a problem for (or solution to) the paradox.  If only one reasonable specification survives our discussion, then the solution it proposes should be the solution to the problem.  If none or many survive (and these lead to different conclusions), then the paradox has no general solution.

The Bare Formulation


Before turning to the arguments for and against no-boxing, we must understand the essential bare conditions that constitute the paradox in its most general form.  These are the following:

1.  There are two boxes: the transparent box will have $10,000 in it that will be visible to you; the opaque box will have $0 or $1,000,000 in it that will not be visible to you.
1*.  There are two boxes: the transparent box has $10,000 in it that is visible to you; the opaque box has $0 or $1,000,000 in it that is not visible to you. 
2.  You can choose both boxes or only the opaque box.
3.  If the Predictor predicts that you will take the opaque box, it will place $1,000,000 in the opaque box; if it predicts that you will take both boxes, it will place $0 in the opaque box.
3*.  If the Predictor predicted that you will take the opaque box, it placed $1,000,000 in the opaque box; if it predicted that you will take both boxes, it placed $0 in the opaque box.
4.  The Predictor is highly successful in making predictions.

Notice that the bare formulation already contains an ambiguity, implicit in the literature, highlighted by the use of 1 and 1*, and 3 and 3*.  The ambiguity pertains to when the Predictor makes its prediction: does it make its prediction before or after you are presented with the choice?  1 and 3 correspond to the situation in which you are confronted with the choice before the Predictor has made its prediction.  1* and 3* correspond to the situation in which you are confronted with the choice after the Predictor has made its prediction.

Why does this matter?  Most versions of the problem in the literature explicitly assume that the Predictor has already made its prediction when you are presented with the choice.  So the contents of the boxes are already fixed.  However, many one-boxing arguments implicitly assume to the contrary that the prediction will take place after you have been presented with the choice.  This leads to disagreement because whether you should one-box or two-box greatly depends upon when you are confronted with the choice with respect to the Predictor’s prediction.

Burgess, to his credit, is one of the first to notice the two-stage nature of the problem.  He writes that “the first stage covers the period before the Predictor has gained the information required for his prediction, the second stage covers the period beyond” (Burgess, “Unqualified” 262).  In accordance with the two-stage nature of the problem, Burgess argues that you should two-box in the second stage but one-box in the first stage.  In the second stage, the prediction has already been made and so you cannot now influence the contents of the boxes, so you should take both boxes.  However, in the first stage, the prediction has not been made yet, and you can influence the Predictor, the prediction, and the contents of the boxes. [6]   For example, if you can make the Predictor believe that you will one-box, then the Predictor will put $1,000,000 in the opaque box (Burgess, “Unqualified” 280).  Since the Predictor is unlikely to be fooled, you should fully commit to one-boxing and therefore one-box when you get to the second stage.

Burgess believes we are all in the first stage.  He writes that “we regular earthlings can all consider ourselves to be in the first stage already. Sad though it is to say, it seems doubtful that many of us will be fortunate enough to … proceed to the second stage” (Burgess, “Unqualified” 280).  However, he does not argue for this conclusion, and in fact, most formulations of the problem suggest that you are actually in the second stage when you are confronted with the choice. [7]   If this is true, then you probably cannot influence the prediction or the contents of the box, as many no-boxers and two-boxers typically claim, and so it is probably rational to two-box. [8]   Indeed, Burgess himself claims that “if you are in the second stage you should two-box” (Burgess, “Unqualified” 280).  He writes more fully:
When in the second stage,… there is nothing that you can do to affect your [brainscan]. Deciding to one-box would not make your [brainscan] that of a one-boxer, and nor would it make the $[1,000,000] somehow materialize in the opaque box. In that stage, two-boxing is a dominant option and also therefore the rational option. The rationality of two-boxing is also reflected in the fact that when the conditional expected payoff values are calculated correctly, that for two-boxing is $[10,000] greater than that for one-boxing throughout the deliberation process. So again: the dominance principle and the conditional expected payoff values are in perfect harmony; both confirm that two-boxing is the rational option. (Burgess, “Conditional” 336)

 Thus, the problem as usually formulated uses 1* and 3* instead of 1 and 3, and so I will use 1* and 3* and assume that your choice is made in the second stage.

The bare essentials of the paradox involve 1*, 2, 3*, and 4.  Yet this information does not seem to be enough to make an informed and rational decision to one-box or two-box.  Importantly, we do not know (a) how accurate the Predictor is, (b) how often the Predictor has made similar predictions, or (c) how the Predictor makes its predictions.  We shall take each of these in turn.

 


The Predictor: (a) How Accurate is the Predictor?


We know that the Predictor is “highly successful” or “reliable,” but how successful or reliable the Predictor is may make a difference in what we choose to do.  For those who are fairly risk-averse, only a Predictor with an extremely high reliability will convince them to choose the opaque box and to leave the visible $10,000 on the table.  In fact, if someone is desperate enough and must have $10,000, only a 100 percent reliable Predictor (and perhaps not even then) could convince this person to one-box. [9]   Thus, a risk-averse person would likely two-box.  Conversely, for those who do not mind taking a risk, a lower reliability may be tolerable enough to one-box.  Thus, there appears to be no single rational answer as to what someone should do when confronted with this choice.  Unless we further specify the problem to fix the motivations and desires of the person choosing (i.e., you), and perhaps fix the reliability of the Predictor to a specific degree, we cannot determine which choice is strictly the rational choice.  We would be left with a conditional solution at best: you should one-box if you are in circumstances x, and you should two-box if you are in circumstances not-x.
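For concreteness, and under the same simplifying assumptions as the illustration above (expected utility alone, with the Predictor equally reliable for both kinds of chooser), we can compute the reliability p at which the two choices break even:

\[ p \times \$1{,}000{,}000 = (1 - p) \times \$1{,}000{,}000 + \$10{,}000 \quad\Longrightarrow\quad p = \frac{1{,}010{,}000}{2{,}000{,}000} = 0.505 \]

On expected utility alone, then, any reliability above 50.5 percent already favors one-boxing.  That many people would nevertheless two-box at much higher reliabilities is further evidence that risk attitudes, and not just expected utility, are doing the work here.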

Interestingly, Burgess notices this problem, and in effect, concedes it.  Taking the Predictor to be fallible, he writes that
By committing yourself to one-boxing you deny yourself the chance, however small, of walking away with $1,001,000. Given that committing yourself to one-boxing practically guarantees that you will gain $1[million], this fact is not likely to concern many people. Still, there may be some strange souls who would be concerned. Moreover, because of the possibility of such people, it should be acknowledged that this strategy for the first stage will not necessarily be rational for everyone. Consider the case of someone who, however perverse it may seem, is extremely keen to gain $[1,010,000] rather than 'merely' $1[million]. If sufficiently keen, he would presumably be willing to put the potential $1[million] at risk in order to gain the $[1,010,000]. For such a character, a different sort of strategy in the first stage would be rational… Again, such characters are presumably very rare. The strategy of committing yourself to one-boxing while in the first stage may still therefore be said to be rational for the vast majority of people. (Burgess, “Unqualified” 282)
By writing this, Burgess concedes that there is no single rational solution that will apply to everyone, for what is rational will depend on the chooser’s desires and goals.  Perhaps this is not such a problem for Burgess if such people are rare and “perverse.”  But such people may be fairly common.  For example, many people are in debt, and a guaranteed $10,000 would cover their debt completely.  In such a case, it seems rational to take the $10,000 instead of risking the possibility of getting $0.  Or perhaps someone has a debt of $1,010,000.  If anything less than this amount will lead to financial ruin, then it is rational to two-box and hope that the Predictor has made a mistake.  Thus, it may not be as commonly rational to one-box as Burgess thinks, and if not, his argument is not as strong as he needs it to be.

In order to address these issues, we need to revise condition 4 to specify the Predictor’s reliability.  For simplicity’s sake, we should choose between the following:
4*. The Predictor is highly, but not perfectly, reliable.
4**.  The Predictor is perfectly reliable.
I think that 4* is the more interesting scenario.  If the Predictor is perfectly reliable, then most people agree that you should one-box.  In this case, there are only two actual possibilities [10]: either you one-box and gain $1,000,000 or you two-box and gain $10,000, and so you should one-box.  Furthermore, a scenario in which the Predictor is perfectly accurate is highly implausible and takes away from the believability that such a situation is actually possible.  So 4* [11] seems to be a more charitable interpretation of the paradox.

However, 4* itself contains an ambiguity that needs to be addressed.  The Predictor can be highly reliable in different ways, and some of these are not legitimate specifications of the problem.  Suppose that 90 percent of the population will choose two-boxing. The Predictor can simply predict two-boxing every time and maintain 90 percent reliability.  But this would mean that the Predictor is 100 percent reliable when a person chooses two-boxing and 0 percent reliable when a person chooses one-boxing.  This is not very impressive, and it is also doubtful that the population of choosers would favor one choice over the other to the degree that would make this possible.  Instead, a better interpretation of the problem is that the Predictor is highly reliable with respect to both choices; that is, it makes a very reliable prediction when choosers choose to one-box and a very reliable prediction when choosers choose to two-box.  So 4* should be modified in the following way:
4***. The Predictor is highly, but not perfectly, reliable, with respect to both one-boxing and two-boxing choices.
As such, the Predictor’s prediction is sensitive to facts about you individually, and not just facts about choosers in general.
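To make the base-rate worry concrete, here is a minimal sketch in Python (my illustration, using the hypothetical 90 percent figure from above) of how a Predictor that merely exploits the population’s bias can look reliable overall while being worthless with respect to one-boxers:

# Hypothetical numbers: 90 percent of choosers two-box, and the Predictor
# simply predicts two-boxing for everyone.
p_two_box = 0.9

acc_given_two_box = 1.0  # it always says "two-box," so it is right for every two-boxer
acc_given_one_box = 0.0  # ...and wrong for every one-boxer

# Overall accuracy collapses into the base rate of two-boxers:
overall = p_two_box * acc_given_two_box + (1 - p_two_box) * acc_given_one_box
print(overall)  # 0.9 -- "90 percent reliable" while knowing nothing about you

Condition 4*** rules this trick out by demanding high accuracy conditional on each choice, not merely high accuracy overall.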

We also need to add another condition to the basic formulation in order to address your aim in making your decision:
5.  Your choice to one-box or two-box is based on your desire to maximize the money you acquire. [12]
This will rule out taking the $10,000 simply because it is guaranteed.  Instead, you are making a choice with the purpose of achieving the maximum payout you think is possible. 

However, 5 is not an adequate specification of your desires and motivations.  We are assuming that such desires and motivations will determine the choice that you make.  But there are two problems with this.  First, you may not be determined to act by your desires and motivations.  Supposing that you have libertarian freedom, you are free to act contrary to your desires.  However, for the purposes of the problem, I do not think we need to take a stance on whether you are determined to act or whether there is indeterminacy in your action.  We have already assumed that the Predictor is not perfectly reliable.  The error in its prediction can be attributed either to a flaw in the Predictor (e.g., a lack of comprehensive information about you) or to indeterminacy in your choice.  Either way, it remains the case that the Predictor is highly reliable; it rarely makes a mistake.  Whether this rare mistake arises through the Predictor’s own fault or because you are only heavily influenced, but not determined, by your desires and motivations is something on which we need not take a stance.  Second, the time frame in which you make the decision is unspecified.  Do you make a decision based on your gut reaction, or are you permitted to deliberate about the answer?  Because gut reactions are not likely to be well considered, permitting you to deliberate about your choice will increase the likelihood that you reason rationally.  Thus, 5 can be revised as follows:
5*. Your choice to one-box or two-box is based on (i) your desire to maximize the money you acquire and (ii) a long process of reflective deliberation.

With 1*, 2, 3*, 4***, and 5*, we can turn to the next issue.


The Predictor: (b) How often has the Predictor made a prediction in the past? 


If the Predictor is reliable but has only made a few predictions, it may be a matter of luck that the Predictor has usually been correct.  As the number of previous predictions increases, the possibility that the Predictor is simply lucky decreases, such that when hundreds or thousands of previous predictions have been made with great reliability, one can be confident that the predictions are not based on luck.  It seems that the original intention of the problem is that the prediction is not lucky; a lucky correct answer is a guess, not a prediction.  Furthermore, if it were lucky, then it seems that one should obviously two-box, because the Predictor has a 50-50 chance of guessing correctly (however lucky it has been so far), and in this circumstance the two-boxing solution not only dominates but also yields more expected utility.  Thus, it seems we should specify the problem such that a sufficient number of trials have previously been run so as to convince you that the Predictor is not merely lucky.  So we add this condition:
6. You know that the Predictor has made a sufficient number of previous predictions so as to convince you that its predictions are not a matter of luck.
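How many trials count as “sufficient” can be estimated with a simple binomial calculation.  The following sketch (my illustration; the trial counts are hypothetical) computes the probability that a pure 50-50 guesser would match a given track record by luck alone:

from math import comb

def p_lucky_streak(n_trials, n_correct):
    # Probability that a fair-coin guesser gets at least n_correct of n_trials right.
    return sum(comb(n_trials, k) for k in range(n_correct, n_trials + 1)) / 2 ** n_trials

print(p_lucky_streak(10, 9))      # ~0.0107: 9 of 10 could still be luck
print(p_lucky_streak(1000, 900))  # ~1e-160: 900 of 1000 cannot reasonably be luck

With hundreds or thousands of trials at high accuracy, luck is ruled out for all practical purposes, which is just what condition 6 requires.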
But if the prediction is not a matter of luck, then how is it made?  We turn now to the final and most controversial and crucial aspect of the paradox.

The Predictor: (c) What is the method by which the Predictor makes a prediction?


The original formulation of the problem does not say how the Predictor makes its prediction, and yet many people take the “apparent link between one’s choice and the previously determined contents of the second box” to be the “central feature” and cause of the paradox (Slezak 281).  Many possibilities have been offered; I consider the three main ones: (i) backward causation, (ii) trickery, and (iii) informed prediction.

 Backward causation, while a logical (and perhaps physical) possibility, does not seem to be an appropriate specification of the predictive method.  If your decision backwardly caused the Predictor’s prediction, then the Predictor’s prediction is not really a prediction but a report of what actually occurred (though in the future). [13]   Furthermore, since we already specified the problem such that the Predictor is not infallible, backward causation cannot explain the Predictor’s success because your choice would always infallibly cause the correct prediction. [14]   Finally, backward causation is not a reasonable or relevant possibility in the actual world.  The natural laws of the actual world do not include exceptions for backward causation.  Thus, backward causation is not a reasonable specification of the predictive method.

Mackie suggests that trickery may be involved (Mackie 217).  The scenario is rigged, perhaps so that when you take both boxes, a trap door on the bottom of the opaque box is released and the $1,000,000 disappears before you can open the box.  But this and other similar scenarios involving trickery do not seem to be reasonable interpretations of the problem.  If there were trickery, then this would not really be a philosophical paradox at all, but a carnival game, and not worthy of philosophical discussion. [15]  We also have no reason to assume that there is any trickery going on, for it is not suggested by the bare formulation of the problem and it is contrary to the spirit of the problem.  Similar criticisms can be made against Mackie’s suggestion that a hypnotist hypnotizes you to make one choice or the other (Mackie 218).  This also is not a reasonable interpretation of the problem.  There is no real prediction here, nor is there a real choice, for you are being controlled to do one thing or another.  Also unreasonable is Mackie’s suggestion that the Predictor might be clairvoyant (Mackie 221).  It is doubtful that such a power exists in the actual world, and even if it did, it would be so unusual that you would not be justified in taking the Predictor to have it.  Thus, trickery and anything resembling trickery must be excluded as genuine interpretations of the problem because they are not reasonable, relevant, or charitable specifications of it.

This leads us to the third, and most plausible, explanation for the Predictor’s success: an informed prediction or common cause approach.  Mackie suggests that the Predictor could be a psychologist who can tell how you will reason and so can predict what you will choose. [16]   Along similar lines, Bach supposes that the Predictor “gathers detailed information about you and plugs it into a high-powered psychological theory” (Bach 410).  Sainsbury claims that “the Predictor bases his decision on general laws, together with particular past facts.  These might all be physical, or they might be psychological” (Sainsbury 75).  Finally, Burgess believes that the Predictor has information gained through the use of a brainscan: “the Predictor uses this information as a basis for his prediction, and then uses this prediction to decide whether to place the $1[million] in the opaque box.  Your [brainstate]… can therefore be regarded as a common cause of both your decision and the [Predictor’s] decision” (Burgess 326).  In all of these explanations, the Predictor gathers information about you (e.g., your desires, motivations, tendencies, beliefs) and makes a prediction based on this information.  Since your decision will also be (largely) based on your desires, motivations, tendencies, beliefs, etc., you will likely make the choice that the Predictor predicted you would make.

I agree with Burgess that the common cause or informed prediction explanation is the only “realistic alternative” that is true to the intentions of the paradox (Burgess, “Conditional” 329).  First, there does not seem to be a realistic alternative to explain how the Predictor makes its predictions, as we have already seen.  Second, such a method does appear to be realistic in that it is possible in the actual world as we know it.  While psychological theories are not yet accurate enough to make extremely reliable and informed predictions about individuals, it is not implausible that in the future, computers with vast amounts of personal data about you will be able to make such predictions.  Even now, psychologists and economists can make modest predictions about individual behavior. [17]   Thus, we should take the Predictor to make its predictions on this basis.  We can therefore add a final condition:

7. The Predictor made its prediction using extremely detailed information about you (e.g., your desires, motivations, tendencies, beliefs) as a basis.
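To see how condition 7 generates the correlation without any backward causation, here is a toy common-cause simulation (my own illustration, not Burgess’ model; the 5 percent noise figure is arbitrary).  A single underlying disposition drives both the brainscan-based prediction and the eventual choice, each with a little independent error:

import random

def trial(noise=0.05):
    # The common cause: your standing desires, motivations, tendencies, beliefs.
    disposition = random.choice(["one-box", "two-box"])
    flip = lambda d: "two-box" if d == "one-box" else "one-box"
    # Prediction and choice each track the disposition, with independent error.
    prediction = disposition if random.random() > noise else flip(disposition)
    choice = disposition if random.random() > noise else flip(disposition)
    return prediction == choice

trials = 100_000
hits = sum(trial() for _ in range(trials))
print(hits / trials)  # about 0.905: reliable for both choices, with no backward causation

The prediction here is sensitive to facts about you individually, is fallible (per 4***), and yet is highly reliable with respect to both one-boxing and two-boxing, exactly as the specification demands.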

Summary so Far:


This concludes my specification of the problem. [18]  To recap, I believe Newcomb’s Paradox to involve the following seven conditions:
1*. There are two boxes: the transparent box has $10,000 in it that is visible to you; the opaque box has $0 or $1,000,000 in it that is not visible to you.
2. You can choose both boxes or only the opaque box.
3*. If the Predictor predicted that you will take only the opaque box, it placed $1,000,000 in the opaque box; if it predicted that you will take both boxes, it placed $0 in the opaque box.
4***. The Predictor is highly, but not perfectly, reliable, with respect to both one-boxing and two-boxing choices.
5*. Your choice to one-box or two-box is based on (i) your desire to maximize the money you acquire and (ii) a long process of reflective deliberation.
6. You know that the Predictor has made a sufficient number of previous predictions so as to convince you that its predictions are not a matter of luck.
7. The Predictor made its prediction using extremely detailed information about you (e.g., your desires, motivations, tendencies, beliefs) as a basis.

Is this set of conditions sufficient to determine a general solution?  Is it consistent so as to avoid contradiction or impossibility?  We turn now to the no-boxing responses to the paradox.

Continued: Part 3

Newcomb’s Paradox: The ‘No-Box’ Solution (3)


 Continued From: Part 2

No-Box Solutions: (i) Underdetermined


No-boxers claim that the paradox is either (i) underdetermined or (ii) overdetermined.  We shall start with the first claim.  (1) Kavka raises the worry that, since the problem does not explicitly state that the Predictor bases its prediction on a causal relationship, the relationship may be accidental (Kavka 273).  But whether it is accidental or causal will affect which possible worlds count as closest to the actual world, and hence, which possibilities are relevant to our assessment of outcomes. [19]  For example, “likeness of causal laws generally takes precedence over likeness of particular events.  But it is highly doubtful that likeness of coincidental correlations takes precedence over likeness of particular events” (Kavka 274).  Thus, we will use one set of possible worlds to evaluate modality if the relationship is causal and another set if the relationship is accidental.  Since the problem does not tell us the nature of the relationship, we cannot choose either set, and so the problem is underdetermined.  However, for reasons already stated, the problem suggests that we should regard the relationship as non-accidental, and hence, as causal along the lines of the informed prediction.  So Kavka’s skeptical worry is initially met.

Kavka raises a further objection, arguing that, even if the relationship is causal, the same background conditions that were present in prior predictions must continue to be present in order to support this causal relationship in our current decision.  “Not knowing how causation operates in the game, can we safely make this assumption?” (Kavka 273).  Kavka clearly believes that we cannot, and so the problem remains underdetermined.  However, we have already specified the causal relationship between your choice and the Predictor’s prediction: both are caused by your desires, motivations, tendencies, and beliefs.  Furthermore, the problem does not invite us to assume that you are in a relevantly different situation from previous trials.  Thus, every necessary background condition and causal relationship may reasonably be held to be fixed from previous trials.  Of course, there is the skeptical possibility that the causal laws or background conditions may suddenly change or be very different in some possible worlds.  But given that we are assessing this problem in the actual world (or in possible worlds very close to the actual world), and since the specification does not depend on any unusual background conditions or causal laws, we should take for granted that the laws of nature and the required background conditions will continue to hold.  Thus, I do not think that Kavka’s worries are reasonable or charitable.

 (2) Maitzen and Wilson believe that the problem is “ill-formed” and underdetermined in such a way that the underdetermination “blocks the very set-up of the problem[,…] regardless of variations in such things as the Predictor’s degree of reliability, the basis on which the prediction is made, or the amount of money in each box” (Maitzen and Wilson 151).  The problem involves a “hidden” and “vicious” [20]  regress that makes it impossible for anyone to even understand the problem, much less solve it.  The regress is as follows.  When presented with the problem, you can be asked: “how many boxes will you take?”  You may respond that it depends on the circumstances.  Which circumstances? “The answer, of course, is ‘circumstances in which you believe that the opaque box’s contents depend on the Predictor’s prediction of how many boxes you will take.’” (Maitzen and Wilson 153).  But what circumstances are those?  Well, ‘how many boxes you will take’ itself depends on what you believe the Predictor to have predicted about how many boxes you will take.  Thus, the circumstances that determine how many boxes you will take are those ‘circumstances in which you believe that the opaque box’s contents depend on the Predictor’s prediction of how many boxes you will take in circumstances in which…’ and so on ad infinitum.  Consequently, we have an endless regress. 

One can trivially respond that the contents of the box do not depend on the Predictor’s prediction, and so choose two-boxing.  But then “Newcomb’s problem has a trivial two-box solution which deprives the problem of any interest” (Maitzen and Wilson 154-5).  The contents of the box are not fixed randomly, as we have already argued.  Indeed, the problem explicitly declares that the contents are fixed by the Predictor in accordance with its prediction, so the trivial two-box response does not work.  And so we are left with a regress.  Thus, “the circumstances of a Newcomb’s choice turn out to be impossible to describe in finitely many words. Since none of us can understand an infinitely long description, none of us can understand the circumstances which allegedly define a Newcomb’s choice” (Maitzen and Wilson 154-5).

One can respond by refusing to accept their infinite and self-referential description of the circumstances as legitimate.  So long as there is a finite and non-self-referential way of describing and specifying the circumstances, their objection fails.  The way the problem has been specified here meets this criterion: the specification is finite and involves no self-reference, and so it is fully understandable.  When asked about the circumstances of your choice, you can point to the seven conditions as giving the circumstances surrounding your choice.  On the basis of this specification, you can appeal to either the dominance principle or calculations of expected utility [21] to justify your decision one way or the other and thereby make a (debatably) rational choice.  Maitzen and Wilson can choose to describe the circumstances in a way that leads to a regress, but the problem need not be so described, and so it need not be incomprehensible or underdetermined in this way.  Consequently, Maitzen and Wilson’s argument fails.

 

No-Box Solutions: (ii) Overdetermined


 Having reviewed and rejected the underdetermined no-boxing solutions, I turn now to the overdetermined responses.  (1) Slezak takes the paradox to be “perfectly clear, fully specified but formally paradoxical” (Slezak 285).  However, he believes that the logical features of the paradox are overspecified and lead to a contradiction.  He writes, “it is not that we can not understand the circumstances presupposed in the problem; rather, when we do understand them properly we recognize the logical incoherence of the problem and the pointlessness of the choice” (Slezak 295).  Slezak locates the source of the paradox in the self-referential nature of the choice.  As such, it is similar in structure to the liar paradox and leads to a contradiction (Slezak 295, 296).  When faced with the choice, Slezak believes we have the propositions
 (x) I choose (a)
 (y) The Predictor predicts (b)
where (a) is my decision to one-box or two-box and (b) is the Predictor’s prediction of one-boxing or two-boxing.  Since I hope to outsmart the Predictor, I two-box and hope that the Predictor predicted one-boxing.  This means that I am choosing the opposite of what the Predictor predicted, or
 (x) I choose ~(y)
However, since the Predictor is not wrong in its predictions, the Predictor predicts whatever I choose, or
 (y) The Predictor predicts (x)
So we can substitute and get
 (x) I choose ~(The Predictor predicts (x))
Since the Predictor is going to get the prediction right, (The Predictor predicts (x)) comes to the same thing as (x), so we can substitute (x) for it and get
 (x) I choose ~(x)
which means “I choose the opposite of whatever I choose” (Slezak 296).  Since this is impossible, Slezak believes that the problem involves an internal contradiction.  As he concludes, “the [Predictor] acts as an intermediary serving to externalize what is, in fact, a loop in one’s attempt to second-guess one’s self… [T]he [Predictor]… only extends the loop and does not essentially alter the self-contradictory nature of the decision problem” (Slezak 297).

 I believe that Slezak’s argument can be resisted in several places.  First, the argument relies on the claim that the Predictor is always right, and therefore, its prediction is equivalent to your choice.  However, the Predictor’s prediction is not equivalent to your choice.  The Predictor can get the prediction wrong.  Slezak’s argument requires the Predictor to be perfectly reliable, and we have already argued that this is not a charitable interpretation of the problem.  Second, suppose that the Predictor is perfectly reliable.  The argument only shows that it is impossible to choose the opposite of what the Predictor predicts, and so you cannot outsmart the Predictor.  There is no contradiction if the Predictor and the chooser are in sync.  The argument leads to the simple conclusion that “I choose whatever the Predictor predicts” or the tautology that “I choose whatever I choose.”  These are not problematic at all.

Third, the argument itself contains an internal inconsistency.  Slezak assumes in one step that you choose the opposite of whatever the Predictor predicted, meaning that you could make the Predictor’s prediction false.  But in the next step, he assumes that you did not (indeed, cannot) make the Predictor’s prediction false.  Now if the Predictor is not perfectly reliable, then both situations are possible, but not at the same time.  They are mutually exclusive premises and so must be conditional if they are used in the same argument.  However, in Slezak’s argument they are not conditional and so it is no wonder that his argument leads to contradiction, for the argument is structurally contradictory.  Thus I conclude that Slezak’s argument fails.

(2) Priest takes the Newcomb problem to be a rational dilemma in which one is rationally required to do incompatible things.  In this case, this means that “you ought to choose one box, and that you ought to choose both boxes” (Priest 13).  He argues that if you choose just the opaque box, then you get whatever is now in the opaque box.  If you choose both boxes, then you get whatever is in the opaque box and the extra $10,000.  Since choosing both boxes dominates choosing just the opaque box, it is rational to two-box.  However, if you choose both boxes, then the Predictor knew that you were going to choose both boxes.  So there is $10,000 in the clear box and nothing in the opaque box, and you will get $10,000.  If instead you choose just the opaque box, then the Predictor knew that you were going to one-box.  So there is $1,000,000 in the opaque box, which is what you will get.  Since $1,000,000 > $10,000, you should one-box (Priest 13).  Consequently, “one way or the other, one is going to be rationally damned. Ex hypothesi, rationality gives no guidance on the matter - or rather, it gives too much, which comes to the same thing” (Priest 15).

One can respond to Priest in several ways.  First, we might agree with Priest that rationality initially recommends two contradictory strategies.  But this may be because rationality is a cluster concept that is “equally associated with both the evidential and the causal criteria, since, in the formation of that notion, circumstances in which their dictates would diverge were not anticipated. Thus, what to say about such circumstances is not determined by our present idea and involves some extension of it” (Horwich 443).  Horwich believes that one can resolve the conflict within the concept of rationality and choose one principle over the other as the most rational principle to follow.  He argues that evidential theory (the maximization of evidential expected utility, MEU) is the “more plausible candidate” compared to causal decision theory (and the dominance principle) (Horwich 443-4). [22]   Whether he is correct to prefer evidential theory to causal decision theory is a matter for another paper.  The point here is that one need not take rationality to be inherently contradictory.  Instead, while both principles are rational to some degree, one principle may be more rational than the other, and hence, it is the principle one rationally ought to follow.  Thus, there is no rational dilemma.

Second, what we should do may depend on which of these conditionals are true.  But Priest’s argument relies on taking them all to be true at the same time.  As Burgess notices, one of Priest’s conditionals implies that your psychological state was that of a one-boxer, while the other implies that your psychological state was that of a two-boxer (Burgess, “Conditional” 333).  But one cannot have both a one-boxing and a two-boxing psychological state at the same time.  Consequently, “it must not be imagined that if you one-box you will become rich (because your [brainscan] was that of a one-boxer), while also imagining that if you two-box you will remain poor (because your [brainscan] was that of a two-boxer)” (Burgess, “Conditional” 333). [23]  Since not all of these conditionals can be true at the same time, and since this is necessary to generate the rational dilemma, Priest’s argument fails. 

(3) Mackie’s article surveys many possible specifications and determines that some lead to one-boxing while others lead to two-boxing, yet all diverge “in one way or another from what it is natural to take as the intended specifications” (Mackie 222).  The one-boxing solutions rely on trickery, backward causation, repeated plays, or “a choice not about what to take on a particular occasion but about what sort of character to cultivate in advance” (Mackie 222).  We have already rejected trickery and backward causation as uncharitable specifications of the problem.  Since you have only one opportunity to make this choice, the repeated-plays scenario can also be rejected.  And because you are in the second stage of the problem, it is too late to cultivate a one-boxing psychology in order to influence the Predictor’s prediction.  Thus, “these situations are all off-colour in some respect” (Mackie 222).  However, the two-boxing solutions are also “off-colour.”  Mackie claims that the solutions that recommend two-boxing are cases “where the player does not really have an open choice…, or where the seer does not really have predictive powers, and his past successes must be set aside as coincidences” (Mackie 222).  Consequently,
[t]here is no conceivable kind of situation that satisfies at once the whole of what it is natural to take as the intended specification of the paradox… While the bare bones of the formulation of the paradox are conceivably satisfiable, what they are intended to suggest is not.  The paradoxical situation, in its intended interpretation, is not merely of a kind that we are most unlikely to encounter; it is of a kind that simply cannot occur. (Mackie 223)
Thus, the paradox is overdetermined and must be rejected as legitimate.

I agree with Mackie that taking the method of prediction to be purely coincidental is illegitimate, and so the Predictor must have genuinely predictive powers.  However, I disagree with Mackie that in this situation the chooser does not really have an open choice, for reasons already stated.  In fact, I find it very plausible that a Predictor, using extremely detailed information about you, could predict whether you one-box or two-box with a high degree of reliability and without relying on luck, even if your actions are not determined.  We already know, through economics, psychology, and sociology, that people’s actions are predictable to a large degree using, among other things, facts about their beliefs, desires, family history, and socioeconomic status.  Libertarian freedom is only in danger if people’s actions are completely predictable.  However, we have already rejected the specification that the Predictor is perfectly reliable and hence, that your choice is perfectly predictable.  All that needs to be true is that your choice is highly predictable based on facts about you, and this seems to be a very plausible possibility.  Thus, I conclude to the contrary that this paradox, in its intended interpretation, is of a kind that can occur, and so Mackie’s argument fails.

Conclusion


Having shown that every major attempt to dissolve the paradox by arguing that it is either underdetermined or overdetermined has failed, I conclude that the paradox is legitimate.  Its intended specification is clear and understandable and involves no contradiction or serious implausibility.  However, though I believe the paradox to be legitimate, I am still unsure as to the choice one should make.  There is a sense in which both choices are rational and irrational.  The two-boxing choice is rational based on the fact that one cannot now actually change the contents of the box, and so one might as well two-box.  And yet two-boxers must squarely face the fact that if they two-box, they will likely get only $10,000.  The Predictor will likely have anticipated whatever reasoning they use to make their decision, and so their decision to two-box will simply confirm the Predictor’s prediction.  Consequently, they will not maximize the money they obtain.  Similarly, one-boxers must believe that their choice, though not a cause of the prediction, is correlated with it because both derive from the same common cause (e.g., their beliefs, desires, motivations, tendencies).  Your best evidence as to what is now in the opaque box is your decision to one-box or two-box, and a one-boxing decision will be vindicated by finding the $1,000,000 that was there all along, confirming the highly reliable Predictor’s ability to predict your choice.  So in a sense, you do now have some influence over the contents of the box.
Is this irrational?  Two-boxers will think so.  As Gibbard and Harper claim, “the moral of the paradox [is that if] someone is very good at predicting behavior and rewards predicted irrationality richly, then irrationality will be richly rewarded.” [24]  However, one-boxers will counter that
one can only be amused by those advocates of [two-boxing] who… realize that takers of [both boxes] almost always get but $[10,000] whereas takers of [one box] almost always get $[1,000,000], and proceed to bemoan the fact that rational people do so much worse than irrational ones.  Despite their logical scruples, they seem to have a curiously low standard of what constitutes a good argument, at least in the context of Newcomb’s Problem.  Evidently they would rather be right than rich. (Bach 412)

Which of the two strategies is the most rational I will not argue for here, for I am still unsure myself.  However, I am sure that the paradox cannot simply be dismissed as a pseudo-problem on the grounds that it is underdetermined or overdetermined.  When the problem is specified in the most charitable and reasonable way, Newcomb’s Paradox is a legitimate paradox, though it remains deeply paradoxical.  Thus, the ‘no-box’ solution is no solution at all.

Footnotes


[1] (Clark 142)

[2] For example, if the Predictor is accurate 95 percent of the time, then your expected utility for one-boxing is .95*$1,000,000 + .05*$0 = $950,000, which is greater than the expected utility for two-boxing, which is .05*$1,010,000 + .95*$10,000 = $60,000.

[3] That is, you will have $10,000 instead of $0 if the Predictor predicted one-box, and $1,010,000 instead of $1,000,000 if the Predictor predicted two-box.

[4] A variation of the problem in which one is explicitly forbidden to further specify the problem may be interesting to consider in its own right, but I will not pursue that variation here.

[5] Consider Burgess’ response on this issue:  “To the extent that we are scientifically minded we tend to take little interest in a problem if we are simply told to accept that certain aspects of it are essentially inexplicable. On the one hand, if it is acknowledged that the supposedly inexplicable problem is one that could never exist, then questions about that problem are likely to be as interesting and scientific as questions about unicorns, goblins and fairies. And on the other hand, if it is acknowledged that the supposedly inexplicable problem is one that could exist, then we are essentially being told to reject the very assumption that makes the scientific outlook so interesting. For we are being required to renounce the idea that if something can happen, then that something is explicable.  In other words, we are being forced to reject the assumption that if a scenario is possible (even if not with today’s technology), then that scenario could, with sufficient knowledge, be explained.” (Burgess, “Conditional” 330-1).  I do not assume, like Burgess, that if something is possible, then it can be understood by us.  The mind-body problem seems to be, as McGinn argues, an example of an actuality that is beyond comprehensibility.  However, I would not forbid attempts to explain how the mind and the body relate.  Similarly, I do not think that one should be forbidden from exploring the nature of Newcomb’s paradox.

[6] As Burgess explains it, “Because the first stage precedes the brainscan, it spans a time during which you have an opportunity to influence the nature of your [brainscan]. The significance of this can hardly be understated. By influencing the nature of your [brainscan], you can influence the alien's prediction and, in turn, influence whether or not the $1[million] is placed in the opaque box. After you've been brainscanned and have thus entered the second stage, you are no longer in a position to influence whether or not the $1[million] is placed in the opaque box” (Burgess, “Unqualified” 280). 

[7] Consider the SEP entry in the article on causal decision theory: “In Newcomb's Problem an agent may choose either to take an opaque box or to take both the opaque box and a transparent box. The transparent box contains one thousand dollars that the agent plainly sees. The opaque box contains either nothing or one million dollars, depending on a prediction already made. The prediction was about the agent's choice. If the prediction was that the agent will take both boxes, then the opaque box is empty. On the other hand, if the prediction was that the agent will take just the opaque box, then the opaque box contains a million dollars. The prediction is reliable. The agent knows all these features of his decision problem” (Weirich).  Notice the past tense and the explicit assertion that the prediction has already been made when you are confronted with the choice.  Therefore, contrary to Burgess, you are already in the second stage.

[8] McKay writes that “the deposit in the boxes has already happened, and can no longer be affected by you - or by anyone at all… you cannot affect the prior actions of the Predictor” (McKay 187-8).  Maitzen and Wilson also agree (Maitzen and Wilson 157), as does Burgess, who writes that  “after you have been brainscanned and have thus entered the second stage, you are no longer in a position to influence whether or not the $1[million] is placed in the opaque box”  (Burgess, “Conditional” 336).

[9] Thanks to Tanya Kostochka for pointing this out.

[10] (Ahern 486)

[11] Someone may wish to discuss a solution involving 4**.  Again, this may be an interesting variation, but I think it is less true to the original intentions of the paradox.

[12] Many philosophers assume this explicitly in the problem.  For example, Maitzen and Wilson write that “the crucial assumption of Newcomb’s problem [is] that you do wish to maximize your winnings” (Maitzen and Wilson 154).  Priest also claims that your aim in choosing is “to maximize your financial gain” (Priest 12).  As such, I do not take this to be an unfounded addition to the original problem.  If someone asserts that the original formulation is underdetermined precisely because the chooser’s desires and attitude toward risk are unstated, then I think one must concede that the original problem is underdetermined.  However, it is not underdetermined in a very interesting way, since the paradox no longer arises from the formal features of the problem but from an indeterminacy in what “your” desires and attitudes are.  Thus, the “rational” answer will vary from person to person (e.g., if one needs a guaranteed $10,000, it is obviously rational to two-box).  However, such a solution trivializes the problem when it seems that no solution should be trivial.  This further specification prevents a trivial solution and adheres to what I take to be the original, though unstated, intentions of the problem.

[13] This objection also applies to the foreknowledge possibility.  If the Predictor foreknows what will happen or is outside of time and so has already “seen” what will occur in some sense, then the Predictor is not really predicting your choice, but reporting your choice through its distribution of money in the boxes.

[14] Someone may suggest that perhaps the Predictor is infallible, but makes false predictions on purpose, perhaps to give the impression of fallibility.  However, the problem intends us to treat the Predictor as trying its best to predict what you will decide to do, and not as engaging in some elaborate form of trickery, so this is an irrelevant and uncharitable response.

[15] As Burgess writes, “if Newcomb’s problem is presented in the supposedly inexplicable manner of the side-show charlatan it is worth discussing only to the extent that such discussion exposes the fraud” (Burgess, “Conditional” 332).

[16] Mackie worries that this interpretation makes “the question ‘What is it reasonable for the player to do?’ […] idle… Each player will rigidly follow his own characteristic style of reasoning and so do whatever the psychologist-seer has predicted, which may or may not be in accordance with our pseudo-recommendation” (Mackie 219).  That is, determinism makes the very notion of a ‘rational choice’ incoherent, for the agent will simply do whatever he or she is determined to do, and the psychologist will predict this.  However, a libertarian interpretation of free will is open to us, and a suitable notion of choice can be developed by compatibilist or soft determinist approaches to free will.  Furthermore, if the Predictor is not infallible, it is not simply describing what will happen in the future; it is making a genuine prediction that could turn out to be wrong.

[17] Even modest predictions are enough to generate the paradox.  Suppose that the Predictor is only a measly 60 percent reliable.  Then one-boxing still yields more expected money than two-boxing: .6*$1,000,000 + .4*$0 = $600,000 > $410,000 = .6*$10,000 + .4*$1,010,000.  Such predictive capacities are extremely plausible, and some already exist in certain fields with respect to certain questions.

[18] Others may find further specifications necessary, but these specifications are those I take to be legitimate and necessary in order to assess the claims of no-boxers and to determine that the no-boxing solution is not the correct response to the problem.

[19] Phyllis McKay’s objection is similar to Kavka’s and can be answered in the same way.  She claims that since the problem does not specify whether the relationship between your choice and the prediction is causal or not, one cannot choose: “If you still think there must be no causal connection since the action of the Predictor really is in the past, you should two-box. Alternatively, if you think there probably is some cheating going on undetected by you, then you think there probably is a causal connection, and you should one-box” (McKay 188).  Having specified the relationship as involving a common cause, McKay’s criticism no longer applies.

[20] (Burgess, “Conditional” 321)

[21] As Burgess writes, “you simply do not need to predict your own choice… In fact you can calculate conditional expected payoff values for each of your two options even when you have no idea what you will decide to do” (Burgess, “Conditional” 328).

[22] He claims that, first, the causal criterion is not “uniform,” requiring us to divide probabilistic states into parts that are “causally independent of the choice, and those parts… that are not” (Horwich 444).  The evidential criterion is comparatively simple and uniform.  Second, “there are circumstances in which every single one of an agent’s choices will be branded by the causal rule as irrational” (Horwich 445).  Third, the causal theory “embodies an arbitrary time-bias,” insisting that causes temporally precede effects (Horwich 446).  Fourth, and finally, causal theorists are committed to conflicting desires and hopes: “they recommend taking the $[10,000] in Newcomb’s situation, they also recommend attempting to make oneself into the sort of person who will decline it… This means that you simultaneously have a pro-attitude towards the agent’s hoping to X rather than Y, and a pro-attitude towards Y rather than X actually being done” (Horwich 448).

[23] More fully, “When it is said that the conditional expected payoff value for one-boxing is high, it is implicitly assumed (for reasons of causal coherence and plausibility) that your brainstate at the time of the brainscan was that of a one-boxer. But when it is then said that the conditional expected payoff value for two-boxing is low, it is implicitly assumed (again for reasons of causal coherence and plausibility) that your brainstate at the time of the brainscan was that of a two-boxer.  Either of these two assumptions could be adopted, but no one can consistently adopt both” (Burgess, “Conditional” 333).

[24] Quoted in (Slezak 281). 

Works Cited and Consulted


 Bach, Kent.  “Newcomb’s Problem: The $1,000,000 Solution.”  Canadian Journal of Philosophy
 17.2 (1987): 409-425. JSTOR. Web. 19 Jan. 2012. 

Bar-Hillel, Maya, and Avishai Margalit. “Newcomb’s Paradox Revisited.”  The British Journal
 for the Philosophy of Science 23.4 (1972): 295-304. JSTOR. Web. 19 Jan. 2012.

Burgess, Simon. “Newcomb’s Problem and Its Conditional Evidence: A Common Cause of
 Confusion.” Synthese  184.1 (2010): 319-339. Springer Link. Web. 19 Jan. 2012.

---. “The Newcomb Problem: An Unqualified Resolution.” Synthese 138.2 (2004): 261-287.
 JSTOR. Web. 19 Jan. 2012.

Clark, Michael. Paradoxes from A to Z. 2nd ed. New York: Routledge, 2007. Print.

Horwich, Paul. “Decision Theory in Light of Newcomb's Problem.” Philosophy of Science 52.3
 (1985): 431-450. JSTOR. Web. 19 Jan. 2012.

Kavka, Gregory.  “What is Newcomb’s Problem About?” American Philosophical Quarterly
 17.4 (1980): 271-280. JSTOR.  Web. 19 Jan. 2012.

Mackie, J.L. “Newcomb’s Paradox and the Direction of Causation.” Canadian Journal of
 Philosophy 7.2 (1977): 213-225. JSTOR. Web. 19 Jan. 2012.

Maitzen, Stephen, and Garnett Wilson. “Newcomb’s Hidden Regress.” Theory and Decision
 54.1 (2003): 151-162. JSTOR. Web. 19 Jan. 2012.

McKay, Phyllis. “Newcomb’s Problem: The Causalists Get Rich.” Analysis 64.2 (2004): 187-
 189. JSTOR. Web. 20 March 2012.

Priest, Graham. “Rational Dilemmas.” Analysis 62.1 (2002): 11-16. JSTOR. Web. 20 March
 2012.

Sainsbury, R.M. Paradoxes. 3rd ed.  New York: Cambridge, 2009. Print.

Slezak, Peter. “Demons, Deceivers and Liars: Newcomb’s Malin GĂ©nie.” Theory and Decision
 61.1 (2006): 277-303. Springer Link. Web. 19 Jan. 2012.

Weirich, Paul. “Causal Decision Theory.” Stanford Encyclopedia of Philosophy. Stanford
 University, 25 Oct. 2008. Web. 23 April 2012.