Gambling with the devil

by Chris Bertram on November 1, 2003

Here’s a nice puzzle, which I was told about over dinner last night. I’m not sure who devised it, though there’s a reference in a paper by Roy Sorensen:

You are in hell and facing an eternity of torment, but the devil offers you a way out, which you can take once and only once at any time from now on. Today, if you ask him to, the devil will toss a fair coin once and if it comes up heads you are free (but if tails then you face eternal torment with no possibility of reprieve). You don’t have to play today, though, because tomorrow the devil will make the deal slightly more favourable to you (and you know this): he’ll toss the coin twice but just one head will free you. The day after, the offer will improve further: 3 tosses with just one head needed. And so on (4 tosses, 5 tosses, … 1000 tosses …) for the rest of time if needed. So, given that the devil will give you better odds on every day after this one, but that you want to escape from hell some time, when should you accept his offer?
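The improving odds are easy to tabulate: on day n the devil tosses n coins, so the chance that at least one comes up heads is 1 - (1/2)^n. A quick sketch in Python (the function name is mine):

```python
# Probability of escaping if you take the devil's offer on day n:
# n fair tosses, freed by at least one head.
def escape_prob(n: int) -> float:
    return 1 - 0.5 ** n

for day in (1, 2, 3, 10):
    print(day, escape_prob(day))  # 0.5, 0.75, 0.875, ~0.999
```

Already by day 10 the chance of losing is below one in a thousand, which is what gives the puzzle its bite: every day the marginal gain shrinks, but it never quite reaches zero.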

{ 94 comments }

1

Ayjay 11.01.03 at 1:56 pm

An indication, perhaps, of the ways philosophical inclinations differ: what I find far more interesting than the question of what you should do in that situation is what you would do. If there is any situation in which the thesis that we are rational actors who make rational choices that maximize our well-being must give way, this is surely it. Remember: you are at every instant in torment that exceeds anything you could possibly imagine from your earthly experience. There is no element of your being that is not suffering excruciating agony. Even if the fraction of your rational mind that can still function in these circumstances is telling you that you need to wait a few more days before flipping the coin — because, after all, what is a few days in comparison with an eternity of such torment? — will you obey that rational prompting? To me, it would be interesting to know how many people would jump at the coin-flip opportunity on the very first day. . . .

2

markus 11.01.03 at 2:09 pm

St. Petersburg in reverse? By my gut I’d go for 30 days.

3

Jonathan Ichikawa 11.01.03 at 2:13 pm

“Very, very intense torture” is a concept we should be able to deal with in more or less the standard way. “For eternity” is the part that’s likely to trip things up.

Following is what is surely a bad argument for the conclusion that if I’m a rational agent, then for any day, it’s too soon to flip. Unfortunately, I can’t see what’s wrong with the argument.

Suppose it’s now day k. I could take the chance now, or wait until tomorrow. By choosing to wait until tomorrow, I incur the disutility of an additional day of torture — but I also gain some finite increase in my probability of an infinite utility, escape from hell. That probability gain should therefore carry greater weight in a prudential judgement than the finite day of torture, and I should wait another day.

Of course, if this is right, it suggests that we should NEVER take the devil’s offer… and that’s pretty clearly just dumb. I’m not sure what this tells us, other than that this is an interesting question.

For me personally, I think I’d probably go about five days before flipping. I make no claim as to the rationality of that decision.
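Jonathan Ichikawa’s argument above can be made concrete by capping the infinite loss at a large finite disutility M: for any finite M there is a finite optimal flip day, but it recedes without bound as M grows, and only at M = infinity does “never” win. A rough sketch (the constants and names are mine):

```python
# Expected disutility of flipping on day n, at a cost of 1 per day of
# torture and a large finite stand-in M for the infinite loss:
# (n - 1) days suffered for certain, plus a 2^-n chance of losing M.
def best_day(M: float, horizon: int = 200) -> int:
    return min(range(1, horizon),
               key=lambda n: (n - 1) + 0.5 ** n * M)

for M in (10, 1000, 10 ** 6):
    print(M, best_day(M))  # optimal day: 3, 9, 19 -- growing with M
```

Waiting one more day costs 1 but halves the chance of losing M, so it pays exactly while 2^(n+1) < M; as the cap is removed the rational flip day runs off to infinity, which is Ichikawa’s conclusion.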

4

matt 11.01.03 at 3:12 pm

This was also in Vann McGee’s paper “An Airtight Dutch Book” from a few years ago in Analysis. It was picked for The Philosophers’ Annual that year. The punch line was that if you were completely rational in the economic sense of the term (maximizing your expected utility), you’d spend forever in hell. It’s a great paper.

5

Nicholas Weininger 11.01.03 at 3:34 pm

Math geek here.

Jonathan Ichikawa’s argument that you should never take the offer holds if disutility is linear in the number of days spent in hell, or is some other function of days spent in hell that goes to infinity as the number of days goes to infinity.

But if your total disutility for an eternity in hell is finite– if, for example, you get used to eternal torment a bit more each day, so that your disutility for the kth day in hell is only half (or only 99.99%) of your disutility for the (k-1)st day– then the argument does not hold, and there is a finite flipping point that minimizes expected disutility. Perhaps counterintuitively, I think (this is off-the-top-of-the-head only, correct me if I’m wrong) that in this case the slower you get used to hell, the later you should flip.
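Nicholas Weininger’s conjecture can be checked numerically. Suppose day k in hell costs r^k for some habituation factor r < 1, so eternity costs only the finite amount r/(1 - r). Then expected disutility has a finite minimiser, and the closer r is to 1 (the slower you get used to hell), the later you should flip, as he guessed. A sketch under those assumptions (the names are mine):

```python
# Day k in hell costs r**k (habituation: each day hurts a factor r
# as much as the last).  Flip on day n: suffer days 1..n-1 for sure,
# then with probability 2**-n lose and suffer every day from n on.
def best_flip_day(r: float, horizon: int = 100) -> int:
    def disutility(n: int) -> float:
        past = r * (1 - r ** (n - 1)) / (1 - r)  # sum of r^k, k=1..n-1
        tail = r ** n / (1 - r)                  # sum of r^k, k=n..inf
        return past + 0.5 ** n * tail
    return min(range(1, horizon), key=disutility)

for r in (0.5, 0.9, 0.99, 0.999):
    print(r, best_flip_day(r))  # 1, 3, 6, 9: later flips as habituation slows
```

In this model fast habituation (r = 0.5) says flip immediately, while near-permanent suffering (r = 0.999) says hold out past a week, confirming the comment’s counterintuitive guess.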

6

hans ze beeman 11.01.03 at 3:45 pm

It’s all an infernal sadistic game – there are no fair coins in hell. Lucifer’s roaring laughter can be distantly heard, mocking your sophistic toils ;)

7

jdsm 11.01.03 at 4:31 pm

More important than the question of whether you are a rational agent is whether you are a moral agent. If an all-good God exists and you are in hell, it is because you deserve to be there. Thus, you should take your eternity of torment with good grace. I certainly would.

Praise the Lord.

8

zaoem 11.01.03 at 4:33 pm

The rational answer to this question is obviously that it depends on your discount factor, your relative utility for being in hell or not, and the extent to which you trust the devil.

9

Seth Edenbaum 11.01.03 at 5:13 pm

To a non philosopher such as myself, it seems that analytic philosophy, individualism, and a preference for economic utility are all tied together, and in a way that makes me cringe. Why the constant need for resolution? The philosophy of the ‘rational actor’ is nothing more than a philosophy of technocracy. People live most of their lives irrationally; why not spend time observing and analyzing that? Why not ‘waste’ time studying Mozart? Why not allow yourself to be amused and intrigued by the aporias? Is there something immoral with accepting that in the most important debates, we have to settle for the resolution of individual cases? Are all lawyers living philosophically incomplete lives?
If curiosity thrives on ambiguity, then the world outside logical analysis is a far richer subject than the one within it. And analytic philosophy seems often to do little more than justify the economic activity of the incurious.

If I could stand the torment- physical, emotional etc.- I would stay in Hell until I got bored- with the weather, the conversation, or with eternity itself, and then I would take the chance.
I’m sorry if this seems rude, but excluding the curiosity one may have about acts judged to be criminal, I can’t understand those who use the restrictions of others to define the limits of their own questions.
It’s illogical.

10

praktike 11.01.03 at 5:19 pm

depends on discount factor, aka preference for pain now versus in the future.

11

scott h. 11.01.03 at 5:40 pm

Mathematically, it wouldn’t take that long. If you wait till the 4th day, you would get 4 flips, giving you a 93.75% chance of getting at least one head. At the 12th day, you would have a 99.976% chance. Each extra day would give you only a very slight increase to your chances. (Waiting till the 13th day would increase your chances by about 0.012%, to 99.988%.)

12

scott h. 11.01.03 at 5:52 pm

Then again, at the 13th day, you would still have a 1/8192 chance of losing. If you wanted to get the chances better than 1 in a million, you would have to hold out another week, until day 20.
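The exact figures are quick to check: the chance of losing if you flip on day n is (1/2)^n, so day 4 gives a 93.75% chance of escape, day 13 leaves a 1/8192 chance of losing, and the one-in-a-million mark first arrives on day 20. A sketch:

```python
# Chance of still losing (all tails) if you flip on day n.
def p_lose(day: int) -> float:
    return 0.5 ** day

print(1 - p_lose(4))  # 0.9375: the day-4 figure above
print(p_lose(13))     # 1/8192, about 0.012%
# First day on which the losing odds drop below one in a million:
print(next(d for d in range(1, 100) if p_lose(d) < 1e-6))  # 20
```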

13

Rook 11.01.03 at 6:00 pm

Quite frankly, if you’re in Hell it’s too late.

14

Erik 11.01.03 at 6:15 pm

Given what is known about the devil’s personality from published records, it appears he derives particular pleasure from giving earthlings the impression they experienced a great deal before screwing them over in a nasty way. Given this, I would not hesitate and would immediately go for the coin toss, under the assumption that the devil may derive little pleasure from torturing an earthling with such a predisposition.

15

Steve 11.01.03 at 6:53 pm

See also Psycho author Robert Bloch’s classic short story, “That Hell-Bound Train”, in which a man sells his soul for the chance to freeze time at the moment of his supreme happiness but can’t ever quite decide when to use it.

He’s a mean bastard, that Satan.

16

Walt Pohl 11.01.03 at 6:58 pm

Seth: To say that someone is a rational actor is not to say people should not spend time on Mozart. It is to say that people do not make mistakes in _judgement_. Rationality does not dictate we prefer cash to art.

17

jdsm 11.01.03 at 7:03 pm

Seth Edenbaum above comments that philosophising about what is rational to do is uninteresting since most people behave irrationally. We’d be better off analysing this instead.

This is a somewhat difficult idea to understand. While he is correct that many decisions are not based on reason, many are. To take a banal example from today, I went to visit my wife in the hospital and also needed to buy some food. The faster route is not past the supermarket so either on the way there or the way back I needed to take the longer route. I chose to go on the way back because I knew the parking restrictions would be lifted by that time.

Life is filled with these kinds of decisions and it’s not some over-analytical, logical lunacy that makes us make them. It’s because it improves our lives.

The rest of his post is just flannel. Philosophy was once called “a peculiarly stubborn attempt to think clearly” (sorry don’t know who by). Mr Edenbaum seems to think this is a waste of time and instead goes for the “believe what you like” approach.

18

wcw 11.01.03 at 8:08 pm

I’d almost certainly take the flip right away. Only on day one can you escape torment entirely.

19

Neel Krishnaswami 11.01.03 at 8:14 pm

Matt: did McGee’s paper assume that infinite utilities were involved? If so, then mathematical economists would tend to look askance at him: IIRC they tend to restrict total possible utility to a large-but-finite number. In that case then there is a finite time after which a rational actor would accept the deal.

Of course, even with the finite utility assumption, a Rawlsian actor with maximin preferences could have no preference between staying in Hell and accepting the deal…. (Cf. John Harsanyi’s “Can the Maximin Principle Serve as a Basis for Morality?”)

20

Seth Edenbaum 11.01.03 at 8:22 pm

To Walt Pohl and Jdsm: Art is the articulate, even brilliant, glossing over of conflict and contradiction. To take art seriously, as more than a simple diversion, is to choose what is complex, indirect, intelligent and most often logically wrong, over what is simple, direct and quite often technically speaking, right. This is not practical at least in the short term, but in the long term neither is simplicity. ‘Justice’ is not simple, and is defined in our literature as ‘Imperfect’ Justice, imperfect both because of the third party systems of communication we are forced to use -language etc.- and because we have a tendency to replace logic with art for reasons of simple desire. This means: 1- That we may each ‘desire’ an outcome and -2 We can not even agree on the meaning of words. What is the definition of the color ‘red’. “Well… I think it’s more orange actually” These are limitations to our experience which we will not escape. Art reminds us, indeed demonstrates to us, the subtlety of our perceptions, the subtlety of our ability to bend and twist things beyond recognition. It is a dangerous drug to take by choice, especially since we’re born drunk. However, in my leisure hours, when I am not rushing someone to the hospital or at work -on a construction site- I choose to remind myself of the ambiguities we face by going to museums, listening to Mozart and 50 Cent, and attacking conservative arguments on constitutional law. That way I know that when it comes time to make the important decisions, I will have prepared myself to face them, aware of my limitations, as best I can.
Philosophy is ‘an attempt’ to think clearly. Please don’t confuse that with clarity.

21

cj 11.01.03 at 10:50 pm

Knowing my pain threshold, I would opt in on day 4 or 5. With only a 1/16 to 1/32 chance of all tosses coming up tails, I like my odds. Besides, I’d probably be dead shortly thereafter if the heads did not come up.

22

Kieran Healy 11.01.03 at 11:18 pm

Neel -

IIRC they tend to restrict total possible utility to a large-but-finite number. In that case then there is a finite time after which a rational actor would accept the deal.

I think this evades the problem by putting an arbitrary cap on possible utility. The question is when a rational, maximizing agent would choose to take the deal. I don’t know that it’s a solution to say “I’m assuming an agent that rationally maximizes — but only up to this much.”

23

jim 11.01.03 at 11:59 pm

Depends: is God/Devil/Hell what I think they are or what Fundamentalists think they are? If Gerry Fallwell is correct, my inner moral sense says to go to hell as a conscientious objector. Such a stupid system can’t last if enough of us refuse to play the game!

24

john c. halasz 11.02.03 at 12:08 am

The proposed problem misses the theological nicety that the first condition of hell, regardless of any other description of deprivations or tortures involved, is the complete absence of hope. Perhaps the example would have more point if one substituted “capitalist” and “proletarian” with uncertain mortality in place of infinite (dis-)utility.

25

jennifer kleiman 11.02.03 at 12:36 am

Dealing with infinities is frequently counterintuitive. If one can experience an infinite amount of suffering by spending eternity in hell, then it is possible to experience an infinite amount of suffering. How much suffering is there per day? The most simple assumption is that each day’s suffering is finite and constant, say alpha. That is the case that leads to the amusing conclusion that one ought to wait forever to make the coinflip, to suffer that same alpha each day in order to improve one’s odds against suffering infinitely. But it’s by no means certain that alpha’s constant. And given the assumption of the possibility of infinite suffering, alpha could indeed be infinite… One might say even a second in hell is like an eternity of suffering. Maybe alpha is countably infinite. If one experiences an infinite amount of suffering each day in hell, then one ought to take the wager on day one and try to escape with zero suffering, otherwise it’s infinities of suffering and whether for one day or for a countable infinity of days won’t matter, it’s still turtles all the way down.

26

Neel Krishnaswami 11.02.03 at 1:54 am

Kieran: I’m afraid I don’t see the arbitrariness. Formally, utility functions are a calculational device inferred from a preference relation over bundles of goods (including possible “states of the world”, to handle uncertainty). Since this means that any monotonic transformation of a utility function is also a valid utility function, you can require utilities to be finite without loss of generality, right?

27

Zizka 11.02.03 at 2:50 am

Seth was basically questioning the value of rational choice theory. How high on the list of valuable ways of thinking should we put rational choice theory? How much does the rigorous formal solution of a variety of hypothetical or trivial (banal) questions on individual choice contribute to human welfare, however defined?

My guess is, not too much. Philosophy was probably defined as “a peculiarly stubborn attempt to think clearly” by some analytic epigone during the last 50 years. It doesn’t sound like Aristotle or Plato, anyway. And you can make your living doing that sort of thing, so obviously it’s the right thing to do.

On the other hand, the movements of formalization and hypotheticalization do make the material easier to handle and much less messy. To me the greatest philosophers were the ones who moved in both directions — inward toward clarity but also outward toward challenges. The movement of today’s philosophy is only in one direction.

28

Abiola Lapite 11.02.03 at 3:01 am

“Maybe alpha is countably infinite”

And maybe not. You seem to be implicitly assuming that time is discrete, but since this is Hell we’re talking about, the normal rules of physics obviously wouldn’t apply (under the assumption that time in our universe is indeed discrete, which isn’t obvious). If time is continuous, it could well be that α > ℵ*.

*Where ℵ refers to the cardinality of the integers, of course. The blog comments don’t seem to permit use of the <sub> tag, or I’d have been able to enter the symbol correctly.

29

Jon H 11.02.03 at 3:31 am

The Devil, of course, will flip the coin – into a bottomless abyss. So it never lands. Doh!

One should never make such bets while on a Death Star, either.

30

Jeremy Osner 11.02.03 at 3:32 am

I’ve thought of another reason why it is better never to take the bait: as long as you do not take it, you have hope — hopelessness is the fullness of torment in hell. As long as you have in mind, “tomorrow I can take my chance and likely get away from this torment”, you are not exposed to the horror of eternal damnation.

2 other points: people have talked about pain threshold and getting used to the constant torment — but hell is hell. You have no pain threshold, and the pain is at every instant as intense as at the first moment you entered hell. And consider who is tossing the coin — if Satan has promised that you will go free on heads, it is going to come up tails. Still, false hope is far, far better than none at all, even hope that you rationally know is false.

31

Jeremy Osner 11.02.03 at 3:35 am

It occurs to me that this could also be a component of one’s punishment, to be offered the flip every day knowing that it is in one’s interest never to accept it. There is no cheating the devil.

32

lazyman 11.02.03 at 4:31 am

zizka: in Plato and Aristotle’s lingo, “philosophy” means “love of wisdom” … and it seems pretty clear to me that one who loves wisdom would aim to think clearly. Modern “analytic” philosophy employs different techniques from the ones Plato used, but you’re not going to sell me on the idea that Plato and Aristotle didn’t place a premium on thinking clearly. Fer Xanthippe’s sake, look at Socrates’ life. If that guy — Plato’s mentor and hero — wasn’t all about trying to think clearly, then I don’t know what the hell he was up to.

33

Kieran Healy 11.02.03 at 4:59 am

Neel:

Kieran: I’m afraid I don’t see the arbitrariness. Formally, utility functions are a calculational device inferred from a preference relation over bundles of goods (including possible “states of the world”, to handle uncertainty). Since this means that any monotonic transformation of a utility function is also a valid utility function, you can require utilities to be finite without loss of generality, right?

My point involves theoretical rather than practical rationality, and theoretical rationality is clearly what this example is about. Are you suggesting that to solve this problem it’s not arbitrary to bound an agent’s utility scale? If so, I disagree, because it seems there’s no independent reason (i.e. independent of wanting to avoid the problem for theoretical rationality that the example illustrates) for embracing a bounded utility scale.

34

Zizka 11.02.03 at 5:04 am

Well, first of all, I didn’t say Socrates, and I didn’t say thinking clearly was a bad thing. I said that “thinking clearly” is not an adequate definition of philosophy.

My basic criticism is that in analytic the defining-and-formalizing clarification movement is almost the only one, and that the outward movement toward comprehensiveness and meeting challenges is systematically and aggressively neglected.

In Plato/Socrates I have seen a bit of this problem. There’s an initial phase translated “collection” (in the Statesman, I think) when you gather together all the things that seem sort of the same, followed by the analytic phase when you divide these things into their kinds (ending up with the featherless biped, etc.). I always have thought that Plato/Socrates was far too casual about the “collection” stage (which essentially equals “deciding on the problem”) — as if it were easy or self-evident.

35

Walt Pohl 11.02.03 at 6:31 am

Zizka: You’re missing the point. The devil would only offer this deal to a _believer_ in rational choice theory. Anyone else might flip the coin and get out of hell.

36

mitch 11.02.03 at 8:20 am

For the contemporary audience, Satan should be replaced by a spambot, and “flipping a coin” by “opting out”.

37

Anarch 11.02.03 at 8:55 am

Speaking as a set theorist(-to be, really), I think all this shows is that rules designed to work with finite objects come to a screaming halt when faced with infinite ones. [See, e.g. product topology, product measure... or heck, the rules of cardinal and ordinal arithmetic.] I don’t really see how any form of rational decision theory can meaningfully be transferred to the infinite without serious alteration; and whatever the alteration leads to, it certainly won’t be “rational” in the sense that a rational human actor could (or would) pursue it.

Let me once again recommend Harvey Friedman’s paper Enormous Integers to get some idea of just how damn big infinity is. [And that's just the pitifully small countable infinity there. Perish the thought of trying to figure out how big the smallest admissible ordinal is, or omega-1, or an inaccessible.] Apart from being fun in its own right, it might put some of these questions on the infinite into perspective.

38

schnauze 11.02.03 at 12:11 pm

It may be a little late to ask a (largely) irrelevant question…but after I read this post the first time it got me to thinking about a field I know nothing about…

My question: have rational choice theorists ever discussed the choice to lead an ascetic life? I don’t know how this would be theorized, maybe as lack-of-pain-that-comes-from-the-denial-of-pleasure(?).

39

Mikhel 11.02.03 at 3:18 pm

One interesting thing to note is the diction in the original question:

“You are in hell and facing an eternity of torment, but the devil offers you a way out, which you can take once and only once at any time from now on. Today, if you ask him to. . .”

I feel it hardly nit-pickish to note that the concept of “today” and “tomorrow” would with difficulty be applied to an eternity in torture.

The entire problem is based upon a passing of time which probably would not be consciously noticeable to an actor in eternity in hell.

40

Seth Edenbaum 11.02.03 at 4:22 pm

Walt Pohl says: “Zizka You’re missing the point. The devil would only offer this deal to a believer in rational choice theory. Anyone else might flip the coin and get out of hell.”
Brilliant!

I’ve never paid much attention to rational choice theory, but it seems odd that discussions based on it should appear on a site named for the ‘Crooked Timber’ of humanity. Do you think Donald Rumsfeld does not have access to more information than any of us? How is it possible then that he is so deluded? How is it possible, within the context of ‘rational action’, for such delusion to exist? I assume, without irony, that he has the best interests of our country in mind, but he is terribly, absurdly, stupidly, wrong. [Cheney on the other hand is corruption itself]

Philosophy is to attempt to think clearly about the world. It is only worthwhile in reference to its subject.

Thanks Z. My only quibble is personal. At the most abstract level I’m not particularly interested in human welfare, but only in levels of distinction, at categories of thought and response; and that only for purposes of my own understanding and amusement. People may prefer rational actor theory, but I think it is unsubtle and simplistic. Like libertarianism it’s based on an association of isolated individuals, and as such is both philosophically and esthetically shallow, the opposite of the collective knowledge we call ‘history’ or ‘literature’ or the languages we call ‘French’ or ‘Italian.’ But in the last century collectivity as an ideal became so vulgarized that now we have this crap in response.
The more things change…

I’ll write about this on the blog but Daniel Mendelsohn has a wonderful piece in the NY Review, on Peter Singer’s book about his grandfather. He makes my arguments better than I can.
Z, you’ll understand what I mean.

I’m out.

41

Zizka 11.02.03 at 4:59 pm

Incidentally, Peter Singer is for me a prime example of the problem with analytic philosophy. As far as I can tell (I do NOT study the guy) his procedure was a.) pick a topic (animal rights) b.) produce competent, virtuoso arguments in favor of animal rights. A.) is the “collection”, b.) is the “analysis” (I forget what Socrates/Plato calls it.)

To me Singer did the “collection” very casually and badly; to me animal rights is not the place for an ethicist to start. (In the present historical environment, “rearranging the deck chairs on the Titanic” is too weak a metaphor). Furthermore, he seems to have assumed anti-pain utilitarianism right off at the beginning. From there, everything he says probably follows, but to me the most important job (collection, setting the problem) was very badly done. (It’s as if he wandered down to a coffee shop and found out what the kids were talking about).

42

ted's dad 11.02.03 at 7:19 pm

As long as I could listen to the dialogue of all the unfortunate souls with me in hell, I think I would stay there. I find the range of answers to this “opportunity in hell” to be intriguing. I would not want to miss one person’s analysis.

43

Michael Stastny 11.02.03 at 8:54 pm

Economists will never be confronted with such an offer since they are so evil that hell is afraid they’ll take over. Being allowed to escape hell immediately is the sure event.

44

Neel Krishnaswami 11.02.03 at 9:10 pm

Kieran: Yes, I am suggesting that it’s not arbitrary to require that utility functions do not contain singularities. This is because a) you can add the requirement without changing the structure of the person’s preference relation, and b) some variant of this requirement shows up all over the place in analysis.

I might be missing your main point, though: I don’t really understand what difference you mean when you distinguish between theoretical and practical rationality.

45

christian kraxner 11.03.03 at 3:10 am

You could also torture the devil. Would you like to toss 2 billion coins? So there is surely a time when he will let you go before doing that. Or will he cancel the deal? … To any given problem there are n possible non-mathematical solutions.

46

Ep 11.03.03 at 4:46 am

As I understand it, there is one ultimate and one penultimate being. They want us to behave in diametrically opposed ways, but the ultimate one will reward us for compliance with his wishes, whereas the penultimate one will punish us for compliance with his. Further, though it is solely the ultimate one who determines whether a soul is sent down, the penultimate one gladly accepts his decisions and relentlessly tortures every one of them, even though that is tantamount to slavishly doing the dirty work of his sworn enemy. On top of all this, the penultimate one makes the above-mentioned coin toss offer. If this is correct so far, I have a few questions:

1) Why won’t the ultimate one do his own dirty work?
2) Why does the penultimate one do it for him?
3) Why does the penultimate one punish compliance?
4) Why would the ultimate one reverse his decision based on a random event?
5) Assuming the penultimate one honors his offer, but the ultimate one will have none of it, if the soul wins the coin toss, where does he go?
6) Who made up this fairy tale?
7) Why do so many adults believe it? (coin toss excepted?)

48

Matt McIrvin 11.03.03 at 5:18 am

The problem of how mathematical expectations blow apart when confronted with infinite utilities in a theological context was already touched on by one of the first people to study probability systematically: it is, of course, at the heart of Pascal’s wager. Of course, he described it as a simple binary choice between (his version of) the Christian God or nothing, and when you admit yet other possibilities it becomes pretty hard to analyze in any sensible way.

49

Kieran Healy 11.03.03 at 6:25 am

Neel -

I don’t really understand what difference you mean when you distinguish between between theoretical and practical rationality.

I meant the contrast between an ideal theory of rationality on the one hand, and the rationality within the grasp of agents like ourselves in circumstances like ourselves on the other.

it’s not arbitrary to require that utility functions do not contain singularities. This is because a) you can add the requirement without changing the structure of the person’s preference relation, and b) some variant of this requirement shows up all over the place in analysis.

Having gone back and looked at Vann McGee’s paper, “An Airtight Dutch Book” (Analysis 59(4), 257-65), I find he has a very clear discussion of the point I was making. McGee knows that utilities can be bounded in the way you mean. But, he asks, while having an unbounded utility scale may be irrational “in the sense that having such a scale is incompatible with having a rational, comprehensive plan for making all your life’s choices, is it also stupid?”

McGee argues that it is not, and shows that there are cases where one would want an unbounded utility scale. They occur when “the benefits we hope to acquire by our actions are, or as far as we know might be, boundless.” The original example is just one such case. It has bite because we might want to decide what to do in the light of the possible existence of an afterlife:

Belief … comes in degrees. I do not believe there is an afterlife. I would go so far as to say that I am sure there is no afterlife. But am I certain with probability one that there is no afterlife? Surely not. If it were somehow assured to me that, for the price of one licorice jellybean I could guarantee that, if there is indeed an afterlife, my place in it would be one of boundless bliss, I would give up the bean. Even that very tiny degree of belief suffices to ensure that, when figuring utilities, the epistemically possible worlds in which there is an afterlife cannot be ignored, and that, in turn, is enough to stretch my personal utility scale out to infinity.

The example is meant to illustrate a limit to the pure theory of rationality. Rational agents ought to be immune to dutch-books. Agents with an unbounded utility scale can be sucked into dutch-book bets. Therefore rationality requires agents to have a bounded utility scale. But this means that the class of cases that decision theory can give us guidance about is restricted. Thus, it’s not true to say that bounding the utility scale makes no difference. That’s what the example is designed to show.

50

dsquared 11.03.03 at 7:03 am

Neel: Non-satiation is one of the von Neumann/Morgenstern axioms, so I’d guess it’s probably quite important for deriving results in standard expected utility theory. Prof. Quiggin would know whether it’s possible to drop the axiom and still get a workable theory.

51

aapie 11.03.03 at 7:20 am

i toss right now, one day more in the Valley of the Thames is not possible…shit no head eih

52

Kieran Healy 11.03.03 at 10:51 am

Daniel -

McGee’s paper says on this point:

bq. The unhappy situation in which you are bound to sustain a net loss of utility can be avoided if your utility scale is bounded. This follows from the Lebesgue bounded convergence theorem (Halmos 1950: 11) …

So you can certainly get a workable theory with bounded utilities, and one that’s workable for a large class of decision problems. It’s even workable for the example at hand except that, as McGee argues with his jellybean story, it violates a desideratum of an ideal theory of rationality.

53

XR7 Power Drill 11.03.03 at 11:07 am

Without weaseling out of answering the intended question, it seems to me the rational thing to do is clear but complex.

Waiting infinitely long is obviously a stupid thing to do, as it’s what you’re so painstakingly avoiding.

Rather, you should wait finitely long. But how many days? Each day, statistically, is preferable to the previous one, weighed against infinity. So the number should be high. VERY high.

There are many infinities in this situation, but your personal abilities and variables do not appear to be among them. So, logically, there is SOME NUMBER that is the highest finite number of days you could wait in hell without risking resorting to a strategy that could keep you in hell forever.

If a person’s mind were a Turing machine, say, they could perform the busy beaver function (longest finitely running possible program), and opt for the coin flip after halting. Now, a person’s mind is much more complicated than, say, a twenty-state Turing machine. Indeed, they could actually simulate the behavior of one, assuming they can draw in the hypothetical dirt or whatever. But perhaps not a hundred-state Turing machine, or a thousand-state one. The charts would start to become too complex, and at some number of states fatal errors would be made, even assuming they had a table of the operations laid out initially.

Anyway, I’m kind of going on a tangent here. A person’s mind is very, very complicated, but it is finitely complicated. There is some Turing machine which would surpass it even in raw computing power. Thus the person’s ability to pick a HIGH BUT FINITE NUMBER is, in fact, constrained. It is not constrained cleanly, however. A person can devise a variety of systems they can use for reaching a high but finite number, but as the systems get more complex, the possibility creeps in that something will go wrong and the system will never reach an end, and the person will “stay in hell” for infinity.

Intuitively, it seems as though this is not so. Surely the person would EVENTUALLY come to suspect that the system was not working. This is true, but at THAT point what can they do? If they adopt a different but equally complex system, they run the risk of switching systems forever. Now, they CAN make sure always to choose a SIMPLER system than the one they are using that they have come to doubt the finality of, BUT – at a certain level of complexity, it becomes impossible to JUDGE the comparative complexity of systems. They would then have to either stick with eternal faith in their incomparably high system, or switch to a system below the hazy ceiling of incomparability. Since the first option has some finite chance of keeping them in hell forever, it must be weighed against the finite chance of staying in hell forever they would get from the coin flip at the end of the highest comparable system. As the complexity of the incomparable system gets higher, so does its chance of malfunction, and thereby its chance of causing you to spend forever in hell. At some point this chance must exceed the chance given in the coin flip at the end of the highest comparable system.

At this point it becomes more than obvious that the system for PICKING systems is itself subject to possible errors. And as each system for picking systems adds another layer of complexity, eventually the complexity must be managed by a metasystem, blah, blah, Gödel; anyone still reading this will likely have considered such things years ago.

So, there is some largest number a person can reach without running the risk of eternal damnation. But is this really what we want? Perhaps an eternity accidentally spent pursuing a never-ending system would be qualitatively better than one spent without hope. Given THIS assumption, the strategy to be used involves a different weighing of the factors. How different? It is difficult to compute, because instead of landing between two infinities, one of heaven and one of hell, we have THREE options before us. Heaven, hell with hope, and hell without hope. I’m too lazy to figure this one out, so somebody else do it.

Something that may be in the mind of the reader is that it seems impossible someone could ACCIDENTALLY spend eternity in hell in the manner described. That, on a given day during the infinitude of days spent pursuing an endless system, a person has SOME CHANCE, however small, of giving up and taking the coin flip, probability be damned. Or, if not overhauling their most important decision in a single day, perhaps there is SOME CHANCE that a sequence of thoughts would begin in the person’s mind the results of which would end with that action. Given eternity, it seems quantum uncertainty, if nothing else, would make this a possibility, however slight.

But wait, perhaps quantum uncertainty and such things do not operate in hell, as they are not thematic. Hell is about suffering and eternity and little fires everywhere and demons poking people with sticks. Hell clearly has no SCIENTIFIC basis. But then, without science underlying human cognition, what is there? When we consider the mind as a computer as we have done here, we eventually run into the issue of free will, or the absence thereof. In the end, when eternities are involved and things are pushed to their mathematical limits, we can no longer speak of what one SHOULD do but what one WILL do. A person in a situation such as this one is just a system ruled by the forces that operate it, and through the impenetrable march of physical logic they must reach their end, be it good or bad or a mixture of the two made of probability.

And now, back to the teletubbies!

54

Matt 11.03.03 at 12:56 pm

Er, you’re all assuming that Satan would use a fair coin.

55

Matthew 11.03.03 at 1:34 pm

Is this the situation the Tory Party thought it was in when it finally decided to ditch Iain Duncan Smith?

56

Michael Sigmund 11.03.03 at 1:52 pm

From a game-theoretic view it’s easy to solve the problem after specifying the discount factor and one’s utility function.
The extensive form is clear.
That’s why I offered you this chance- hehe

57

dsquared 11.03.03 at 2:11 pm

Actually a discount rate helps much less than you’d think; the present value of an infinite series doesn’t change as you go into the future.

I have an intuition that this problem maps onto the Arrow Impossibility Theorem; if you divide yourself into an infinite number of future selves and imagine that you’re having a vote about the distribution of a particular bad among those agents, you’re not going to come up with a satisfactory decision procedure.

58

Erik 11.03.03 at 2:46 pm

hmmm, why wouldn’t the problem be easily tractable assuming that you know your discount factor and the utility discrepancy between spending a day in hell or not? It strikes me that the problem has the property (for standard utility functions) that if I prefer taking the coin toss on day T over day T+1, I will necessarily prefer it over T+2, etcetera. This reduces it to a rather straightforward decision-making problem. Dealing with the infinity problem only arises if you don’t discount at all. (In dealing with the devil, discounting the future heavily seems the prudent option).

59

Chris Bertram 11.03.03 at 3:29 pm

I’m far less mathematically gifted than many of the commenters to this thread, so I state the following warily!

Those who are applying a discount rate seem to be moved by the following thought: that even an eternity in hell could be associated with a finite amount of expected (dis)utility. Measuring time on the x axis and pain on the y axis and discounting the future at some rate gives us a curve which asymptotically approaches 0, and the area bounded by the curve and the x axis is in principle finite and calculable.

But that assumes that you can motivate discounting! Why would you reduce the expected disvalue of future experience in hell since, ex hypothesi, that pain is just as certain at time t+10000000 as it is at t (so long as you haven’t escaped by successfully taking the gamble)?

As for what I would do. I dunno … wait 20 days then gamble? But I’m not a fully rational agent!

60

Neel Krishnaswami 11.03.03 at 4:20 pm

dsquared: non-satiation and bounded utilities are compatible. Consider a utility function of the form U(y) = 1 – 1/y where y ranges over the positive reals. We get non-satiation from the fact that if y’ > y, then U(y’) > U(y). However, U(y) is bounded from above by 1.
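A quick numerical check of this bounded-but-insatiable utility function (a minimal Python sketch, purely illustrative):

```python
def u(y):
    # U(y) = 1 - 1/y for y > 0 is strictly increasing (non-satiation)
    # yet bounded above by 1.
    return 1 - 1 / y

print(u(10) > u(9))   # more is always better: prints True
print(u(10**12) < 1)  # but utility never reaches 1: prints True
```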

kieran: I’ll go look up McGee’s paper — it sounds fascinating.

However, the argument in the fragment you quote stops working when you try to extend it to multiple infinite alternatives. For example, suppose that you grant a nonzero probability of truth to both the possibilities that fundamentalist Christians and fundamentalist Muslims are correct. If the Christians are correct, then if you accept Jesus as your savior then you go to Heaven when you die, and go to Hell otherwise. Likewise, if you accept the pillars of Islam, then you go to Heaven, and otherwise you go to Hell.

Now, if we grant Heaven and Hell infinite positive and negative utilities, then regardless of the relative probabilities you assign to the one or the other being correct, a rational agent can’t decide between the two. Even if the agent believes there’s a 99% chance that Christianity is right and a 1% chance that Islam is right, he can’t mathematically choose to be a Christian! This is clearly incompatible with what we think of as rationality.

61

Brian Weatherson 11.03.03 at 4:46 pm

These puzzles are fun, and I may post some more for people’s amusement later. But let me just add one more criticism of the use of discounting to solve the puzzle. If future discounting is to work, it must make it the case that the net disutility of being in hell forever is finite. Otherwise Daniel’s point about the discount value of an infinite series being infinite becomes overwhelming. But if that’s to be the case, then there must be some day n such that your present disutility for being in hell from day n to eternity is less than your disutility for spending the next minute in hell. (I hope I’ve got the maths right here – otherwise this will be embarrassing.) And that’s simply absurd. I know it’s all well and good to prefer future pain to present pain, but to prefer an infinite amount of future pain to a minute of present pain is frankly absurd.

Note that if we suppose that all rational agents have bounded utilities, as has been suggested, it becomes a requirement of rationality that such an n exists.

(I’m sort of assuming here that the disutility of being in hell over an interval can be calculated by partitioning the interval into countably many subintervals and summing the disutility of being in hell over each of those subintervals. If that’s not being supposed, maybe some of these results can be averted.)
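Under constant geometric discounting, such an n can be exhibited directly. A Python sketch (the per-day disutility x and discount factor beta are arbitrary illustrative values):

```python
def tail_disutility(x, beta, n):
    # Present disutility of hell from day n to eternity, with constant
    # per-day disutility x > 0 discounted at factor beta:
    # sum over t >= n of beta**t * x  =  beta**n * x / (1 - beta)
    return beta ** n * x / (1 - beta)

def first_absurd_day(x, beta):
    # Smallest day n whose entire infinite tail of suffering is worth
    # less, today, than one minute (1/1440 of a day) of present pain.
    minute = x / 1440
    n = 0
    while tail_disutility(x, beta, n) >= minute:
        n += 1
    return n
```

With x = 1 and beta = 0.9 this returns day 91: from then on, an eternity in hell counts for less than the next sixty seconds, which is exactly the consequence found absurd above.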

62

Erik 11.03.03 at 6:23 pm

I am not sure why the idea that agents have bounded utility is a stronger requirement on rationality than that agents understand infinity. Otherwise, it involves answering an age-old question: are all infinities the same, or are some more infinite than others? Perhaps Cantor has something to say about that. (http://www.newyorker.com/critics/books/ (TO INFINITY AND BEYOND))

63

af 11.03.03 at 6:57 pm

Too late. I suspect that we (or me, at least) are/is already there. However, given the problem as stated, I would just hold off the decision indefinitely, since it says one is facing the prospect of torture, not that torture has already begun. Losing the coin flip is the trigger mechanism for torture to begin, so by delaying the decision, one may evade torture for an infinitely long time.

64

PeterV 11.03.03 at 8:47 pm

Day 5. Hope will make the torture manageable for the first days, but anything below a 1% increase will not be emotionally impactful enough to suffer another day of torture for.

65

God's Advocate 11.03.03 at 9:01 pm

Considering that making deals with the devil is what got you to hell in the first place, it would seem that abandoning purely man-made rational systems of analytics might be in order, and a little bit of repentance at hand. If the critical effect of your situation and its revelatory inspirations lead you to this morally rational conclusion, then you should abandon hope in the devil and pray for God’s deliverance.

In short, the answer is to ignore the devil and flip your own coin.

If you want to ponder the futile mathematics of this further, consider refactoring your question to include the velocity of infinite suffering in any given instant, and its effects on the virtues of waiting.

Amen.

66

Stewart Adams 11.03.03 at 9:29 pm

christian kraxner, I found your comments very interesting.. but only at first. Now, keep in mind that I’m not arguing either for or against, simply as a neutral party… from your comments:

“1)Why won’t the ultimate one do his own dirty work?
2)Why does the penultimate one do it for him?
3)Why does the penultimate one punish compliance?
4)Why would the ultimate one reverse his decision based on a random event?
5)Assuming the penultimate one honors his offer, but the ultimate one will have none of it, if the soul wins the coin toss, where does he go?
6)Who made up this fairy tale?
7)Why do so many adults believe it? (coin toss excepted?)”

The answer to a believer is quite simple.

Because Satan hates all of mankind, and wishes to torment them all. It is only the few who are ‘protected’ by God who move on to Heaven.

The basic belief is that Satan is an elitist, and became insanely jealous that God loved Man before even his own angels.

Satan wishes to destroy/torment mankind… but he cannot do so overtly while man is still in his mortal state. To do so would only cement Man’s belief and love in God, and ensure Man eternity in bliss.

It’s not that Satan is doing God’s dirty work… he’s simply invented a system to allow himself to do as much of his own “evil” work as possible.

(Again, just to reinforce the statement, I’m not arguing for or against any belief structure here. Personally I don’t even believe in the traditional ideas of “good” and “evil”, so “God vs. Satan” can come across as comical quite often).

67

dsquared 11.03.03 at 10:09 pm

Otherwise Daniel’s point about the discount value of an infinite series being infinite becomes overwhelming.

Bit of a typo there, BW … the present value of an infinite series is equal to 1/discount rate, which is finite. The point is that, unlike a finite series of payments, it doesn’t reduce over time. There are actually UK government bonds (War Loan Consolidation Stock) which are meant to pay a set amount every quarter forever and never redeem principal, which is how I know this.
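The perpetuity point can be checked with the standard present-value formulas (Python; unit payment per period, rate chosen arbitrarily for illustration):

```python
def pv_perpetuity(r):
    # Value of 1 per period forever, one period before the next payment:
    # sum over t >= 1 of (1 + r)**-t  =  1/r
    return 1 / r

def pv_remaining(r, n):
    # Value of the n payments still owed on a finite n-period annuity.
    return (1 - (1 + r) ** -n) / r
```

However far into its life you value it, the perpetuity’s remaining stream is still worth 1/r; a finite annuity’s remaining value, by contrast, amortises toward zero as the payments are used up.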

(Speaking of which, I’m working on a few comments on your Economica paper on Keynes & probability …)

68

Nadalie 11.04.03 at 12:29 am

It seems to me that an infinity in hell, knowing I could probably escape at any time, would be somewhat easier to bear than that same infinity with the knowledge that I just hadn’t waited long enough. Of course, to other people it might be the reverse. Very interesting problem!

(Please pardon me if this has been said before — all the comments were so interesting, I tried to read them all, but I did skim quite a bit.)

69

Hagfish 11.04.03 at 1:10 am

Hmm. I don’t know about this eternal torment lark. So it would probably have to be today, tomorrow, or not at all. Probably depending on whether the torment stops between today and tomorrow when the devil offers me my second chance.

70

matt reading @ crescat sententia 11.04.03 at 1:27 am

Considering *all* the conditions, you probably accept the offer immediately.

71

Jeremy Osner 11.04.03 at 3:15 am

Very nicely stated, Stewart — I have never been able to verbalize the nature of that arrangement properly, even while sort-of understanding it at a gut level.

72

Anarch 11.04.03 at 6:35 am

d^2: the present value of an infinite series is equal to 1/discount rate, which is finite.

This has been irking me for a while, but aren’t you and Brian implicitly assuming a constant discount rate, i.e. that the resultant series is geometric? I’m not at all convinced that model’s applicable here.

[Also, I don't think what you wrote -- "the present value of an infinite series" -- is exactly what you meant, unless that's a kind of inaccurate shorthand employed by economists. "The present value of an infinite sequence of payments" would be more accurate, I think.]

Incidentally, you can twist this problem on its head to get some idea of the magnitude of infinity (in re my earlier comment): No matter how long you’ve waited, no matter how great the torment you’ve already suffered, no matter how near-certain your chances of winning… the disutility incurred by losing is so unbearably, infinitely great that you simply cannot afford to take the bet today. An intuitive rephrasing of the proof by induction, in a sense.

Note: The previous remark requires that the amount of pain experienced on any day be finite, but the utility of leaving hell is infinite. One could presumably rework this example to utilize different cardinalities, but why?

73

Brent 11.04.03 at 7:06 am

This is easy . . .

After a certain number of flips (say, a million or a billion, or maybe a trillion), assuming that they all come up tails so that the devil has to keep flipping-flipping-flipping, the devil’s preferred flipping thumb will be worn to a bloody stump so that he can’t flip with it any more.

He’ll have to switch to the other thumb. Another trillion flips (+/-) and so long to that thumb, too–worn right off.

He’ll switch to another appendage.

After another so many flips (a quadrillion, quintillion, septillion, octillion? Who knows how well the devil is actually constructed . . . ) all of the devil’s various flipping appendages (fingers, toes, elbows, nose, tongue, ears, etc.) will be worn off and the devil won’t have ANY way to flip the coin any longer.

In fact, at this point the devil will have been worn down to a small but very bloody nubbins which will be quite incapable of doing anything, from flipping a coin to stoking the fires of Hell.

This is the number of flips you wait for, plus one.

QED.

At this point in the flipping contest, assuming you had the extreme misfortune to actually reach it, you would, of course, take a moment to inscribe an extremely clever new proof of Gödel’s Incompleteness Theorem (which, among MANY other similarly useful things, you have taken the time to work out whilst the devil was doing all that flipping) on the bloody nubbins of the devil, using the Occam’s razor-sharp edge of any one of the worn-out coins which are littering the landscape of Hell as far as the eye can see. Then you would drop the worn out and now doubly-useless nubbins of the devil into the nearest trash receptacle and walk away.

You see, there’s more than one way to skin Schroedinger’s Cat . . .

–Brent
bhugh@mwsc.edu

[Just in case you’re wondering what my *serious* point is, or if I even have one, it’s this: we think of infinity as being a really big number like a hundred or a thousand only a little bit bigger.

Of course it’s nothing like that at all.

Then we make a little story throwing around a lot of infinities of various sorts, and stand around all agape because it all seems so “paradoxical”–that is, contrary to our ordinary everyday experience.

Also, there are lots of different ways to think about infinity, so in ordinary “philosophical” discourse it is easy to switch back and forth between all of them without bothering to inform the reader, and thus confuse the issue mightily while still convincing yourself that you “know” exactly what you’re talking about.

Inject even the slightest bit of reality back into the story and the paradox resolves, usually very easily. For example, the St. Petersburg Paradox (which is very, very similar to the “Flipping Devil” paradox mentioned here) seems like a real knuckle-buster until you inject a little reality by assuming that, instead of the postulated INFINITE potential payoff – infinite meaning, in this context, that the payoff has absolutely no upper bound and can be indefinitely large – you assume some reasonable amount as the highest possible payoff.

For instance, assume the maximum possible payoff is the total monetary value of the entire universe. This is a large but finite number. Work through the St. Petersburg Paradox with this upper bound and you can solve it in 5 minutes.

If you don’t like this relatively measly upper bound for your payoff, how about taking as your maximum the largest number that could be represented on a computer, assuming you have a maximal efficiency quantum computer made up of all the atoms in the entire universe.

Again, with this condition you can “solve” the St. Petersburg Paradox in about five minutes.

And as large as they are, neither the monetary value of the entire universe nor the largest number that could be represented on the Entire Universe Quantum Computer is even CLOSE to infinity.
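The capped-payoff version of the St. Petersburg game really does compute in a few lines (Python; the cap is whatever finite bound you fancy):

```python
def capped_st_petersburg_ev(cap):
    # Expected payoff when the prize for the game ending on toss k,
    # normally 2**k, is capped at `cap`.
    ev, k = 0.0, 1
    while 2 ** k <= cap:
        ev += 1.0              # probability 2**-k of winning 2**k
        k += 1
    ev += cap / 2 ** (k - 1)   # every remaining outcome pays the cap
    return ev

print(capped_st_petersburg_ev(2 ** 300))  # prints 301.0
```

Even with a cap wildly beyond any plausible monetary value of the universe, the fair entry price is a modest 301 dollars, nowhere near infinity.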

See this web site for a discussion of the St. Petersburg Paradox:

http://plato.stanford.edu/entries/paradox-stpetersburg/

Though, note that this very page is shot through with just the kind of fuzzy thinking about infinities that I have just been discussing. For instance, the author ridicules the idea that “people [could] have a finite number of desires”. Well, the alternative is that [some] people have an infinite number of desires.

Considering that our brains themselves are finite, how would a person even think about this “infinite number of desires”? And since the earth, and even the universe itself, is finite, what/where exactly is even the potential fulfillment of all these desires going to come from?

A googol or even a googolplex isn’t a bit closer to *infinity* than 1000 is . . . ]

74

dsquared 11.04.03 at 7:09 am

Anarch is exactly right in his correction of my sloppy sentence.

75

taoistmage 11.04.03 at 7:50 am

Math is all well and good for solving puzzles,
but given this puzzle involves certain metaphysical
intangibles, it might be a good idea to approach it
from an Epistemological basis.

Might I point out – the basic premises are: The coin is fair.
The devil’s offer is genuine. You can choose a method which can (how the odds
add up aside) eventually get you out of Hell (and without coercion in either case – you’re not forced to flip or not flip the coin).

It is generally accepted that the devil is a deceiver.
God is the only one that can damn you to Hell.

From those givens and the general deceptive practices the devil is known for,
we can reason that:

1. There is, in fact, a way out of Hell.
And
2. The devil is not the one keeping you there.
But
3. The devil is offering a wager which could keep you there for an
indefinitely long period of time.

Ergo – you should just walk out immediately
without subjecting yourself to any torture at all.
Reject the entire proposition as a fallacy of distraction
(giving you two choices when there is a third)

Or – for you more mystically minded
a subset of the definist fallacy:
a metaphysical fallacy.

In this case the devil attempts to identify
goodness (escaping Hell) with a metaphysical property – the will of God
(as embodied in the coin – more absurdism – given of course the inferred
Judeo-Christian dichotomic structure here – theological arguments notwithstanding).

One could make the case that the poser of the question can be
identified as the devil since the question poser
seeks to trap one’s mind into solving the puzzle which is
simply an attempt to trap one in Hell attempting to solve the puzzle.

But metathinking can occasionally be misconstrued through accidents in
language as another fallacy: avoiding the question (or at least the
honest implication in the question).

Similar to saying: “I can tell you the score of the baseball game tomorrow
before the game is played.”

Betting ensues…

And you say: “Zero to zero.”

But in this case the problem with gambling with the devil is that the conclusion that it is necessary to apply math to the problem
is derived from premises that presuppose the conclusion.

Normally, the point of good reasoning is to start out at one place and end up somewhere new,
namely having reached the goal of increasing the degree of reasonable belief in the conclusion.
The point is to make progress, a definite goal, but in cases of begging the question there is no progress.

Many people intuit this and make innocuous comments as a result.

Ergo – perform the Heidegger spring (Ursprung): bail on the devil and tell him to stick his
coin back up his infinite ass where it came from.

76

Jeremy Osner` 11.04.03 at 4:11 pm

Here is another way of understanding the infinite-ness of torment in hell. Each day* that passes increases the amount of torment that you have suffered in the course of your damnation, but the amount of torment that you can expect to suffer in the future remains constant.

*I take issue with the poster above who suggested that “day” is meaningless in hell because it is an eternal condition, so time does not pass — the never-ending passage of time is an integral part of my own vision of damnation. Otherwise what sense does it make to say, “abandon hope, all who enter here”? — if time is not passing then hope is not at issue.

77

chrismn 11.04.03 at 5:25 pm

This one is not that hard. Let x be the payoff to a day in hell, y the payoff to a day not in hell, and beta the discount parameter, so that forever in hell gives the payoff x/(1-beta). I assume x and y are finite or else this makes no sense. For some probability of staying in hell forever, p, one will be indifferent between taking the flip and waiting until tomorrow. For p greater than that, one will strictly prefer to wait; for p less, one will strictly prefer to flip. This implies p x/(1-beta) + (1-p) y/(1-beta) = x + beta [ (p/2) x/(1-beta) + (1-p/2) y/(1-beta) ]
This solves for p = 2(1-beta)/(2-beta).
Somewhat interesting is that the actual values of x and y drop out. As patience increases, one is willing to wait longer, but never forever.
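The indifference condition above is easy to verify numerically (a Python sketch; x, y and beta are arbitrary, since the per-day payoffs drop out):

```python
def indifference_p(beta):
    # The cutoff derived above: flip today iff the probability p of
    # losing (hell forever) has fallen to 2(1-beta)/(2-beta) or below.
    return 2 * (1 - beta) / (2 - beta)

def flip_today(p, x, y, beta):
    # Expected discounted payoff of flipping now: hell forever with
    # probability p, heaven forever with probability 1 - p.
    return (p * x + (1 - p) * y) / (1 - beta)

def wait_a_day(p, x, y, beta):
    # One more day in hell (payoff x), then flip tomorrow, when the
    # losing probability has halved.
    return x + beta * flip_today(p / 2, x, y, beta)
```

At p = indifference_p(beta) the two expected payoffs coincide whatever x and y are; for larger p you wait, for smaller p you flip.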

78

Jeremy Osner` 11.04.03 at 5:57 pm

Chris’s comment raises an interesting point — I had been assuming before that damnation is an infinite number of moments of extreme torment but that the torment itself is quantifiable — but it makes much more sense (insofar as such things make sense) to posit that each moment of pain is itself infinite, that the pain (or “disutility” as some here would have it:-) is not quantifiable.

79

Meursalt 11.04.03 at 8:13 pm

I’ve heard a similar story – only in this deal from the devil it WASN’T just ONE head (or tails): the person (you) has to choose EACH TIME whether it’s heads or tails.

wait a week? you’ll get 7 chances…

7 independent chances.

Thus – no matter how long you wait – you’re still faced with a 50/50 chance.

80

meursalt 11.04.03 at 9:13 pm

A revision to my earlier comment.

Everyday the “devil” is going to come and entice you with a better offer the “next day” and next thing you know, you’re in hell forever.

Do it now. Do it today.

There’s always a reason to do it tomorrow.

81

pw 11.04.03 at 9:34 pm

For those arguing about things like discount rates for future suffering, this problem is pretty much the same as the Supernova Problem for currency values. In this neck of the woods it should be called the Red Giant Problem, but what the heck.

If you knew with certainty that the dollars you were about to receive for doing some job would be worth half their value tomorrow, you’d want twice as many of them as you would if you knew they would be worth the same tomorrow as today. Simple stuff: current value of monetary instruments depends on the expected future value of the money.

Well, some day a few billion years from now, all the dollars on earth will be worth exactly nothing, because they (along with mountains and oceans and everything else but a small refractory core) will be boiled away by the aging sun. The day before that happens, dollars will also be worthless because they’re about to be vaporized, so who wants them. The day before that… and so on, with the result that no possible discount rate can explain the fact that dollars and yen and ecu are worth something today.

Of course they are, and for good reason, but you won’t find it in the simplified math.

82

chrismn 11.05.03 at 4:03 am

The choice here is over a menu of lotteries over infinite lives. The first item on the menu is a 50/50 chance of receiving (Hell,Hell,Hell,…) and (Heaven, Heaven, Heaven, …). This is the lottery you get if you flip the first day. The next item is a 25/75 lottery over the sequences (Hell,Hell,…) and (Hell,Heaven,Heaven,…). This is the lottery if you flip the second day. The third is a 12.5/87.5 lottery over (Hell,Hell,…) and (Hell,Hell,Heaven,…) and so on. There are a countably infinite number of such choices.

I wrote earlier that if people have a constant rate of discount beta and heaven and hell give finite per day payoffs, then there exists a probability p = 2(1-beta)/(2-beta) such that you are indifferent between flipping or not. For probabilities greater than p you wait, less than you flip.

What does it mean to have a constant rate of discount and finite per-period payoffs of heaven and hell? Basically, rationality only imposes that people can order the lotteries above in a transitive way. If I prefer lottery A to lottery B and lottery B to lottery C, then I prefer lottery A to C. One way to do this is to map every certain sequence to a number and then have preferences over lotteries determined by the expected value of this number.

This isn’t the only way to do it, but you can run into real problems otherwise. Suppose for instance that if lottery A has a lower probability than lottery B of staying in Hell forever, then I prefer lottery A. These preferences are transitive. Then which lottery should I choose? Why, the best one, of course!
That is the basis of the theory of rational choice. If faced with a menu of choices, choose the best one. If there are a finite number of choices and you can rank them, (you don’t need to put numbers on them), then there is always at least one best one. (There could be ties, in which case there could be more than one best choice). But if there are an infinite number of items on the menu, it is possible there is no best choice.

Suppose I am facing a choice of choosing whatever counting number I want and I like higher numbers better than lower numbers. Then there is no best choice. That is the same problem here. If one wants to minimize the probability of spending forever in hell, there is no best choice. But if our model of choice is we choose the best thing on the menu, and there is no best choice, then the problem is simply ill defined.
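The menu of lotteries is easy to write down, and the missing best item is visible at once (Python, illustrative):

```python
def p_hell_forever(day):
    # Flip on day n (n = 1, 2, ...): you stay in hell only if all n
    # tosses come up tails, i.e. with probability 2**-n.
    return 0.5 ** day

# Every later day is strictly better than the one before it ...
assert all(p_hell_forever(n + 1) < p_hell_forever(n) for n in range(1, 60))
# ... so no day minimises the losing probability: the menu has no best
# item, even though the infimum of the probabilities is 0.
```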

83

Paul L 11.05.03 at 4:32 pm

This might have been alluded to in earlier comments about ‘disutility’. If the torment is experienced as infinite before the devil comes back the second day, the added ‘benefit’ of waiting another day is moot: you have already experienced eternal torment. The devil’s trick, then, seems to be to negate an increasing finite probability (gained by waiting days) with an infinity (eternal torment).



86

Bill 11.06.03 at 8:49 am

(sorry for multiples; the symbol for “less than” messes up HTML real good :-)

When you have “infinitely bad” prospects in the mix, utility theory breaks down. In other words, there is no such thing as “infinite utility”; it is (literally) a contradiction in terms.

Proof:

Let’s look at the following prospects:

A – win $1000
B – win $100
C – “infinitely bad prospect” like eternity in Hell.

One of the axioms of utility theory is something called “continuity” : it means that if A>B>C (true in my example) then there is some probability “p” between zero and 1 where you are indifferent between

- getting B for sure and
– getting a lottery where you have a p chance at A and a (1-p) chance at C.

In my example, if p=1, then you prefer the lottery. If p is less than 1, then you prefer B for sure. You can see the discontinuous “jump”: there is no p where you are indifferent.

This may seem trivial, but it is in violation of an axiom of utility theory. Therefore, you shouldn’t use the phrase “infinite utility”; to define utility, you can’t have discontinuities like that.
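The jump can be exhibited directly. A small sketch (mine) encodes the preference implicit in Bill's example — first avoid any chance of C, then maximize expected money — and shows the lottery loses to B-for-sure at every p below 1 but wins at p = 1:

```python
# Preferences assumed for illustration: avoid C (eternal Hell) first; only if
# two prospects are equally safe compare their expected money.
A, B = 1000.0, 100.0  # dollar prizes from Bill's example

def prefers_lottery_over_B(p):
    """True if the lottery (p chance of A, 1-p chance of C) beats B for sure."""
    chance_of_C = 1.0 - p   # B for sure carries zero chance of C
    if chance_of_C > 0.0:
        return False        # any chance of C at all loses outright
    return p * A > B        # equally safe: compare expected money

for p in (0.5, 0.9, 0.999, 0.999999):
    assert not prefers_lottery_over_B(p)  # B preferred for every p < 1
assert prefers_lottery_over_B(1.0)        # the preference jumps at p = 1
```

There is no p of indifference anywhere in between, which is exactly the failure of the continuity axiom.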

87

chrismn 11.06.03 at 12:13 pm

Bill writes:

Let’s look at the following prospects:

A – win $1000
B – win $100
C – “infinitely bad prospect” like eternity in Hell.

One of the axioms of utility theory is something called “continuity” : it means that if A>B>C (true in my example) then there is some probability “p” between zero and 1 where you are indifferent between – getting B for sure and
– getting a lottery where you have a p chance at A and a (1-p) chance at C.

In my example, if p=1, then you prefer the lottery. If p is less than 1, then you prefer B for sure. You can see the discontinuous jump: there is no p where you are indifferent.

(End of quote).

Bill is right that these preferences violate the utility theory axioms. On the other hand, it’s the utility axioms that are reasonable (IMHO), not this example. One way to put this: if you are willing to accept no probability of C, no matter how small, in order to get an extra $900, that is a different way of saying that all that really matters to you is not getting C. If you are indifferent between A and B and prefer both to C, then all the axioms are back, and these preferences can be represented by a utility function U(A) = U(B) = 1, U(C) = 0. Lotteries are then ordered by their probability of avoiding C.

In my example with no discounting above, you are indifferent between any sequences which eventually get you to heaven and prefer these to the sequence which has you in hell forever. This again can be represented by a U(eventually get to heaven) = 1, U(always stay in hell) = 0 utility function.

The problem with this example of preferences is not finiteness of payoffs (or not), it’s that there are an infinite number of choices and no best choice.

88

bill 11.07.03 at 2:12 am

“In my example with no discounting above, you are indifferent between any sequences which eventually get you to heaven and prefer these to the sequence which has you in hell forever. This again can be represented by a U(eventually get to heaven) = 1, U(always stay in hell) = 0 utility function.”

I think I’ve read all the comments carefully; I don’t see the argument that this utility function is correct.

If the devil says “I give you two choices: Hell for a day, then Heaven forever; or Hell for two days, then Heaven forever,” I don’t see why I’m necessarily indifferent.

89

chrismn 11.07.03 at 3:38 am

Bill,

I wasn’t clear. Your A, B, C preferences, where A is preferred to B and B to C, but C is so bad that the person will not accept any probability of C, no matter how small, in order to get an extra $900, are perfectly valid preferences. They are transitive and so forth. They do violate one of the expected utility axioms, as you point out. In fact, they are what are called “lexicographic preferences,” as in a lexicon (or dictionary): all that matters in alphabetical ordering is the first letter, unless there is a tie, and then the second letter matters, and so on. I was arguing that, in my humble opinion, we shouldn’t like lexicographic preferences. This is a well-known example which violates the axiom you point out. I was simply arguing that, to me, they don’t ring true.

Again, my argument is that if you aren’t willing to accept any probability of C, no matter how small, for an extra $900, then what you “really” care about is avoiding C. But if all you care about is avoiding C, then you are basically indifferent between A and B, and then the whole expected-utility toolbox can be used again. But this is a taste issue. Nothing you said is wrong.

90

chrismn 11.07.03 at 3:45 am

I should have added –

There is also nothing wrong with assuming that one is indifferent between all paths that eventually get to heaven. There is also nothing wrong with assuming that, among paths which eventually get you to heaven, you prefer those which do it sooner. If heaven is infinite bliss, who’s to say it doesn’t make any finite stay in hell insignificant?

91

ads 11.12.03 at 3:15 am

The Devil, being a known liar, has hidden something from his victim. The only thing that the unfortunate soul burning in Hell can get from that offer is — hope. So, to get the only possible benefit from the situation, the doomed soul must forever wait until tomorrow before taking the coin toss.

92

Michael Hoke 12.03.03 at 1:01 am

Whoa, what a mess of confusion! Lots of people with conflicting assumptions arguing over whose conclusion is least wrong! Lots of people dodging the question through casual theology! Lots of confused thinking about what a utility function represents! Let’s try to sort through this mess.

We start by assuming that the problem was stated fairly; that is (assumption 1): the coin is fair, Satan will stick to the bargain, there is no small print to the deal, time spent in hell could be an infinite sequence of days, etc.

Chrismn almost described the problem correctly as a choice over lotteries, and dsquared almost pointed out the problem with that formulation (in his comments about infinite selves). The fact is that on any given day there are only two alternatives to choose from: flip now or wait. The ranking of those two options may change as time passes (which, evidently, is measured in days even in Hell), but presumably one has no way to commit oneself today to flip the coin a week from now, so how I feel today about the coin flip I’ll face next week is irrelevant a week from now (assumption 2: no commitment mechanisms are available). Then the answer is simple: a rational person will choose to flip the coin when he prefers the lottery the flip represents to not flipping on that day (question: how does he evaluate how he feels about either option? Who cares? He has to evaluate them to make a “rational” decision, and I am unwilling to stipulate a method just yet). Because the lotteries faced each day change, and because the person deciding may feel differently from day to day (hey, he spent another day in Hell – he can change his mind about how he feels about it), minimal definitions of rational choice place no restriction whatever on when the coin might be flipped. It might be the first day, it might be the seventeenth, it might be the googolth (sp?), it might be never.

Oh, that’s crazy, you say. Naturally, any sensible person will have preferences that may be represented by a time-separable expected utility function where the arguments are days in heaven and hell, or something similarly banal, you say. Surely, this is a paradox, you say. Fine. State the assumptions: days already spent in Hell have no bearing on the decision except insofar as they determine the number of flips (assumption 3: no information gain), the relative merits of each option (flip today or wait) may be evaluated by comparing some real-valued function of the odds of winning to some real-valued function of the number of days that have passed (assumption 4: trivial in the two-option world), and these functions are the same every day (assumption 5: time consistent preferences, a very strong assumption). A good deal more hocus-pocus and we get that the preferences may be represented by some nice time-separable (subjective?) expected-utility function (assumption 6 involves strong restrictions on the shapes of these functions, such that each option may be represented as a complex lottery over simple lotteries over states, such as being in heaven or hell for a single day, as well as a whole mess of stuff like the vN-M axioms or Savage’s postulates). Place restrictions on the relative values of a day in Heaven and Hell (assumption 7) and you can solve the problem definitively, up to a discount factor, or a patience parameter, or whatever. Whew.

See, the problem is that we’re collectively indecisive about what we require of a rational decision maker, and we’re indecisive about what we think of the torments of Hell, and we forget how restrictive the various utility “theories” are (BTW, they’re supposed to be restrictive – that’s how they allow us to solve ever more complicated problems: by focusing our attention on fewer options). Fact is, this group has disagreed on the merits of every assumption I made. Who’s surprised that there’s no single answer to which we’d all agree?

93

zombiefreak 02.29.04 at 12:22 am

Hey, I’m just your average joe, no math genius, etc., but based on the wording, the first day is the only day you can take the bet and have a chance of winning, so it is the only day it makes sense to take the deal: “the devil will toss a fair coin once and if it comes up heads you are free (but if tails then you face eternal torment with no possibility of reprieve).” If you wait consecutive days, the consecutive tosses do not increase the odds; all tosses after the first toss are irrelevant. If it’s tails, you suffer eternal damnation; if it’s heads, you’re free. One heads, you’re free; one tails, you’re screwed. Wait as many days as you want, your fate is still determined by the first toss. Read the fine print when dealing with the devil or you may get burned. Lol. Thanks. Just my opinion. Zombiefreak

94

Elliot Reed 03.01.04 at 5:42 am

I think the problem involves a conceptual mistake. Stepping back from the framework of utility theory, the only sense I can make of the idea of something being infinitely good is to see it as something any positive chance of which I would prefer to anything else, and similarly with infinite badness. Put this way, there can’t be both an infinite good and an infinite bad, for the same reason there can’t be both an immovable object and an irresistible force.

Comments on this entry are closed.