The point of paradox

by John Q on May 26, 2004

Suppose you have encountered Zeno’s Achilles paradox for the first time. Zeno offers a rigorous-looking proof that, having once given the tortoise a head start, Achilles can never overtake it. Would you regard this as[1]

1. A startling new discovery in athletics;
2. A demonstration of the transcendent capacity of the human spirit – although the laws of logic forbid it, Achilles does in fact catch and overtake the tortoise; or
3. A warning about how not to take limits?

In this case, I assume nearly all readers will go for option 3. But things aren’t always so easy. The Einstein-Podolsky-Rosen Paradox was supposed to be a type-3 paradox demonstrating the incompleteness of quantum mechanics. But on most modern views, it is really a type-1 paradox, predicting various highly counter-intuitive consequences of quantum mechanics that nonetheless turn out to be empirically valid.

Although I don’t accept that there are any good examples of type-2 paradoxes, plenty of others would offer this solution in relation to both Gödel’s theorem and Schrödinger’s cat.

With these three possibilities in mind, how should we think about paradoxes involving probability measures over infinite sets that are finitely, but not countably additive? The two-envelopes problem we’ve been discussing here falls into this class and so, with a little bit of tweaking, do St Petersburg and related paradoxes. I’ll leave this question hanging and offer my own answer in a later post.

fn1. This is an expanded version of a point made by my friend and occasional co-author, Peter Wakker.

{ 37 comments }

1

Dr. Weevil 05.26.04 at 11:53 am

Is the numbering of your choices as “1”, “1”, and “1” some kind of metaphilosophical joke? You can no more arrive at “3” or even “2” than Achilles can ever catch up with the tortoise.

2

Matt 05.26.04 at 12:07 pm

Actually, I remember my initial reaction to it– “Plainly wrong”.

3

Brian Weatherson 05.26.04 at 12:09 pm

Without wanting to sound too much like a broken record on this, the use of finite additivity in the statement of the two-envelope problem is entirely avoidable. If the amount in the envelopes is determined by a St Petersburg-style process, then the paradox can arise even though the relevant probability functions are all countably additive. The paradox is harder to state this way, but the essential point is that (as John Broome showed) it’s possible to come up with a way of putting money in the envelopes such that for all x, the conditional value of the second envelope conditional on the first envelope containing x is greater than x.
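Broome’s construction can be checked numerically. The sketch below (in Python; the particular distribution is one standard presentation of Broome’s example, not something Brian spells out here) puts amounts (2^k, 2^(k+1)) in the pair of envelopes with prior probability (1/3)(2/3)^k:

```python
from fractions import Fraction

def broome_expected_other(n):
    # Pair k holds amounts (2^k, 2^(k+1)) with prior (1/3)*(2/3)^k, k = 0, 1, 2, ...
    # If you observe 2^n with n >= 1, you are either in pair n (the other
    # envelope holds 2^(n+1)) or in pair n-1 (it holds 2^(n-1)).
    p_small = Fraction(2, 3) ** n        # relative prior weight of pair n
    p_large = Fraction(2, 3) ** (n - 1)  # relative prior weight of pair n-1
    total = p_small + p_large
    x = Fraction(2) ** n
    return (p_small / total) * 2 * x + (p_large / total) * (x / 2)
```

For every n >= 1 the conditional expectation of the other envelope works out to 11x/10 > x, which is exactly the property Brian describes, even though this prior is countably additive.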

That’s not to say there’s nothing problematic in these cases either, as the earlier discussions showed. For one thing they all assume unbounded utility functions. But they can’t be resisted by just insisting on countable additivity.

4

John Quiggin 05.26.04 at 12:26 pm

dr weevil, the numbering worked fine in the preview. Textile is fun, but tricky. I think it’s fixed now.

Brian, my reference to tweaks was a claim that paradoxes based on unbounded utility and countably infinite sets can be restated in terms of a failure of countable additivity (and, as in Broome, vice versa). I’ll think a bit more about whether I can justify this.

5

Andrew Chen 05.26.04 at 12:35 pm

An apropos example from physics would be renormalization in quantum electrodynamics, where infinities are canceled out in a way that is mathematically forbidden, but works anyway.

6

Keith 05.26.04 at 2:12 pm

The problem with type 2 is that it relies on that slipperiest of all phenomena, faith (in the human spirit, in this case), and is thus unverifiable. Types 1 and 3 are only moderately less thorny.

And personally, I think Schrodinger’s Cat will turn out to be a type 1, at some point in the near future.

7

zeno 05.26.04 at 3:34 pm

4

8

Bill Carone 05.26.04 at 3:34 pm

“Without wanting to sound too much like a broken record on this …”

Ditto. Once more with feeling …

“it’s possible to come up with a way of putting money in the envelopes such that for all x, the conditional value of the second envelope conditional on the first envelope containing x is greater than x.”

No it isn’t. For your St. Pete’s envelope example, model it as N flips of the coin for each envelope, let N increase without bound, and you will see that it is false, right?

Can you come up with a well-posed example where, when limits are taken correctly, a paradox still arises?

“But they can’t be resisted by just insisting on countable additivity.”

“Countable additivity” is exactly the same as “take the limits instead of plugging in infinity.”

“For one thing they all assume unbounded utility functions.”

This is true, and it is the essence of the St. Petersburg paradox; even if you use limits, you will find that, for any amount of money, I can create a finite St. Pete’s bet that you should want to buy for that amount of money. Since people would usually refuse to pay even $100 for a bet of any size, this seems paradoxical. It is interesting to start listing the, say, 1000 prizes you would have to be offered in the size-1000 St. Pete’s bet to be willing to pay $1000 for it. Most people start running out of ideas fairly soon.
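Bill’s finite construction can be sketched in Python (payoff convention assumed: the size-n bet pays $2^k with probability 2^-k for k = 1..n, and nothing on the residual outcome):

```python
def finite_st_pete_ev(n):
    # Truncated St. Petersburg bet of "size" n: pays $2^k with probability
    # 2^-k for k = 1..n; each term contributes exactly $1 to the expectation.
    return sum(2**k * 0.5**k for k in range(1, n + 1))
```

So the size-1000 bet has expected value exactly $1000, with a top prize of $2^1000 — which is why listing the 1000 prizes runs out of ideas fast.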

This is interesting enough on its own, without all the silly infinity paradoxes people attribute to it.

9

Matt Weiner 05.26.04 at 5:55 pm

But the upper bounds on our utility functions are a contingent feature of our psychology and natural world. Not very contingent, I admit, but contingent enough that you’re not entitled to cite them as a way of getting out of a mathematical problem. That’s why they’re sometimes phrased in terms of time spent in heaven or hell–that can go on without end.
Also, not to sound like a broken record, but if you’re going to use the same N in both limits you have to explain why. That limiting procedure would yield different results if you modelled it as N flips for one envelope and N+1 for the other, or vice versa.

10

Bill Carone 05.26.04 at 7:32 pm

“Also, not to sound like a broken record, but if you’re going to use the same N in both limits you have to explain why.”

I thought I had; sorry.

You are right; if we modelled it as two deals with M and N flips, then the way we take the limit will change the answer we get. That isn’t paradoxical, it means the problem is ill-posed: two models, both consistent with the problem statement, give different answers.

In order to introduce two infinities, you often need to specify the limiting procedure used to create them (e.g. Keep M/N = 1? Keep M/N = 2? M-N=1? etc.) This is one common problem with working directly on infinite sets; note that texts often bend over backwards in their definitions to make sure issues like this don’t come up.

However, there is an “out” in this problem; the paradox relies on the fact that these two deals, call them A and B, are exactly the same except that the respective possible payouts of B are one less than those of A (not actual payouts, possible payouts, as you pointed out in the last thread). Destroy this “sameness” and you destroy the paradox (either B stops dominating A or there are possible values of A preferable to B).

So, because this “sameness” is implicit in the problem, it forces us to use the same N for each deal. I therefore think the problem is well-posed, and I think I get an intuitive, non-paradoxical answer:

– one should prefer B (the “smaller” envelope) to A no matter how large N gets,

– the actual values of both diverge as N increases without bound, and

– no matter how large N gets, if you look in A and see any x greater than N, you should switch to B.

None of these is paradoxical.

“But the upper bounds on our utility functions …”

I think I agree with you here; perhaps you aren’t addressing me.

I was just saying that you can raise these issues without putting nasty, icky, paradoxical infinities in there :-) I have found that it cleans up the discussion a bit.

11

armando 05.26.04 at 8:14 pm

Nasty, icky, paradoxical infinities? Aren’t you dismissing a great deal of mathematics and raising slightly old fashioned concerns? Not that there *aren’t* paradoxes, or that infinities don’t get icky, but you don’t want to throw the baby out with the bathwater.

12

hofstadter 05.26.04 at 8:24 pm

First, read my book.

13

John Quiggin 05.26.04 at 9:56 pm

“But the upper bounds on our utility functions are a contingent feature of our psychology and natural world. Not very contingent, I admit, but contingent enough that you’re not entitled to cite them as a way of getting out of a mathematical problem. That’s why they’re sometimes phrased in terms of time spent in heaven or hell—that can go on without end. ”

To foreshadow my next post on this, I don’t think there’s anything contingent about the fact that we are finite beings with a finite capacity to consume (earthly) goods and bads. So, for the purposes of decision theory, I don’t think there’s anything problematic about rejecting unbounded payments.

Consideration of sensible upper bounds also resolves the finite versions of the St Petersburg paradox mentioned by Bill. For example, given risk-neutrality and an upper bound equal to world GDP for the next 100 years, the St Petersburg bet is worth around $35.
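The effect of a cap can be sketched in Python (a generic calculation, not a reproduction of John’s $35 figure, which depends on his particular payoff convention and cap): the capped expected value grows only logarithmically in the cap, so even an astronomically large bound yields a modest price.

```python
from fractions import Fraction

def capped_st_pete_ev(cap, kmax=200):
    # Pays min(2^k, cap) with probability 2^-k, for k = 1..kmax
    # (kmax chosen large enough that truncation error is negligible).
    return sum(min(2**k, cap) * Fraction(1, 2**k) for k in range(1, kmax + 1))
```

With cap = 2^m the value is roughly m + 1 dollars: doubling the cap adds only about a dollar to the bet’s worth.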

These limiting arguments don’t apply to heaven and hell, or to God, and I’m still thinking about the implications of this.

14

Bill Carone 05.26.04 at 10:22 pm

Armando, thanks for the response.

“Nasty, icky, paradoxical infinities?”

Sorry, little joke there :-)

“Aren’t you dismissing a great deal of mathematics and raising slightly old fashioned concerns?”

I am old-fashioned; I am following Gauss and not Cantor (I think). Right now, I think Gauss has it right, and when I use his methods on problems, I get intuitive, non-paradoxical results. I am also an engineer; we are famous for not wanting to sum all the way to infinity, so we pick a number close to infinity instead. Say, five :-)

I’ve been re-reading some of my old analysis texts, and they seem to bear me out in many places. For example, infinite sums are defined as limits of partial, finite sums, not as sums directly over infinite sets. So I’m not sure I am dismissing as much of mathematics as you think.

Another thing; I don’t mind people studying infinite sets, or non-conglomerability, or whatever floats their boat. However, when they start saying that decision theory can’t deal with infinite sets, then I disagree; it can, as long as the infinite sets are represented by well-behaved limits of finite sets.

“Not that there aren’t paradoxes, or that infinities don’t get icky, but you don’t want to throw the baby out with the bathwater.”

Here is my position, and I would love to hear counterexamples: The baby consists of all the results where taking the limits give the same answer as using the infinite sets, and the bathwater is where taking the limits give a different answer.

I am happy to use infinite sets when they happen to be a good shortcut to the annoying, complicated, difficult limits :-); in fact, there are problems where taking the limits and dealing with large finite sets would be practically impossible, whereas using the infinite sets directly gives the correct answer in one line.

But once the two disagree, then I go with the limits. I have never seen a paradox survive this; the mathematics either refuses to give an answer or gives the correct, intuitive answer.

Again, I would love to see counterexamples: examples of calculations on infinite sets that either cannot be defined as limits or ones where we would seriously misunderstand them by using limits instead of the infinite sets directly.

15

Glen 05.26.04 at 11:05 pm

When I first encountered the envelope-switching paradox in grad school, my professor said (and I still agree) that the moral of the story is this: “You cannot have no beliefs.” In this case, it’s ridiculous to suppose that any pair of the form (X, 2X) is just as likely as any other. Whether it’s your grandmother or an angel who offers you the envelopes, you must have some prior about how the amounts were selected. And with any reasonable prior (and I would construe “reasonable” to rule out most any infinity), the paradox dissolves. It will turn out that it sometimes makes sense to switch, but not always.

16

armando 05.26.04 at 11:23 pm

Bill.

I’m not sure what you mean when you are using the term “infinite sets”. If you feel happy talking about real numbers and calculus, say, then you have already passed to some sort of infinity (quite a non-trivial sort, from a certain point of view). And that’s at a fairly elementary level.

You are right about limits, on the whole, but I get the feeling you are pushing an open door.

You want to reject infinities, except when there is a reasonable process involved. As opposed to a mathematician, who happily accepts infinities, but is careful to treat them right. I’m not sure I see the distinction.

17

Bill Carone 05.27.04 at 12:54 am

“If you feel happy talking about real numbers and calculus, say, then you have already passed to some sort of infinity”

Right; for example, I see real numbers as limits of rational numbers with finite decimal expansions. I see integrals as limits of sums (lots of different ways to take the limit = lots of different types of integrals?). I see derivatives as limits of differences.

“You want to reject infinities, except when there is a reasonable process involved. As opposed to a mathematician, who happily accepts infinities, but is careful to treat them right.”

Perhaps there is no difference; that would be great, as it would mean that I’m not as stupid as I appear :-) Do the two methods match? Or are there things that are infinite and can’t be modelled as limits, even in principle?

My claim then would be that Brian is not treating infinities right, since he is getting non-intuitive, paradoxical answers and I, taking limits, am getting non-paradoxical, intuitive answers. Do you disagree?

18

Matt Weiner 05.27.04 at 3:46 am

These limiting arguments don’t apply to heaven and hell, or to God, and I’m still thinking about the implications of this.
Well, that’s what I meant by saying that the upper bounds were contingent but not very contingent. If you’re dealing with things that can be attained in This World, then it’s easy to cap the number of coin flips as you point out. The theology involved is also pretty dubious, as has also been pointed out. So it’s not a decision we’re likely to face.
So… economists probably don’t need to worry about this problem much. But it does raise a theoretical issue, which is what I’m interested in.
Though I’m not that interested in the St. Petersburg per se–what I’m interested in is using it as a proxy for decisions in the face of uncertainty. Maybe Glen is right and we always have to have some prior or set of priors, but I think that this topic is underexplored.
Bill–
I’m just not convinced that the various setups Brian described are ill-posed (as opposed to very very very unlikely to be encountered). Take the one in which God makes each flip take 1/2 as long as the one before. Not only will this procedure be done within a second–with probability 1 only a finite number of flips will be involved. Even if infinities aren’t involved, I don’t see why God can’t just carry out this procedure: Flip the coin increasingly fast until it comes up tails, then count how many flips it took to come up tails. The case in which it never comes up tails doesn’t seem to be driving the paradox, since the paradox arises no matter what value we give to that case.
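Matt’s accelerating-flip procedure can be sketched in Python (numbers mine): if flip k takes 2^-k seconds, the whole run fits inside one second, and the probability that tails has still not appeared after n flips shrinks to zero, so with probability 1 only finitely many flips occur.

```python
# Total duration of the first 50 flips: 1/2 + 1/4 + ... = 1 - 2^-50 seconds,
# always strictly under the one-second budget.
total_time = sum(0.5 ** k for k in range(1, 51))

def p_no_tails_after(n):
    # Probability that the coin has come up heads on every one of the
    # first n flips: 2^-n, which tends to 0.
    return 0.5 ** n
```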

19

armando 05.27.04 at 11:12 am

Bill,

Well, I think that you have a slightly odd view of this. I’m not sure how you can be consistent in accepting the reals and rejecting measures and probability. Certainly an appeal to limits is insufficient, since these things tend to be defined quite rigorously via limits anyway.

20

Bill Carone 05.27.04 at 3:03 pm

“I’m not sure how you can be consistent in accepting the reals and rejecting measures and probability.”

I’m not sure what you mean when you say I am rejecting measures and probability.

If these are defined as limits, then I accept them perfectly well, especially when they give the same answers as limits and give them faster and more elegantly.

21

Bill Carone 05.27.04 at 3:33 pm

Matt,

“I’m just not convinced that the various setups Brian described are ill-posed.”

I don’t think they are, as I said; set both envelopes to N flips and send N to infinity.

“Even if infinities aren’t involved, I don’t see why God can’t just carry out this procedure:”

First, an infinity is involved, even if the outcome “infinity” never happens or is dealt with; there are an infinite number of possibilities. At some point, to calculate the expectation, you must sum over all these possibilities.

Second, all I mean by ill-posed is that we can create more than one mathematical model consistent with the problem statement and different models give different answers.

It is possible that God could do something that we can’t model mathematically (He is God, after all :-). Say He gives us two infinite piles of cash, and asks us which one we should prefer. We can model this by many types of limits (perhaps the first pile is half as big as the second, or vice versa), so we can’t answer; the problem is ill-posed, even though God could do it. It isn’t a paradox; unless God gives us a limiting procedure, we can’t model the problem mathematically to help us make the decision.
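Bill’s point here can be put in miniature (a trivial Python sketch, with the limiting procedures chosen by me for illustration): which “infinite pile” is bigger depends entirely on the procedure used to construct the piles.

```python
def difference(M, N):
    # Size difference between pile 1 (M bills) and pile 2 (N bills).
    return M - N

# Keep M = N at every stage: the difference is 0 all the way to the limit.
same_rate = [difference(N, N) for N in (10, 100, 1000)]

# Keep M = 2N at every stage: the difference grows without bound.
double_rate = [difference(2 * N, N) for N in (10, 100, 1000)]
```

Both procedures produce “two infinite piles”, yet they disagree about the comparison, which is what makes the unadorned question ill-posed.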

This is all assuming you buy my limit story as an accurate way to deal with infinities. Hyperreals give the same answer (N is less than 2N even when N is infinite). Cantor would not (unless one infinity was countable and the other wasn’t).

I don’t think Brian’s problem is ill-posed, as I think there are strong symmetry arguments for a particular limiting procedure. All the better that it gives the correct, intuitive answer.

“Maybe Glen is right and we always have to have some prior”

In probability, we always need a prior when we deal with information gathering; it would be strange if in this case of information gathering we didn’t.

So, when we take a 99% accurate test for a disease, we need to know our prior chances of having the disease in order to interpret the results of the test.
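The disease-test example is a direct application of Bayes’ theorem, sketched here in Python (the 1-in-1000 prior is my illustrative choice):

```python
def posterior(prior, sensitivity=0.99, specificity=0.99):
    # Probability of having the disease given a positive result from a
    # "99% accurate" test; without the prior, 0.99 alone tells you nothing.
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)
```

With a 1-in-1000 prior, a positive result from a 99% accurate test still leaves only about a 9% chance of disease — the prior does most of the work.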

The problems happen when trying to decide what distribution to assign to represent our prior information (these threads have been about a prior that represents the St. Pete’s paradox). Indifference, maximum entropy, transformation groups, and marginalization are all ways to do so, but there is a lot more research to be done.

22

Bill Carone 05.27.04 at 4:05 pm

Armando,

“I think that you have a slightly odd view of this”

Perhaps you can shed some light on this, so I can understand where I am going wrong.

Setup:
Two envelopes, A and B. A pays off

$2 with p=1/2
$4 with p=1/4
$8 with p=1/8
(and so on: $2^n with p=1/2^n)

B pays off

$1 with p=1/2
$3 with p=1/4
$7 with p=1/8
(and so on: $(2^n − 1) with p=1/2^n)

We are risk neutral over dollars, all we care about are expected values.

Brian’s view:

Start with the idea that A and B both have infinite expected value, then see where that leads us.

Paradox 1: A stochastically dominates B (no matter what amount of money, A is equally or more likely to exceed it than B) so we should prefer A to B. However, both have infinite expected value, so we should be indifferent.

Paradox 2: Say we look in B and find $x. No matter what x we observe, we prefer A, with its infinite expected value. So, no matter what is in B, we prefer it to A, so we must prefer B to A. However, the same argument works the other way, so we must also prefer A to B.

Bill’s view:

Look at each envelope, instead of having an infinite number of payoffs, as having N possible payouts. We will calculate all the way through to the conclusion in terms of N, then take the limit as N increases indefinitely.

Non-paradox 1: The expected value of A is $N and the expected value of B is $(N-1). E(A) exceeds E(B) no matter how large N gets, so in the limit as N goes to infinity, E(A) exceeds E(B), so A is preferred to B.

Non-paradox 2: If we look inside B and see x, then any time we see x greater than N, we shouldn’t switch. No matter how large N gets, there are always possibilities when we shouldn’t switch (max payout is always 2^N which is greater than N). So there is no “sure-thing” type of argument as above.
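Bill’s truncated figures can be checked directly (a Python sketch; I assign the leftover probability 2^-N to a $0 payout, matching his expected values):

```python
from fractions import Fraction

def truncated_expectations(N):
    # Bill's truncation: A pays 2^n with p = 2^-n, B pays 2^n - 1, for n = 1..N.
    ea = sum(Fraction(2**n, 2**n) for n in range(1, N + 1))      # exactly N
    eb = sum(Fraction(2**n - 1, 2**n) for n in range(1, N + 1))  # N - (1 - 2^-N)
    return ea, eb
```

E(A) exceeds E(B) by exactly 1 − 2^-N for every finite N, so the preference for A survives the limit, just as Non-paradox 1 says.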

Armando’s view:

In your view, what mistakes are we making so that our answers are so different?

You might say that we are solving different problems, but you said before that you didn’t see the distinction between carefully used infinities and limits.

23

armando 05.27.04 at 4:47 pm

You are addressing a different mathematical problem which you feel is more in the spirit of the question raised. Mathematically, this is a non-issue, except in a modelling sense. Your gripe doesn’t seem to be about infinities, per se, but the appropriate use of infinities in certain problems. Bill’s paradoxes lend some weight to this view, but otherwise I don’t really see the problem.

(For instance, I think that paradox 2 relies on an intuitive use of “expected” that isn’t supported by the maths.)

24

Matt Weiner 05.27.04 at 10:37 pm

Second, all I mean by ill-posed is that we can create more than one mathematical model consistent with the problem statement and different models give different answers. [etc.]
I don’t have any real objection to this statement; the question is going to be whether your technique exhausts the ways we can model the problem. (Keep in mind that you and Brian agree at least in part about what goes wrong with the original St. Petersburg two-envelope problem; it involves rearranging the terms of a divergent sequence.) One thing is that I’m skeptical that there is any way of specifying the problem-solving procedures in advance that won’t exclude some intuitively obvious answers (or include some paradoxes). In particular, I’d like some more information about when we take the same N for two different limits–here there are obvious symmetry considerations, but can those considerations be specified more generally?

I also have a new problem:
God puts a number in each envelope (without subtracting 1 from the second), representing days in heaven (so you want to maximize your number). He then gives you one envelope, and the following offer:
Before opening the envelope, you can make the decision in advance to keep what you have. He’ll then destroy the second envelope.
Or, you can retain the option to buy the second envelope for 1 day. So after you open the envelope, you’ll have the option to get what’s in the second envelope minus 1 extra day in heaven instead of what’s in the first envelope.
Now, I think it’s already established that for any k in the first envelope, if you’ve retained the option to switch you should. (As N, the number of flips, goes to infinity there will be an N past which it’s advantageous to switch.)
I also think it’s intuitively obvious that, if given this option, you should decide in advance to keep the number in the first envelope. If you know you’ll switch no matter what you see in the second envelope, then keeping the option is giving yourself a St. Petersburg -1 instead of a St. Petersburg.
But–I think if we model this with both coin flips capped at N, for all finite N we’ll find that it’s advantageous to retain the option to switch. There will be values in the first envelope for which you don’t want to switch, and it’ll be straightforwardly advantageous to retain the option to switch.
Opinions?

25

Bill Carone 05.28.04 at 6:05 am

Matt,

“I think if we model this with both coin flips capped at N, for all finite N we’ll find that it’s advantageous to retain the option to switch. There will be values in the first envelope for which you don’t want to switch”

True, and in fact the number of such “non-switch” values increases as N increases (although the probability of any of these occurring decreases).

However,

“(As N, the number of flips, goes to infinity there will be an N past which it’s advantageous to switch.)”

In other words, any particular value k will, as N increases, be one of those that make you want to switch (a “switch” value).

Uh oh, we’ve been here before; this is similar to the infinite ball and urn example. There, the number of balls in the urn increases with N, but any particular ball eventually gets taken out as N increases. Here, the number of “non-switch” values increases with N, but each particular value eventually becomes a “switch” value as N gets high enough.

You see now what I will say; take the limit at the end; there is no “sure-thing” argument that lets you say

“you know you’ll switch no matter what you see in the second envelope,”

So I don’t think it is established that no matter what k is, you’ll want to switch. In fact, for any N no matter how large, there are many possible values of k that will be “non-switch” values. This is similar to Brian’s Paradox 2 (and Bill’s Non-paradox 2) above.

So the above dominance argument that you use doesn’t work, and therefore it isn’t intuitively obvious that you shouldn’t take the option.

26

Jamie 05.28.04 at 1:58 pm

It’s not clear to me that human beings do have bounded utility functions. Is there an argument that we do?

“I don’t think there’s anything contingent about the fact that we are finite beings with a finite capacity to consume (earthly) goods and bads. So, for the purposes of decision theory, I don’t think there’s anything problematic about rejecting unbounded payments.”

But utility is not a matter of one’s capacity to consume. It is a matter of one’s preferences. I very often have preferences over things I do not, will not, cannot consume.

If your utility is bounded, that doesn’t necessarily mean that you have a most preferred prospect, but it means you can find a prospect such that there is no prospect you prefer to it by much (say, by more than you prefer one extra drop of espresso in this morning’s latte). Just speaking personally, I can’t think of what prospect that would be. For me, I mean.

Does Matt or John Q. have a ‘summum bonum’ in mind?

27

Bill Carone 05.28.04 at 3:21 pm

Jamie,

“Just speaking personally, I can’t think of what prospect that would be. For me, I mean.”

Say I’ll give you $2 or a 50-50 shot at X or nothing. What X makes you indifferent?

Repeat (I’ll give you X or a 50-50 chance at Y or nothing, what is Y?). Most people find that, at some point, they can’t think of an amount of money that is worth the risk (or they realize that there just isn’t that much material wealth in the world).

When that happens, start putting in other prospects instead of money (longer life, better quality of life, helping others, curing diseases, etc.)

This is a quick way to see where your own personal “boundedness” starts to show. At some point it becomes very difficult to see what better prospect would be worth the risk, or to even think about the prospects rationally.

This isn’t a proof, but an experiment I find interesting.
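The flavour of the experiment can be sketched in Python under an assumed bounded utility function (the exponential form and the scale R = 10 are my illustrative choices, not anything Bill specifies):

```python
import math

def next_stake(c, R=10.0):
    # With bounded utility u(x) = 1 - exp(-x/R), find the stake X that makes
    # you indifferent between $c for sure and a 50-50 shot at X or nothing,
    # i.e. u(c) = 0.5 * u(X). Returns None once no finite X will do.
    target = 2 * (1 - math.exp(-c / R))
    if target >= 1:
        return None  # u is bounded above by 1, so the chain must stop
    return -R * math.log(1 - target)
```

Starting from $2 and iterating, the chain of indifference stakes grinds to a halt after a few steps — the “can’t think of anything” moment Bill describes, made mechanical.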

28

Bill Carone 05.28.04 at 3:51 pm

Matt,

“One thing is that I’m skeptical that there is any way of specifying the problem-solving procedures in advance that won’t exclude some intuitively obvious answers (or include some paradoxes)”

My position (probably wrong, but FWIW):

– Paradoxes won’t happen as long as you treat infinities as limits and carefully specify how the limits are to be taken.

– Intuitive answers, when such exist, seem to correspond to the idea of taking any limit as late as possible. In other words, do the entire calculation, all the way to the end, in terms of, say, N, then let N go to infinity.

– When limits can be taken many different ways (even when we take them as late as possible) and different limits give different answers, the problem is ill-posed, and we should refuse to give an answer in those cases.

I would probably agree that, when infinities are involved, there is no method that _guarantees an answer_ that is _non-paradoxical_ and _intuitive_.

29

Matt Weiner 05.28.04 at 6:46 pm

Jamie–
No summum bonum in mind; actually I reject several of the Savage axioms (commensurability and the sure-thing principle, for two), which means I don’t think utility functions are a good way to discuss rationality at all, so I’m perhaps not the best person to discuss this. If we’re talking about conceivable prospects, I suppose I prefer N+1 days in heaven to N days in heaven for any N; but I prefer being in heaven forever to anything else, which is a summum bonum; but the way to treat that is probably by assigning it infinite utility (sorry, Bill), so that won’t help much. Next, I prove that whatever the summum bonum is, a ham sandwich is better.
–But anyway, it seems to me that if you’re an economist, you may only need to consider preferences that we could realistically encounter, and that those wouldn’t lead to unbounded utilities. I can’t quite spell it out, but maybe my living for 150 years and being happy all the time and world peace being attained and a few other things would serve as a cap with respect to the choices anyone would reasonably take.
Bill–
ISTM that the result of the “no sure thing” argument is equally paradoxical to anything we might come up with. The argument is supposed to yield that it’s not true that for all k you see, you’ll want to switch. But in the set-up I posed–which is not finite N, it’s meant to be the limit case–what k would that be? There aren’t meant to be any hyperreals involved, so I require a lot of argument to be convinced that the answer can be a hyperreal.

30

Jamie 05.28.04 at 7:28 pm

Bill,

At some point it becomes very difficult to see what better prospect would be worth the risk, or to even think about the prospects rationally.

To the contrary, it seems to me that it is very difficult to see what the upper bound is supposed to be such that no matter what you were offered, you would not risk that upper bound for a 1/2 chance at the offer. Can you think of a plausible candidate?

31

Jamie 05.28.04 at 7:36 pm

Matt,

But anyway, it seems to me that if you’re an economist, you may only need to consider preferences that we could realistically encounter, and that those wouldn’t lead to unbounded utilities.

Ah, maybe so. Maybe even if you’re not an economist!

Still, we want to be very careful here. If there are some goods that are really enormous compared to the ones we usually choose among, then the fact that they are also incredibly unlikely is not a reason to ignore them. ‘Unrealistic’ has to mean not just unlikely but ignorable. It’s not obvious that an upper bound on utility is established by restricting our attention to the realistic cases in the relevant sense.

32

Bill Carone 05.28.04 at 8:15 pm

“it seems to me that it is very difficult to see what the upper bound is supposed to be such that no matter what you were offered, you would not risk that upper bound for a 1/2 chance at the offer.”

What usually happens is that people grind to a halt, then come up with another idea and go off again. So they start with money, go on to length of life, then to quality of life, then to helping others. Even that grinds to a halt, as you start creating planets upon planets of happy people.

The difficult points are different for different people. All I was saying was that if you want to see what people are talking about when they make claims about bounded utility, this exercise helps. I don’t think it proves that our utility is bounded, but it produces interesting internal reactions when we say to ourselves, “Gosh, I can’t think of anything,” at least for a moment or two.

33

Bill Carone 05.28.04 at 8:49 pm

Matt,

“But in the set-up I posed—which is not finite N, it’s meant to be the limit case—what k would that be?”

Again, this is what I have a problem with. You are saying “First I’ll take the limit as N goes to infinity, then I’ll figure out what k is.” It’s very similar to “First I’ll put in and take out an infinite number of balls into the urn, then see if ball k is still in there.”

I say, go all the way to your conclusion first. So, don’t switch if k exceeds N. As N goes to infinity, the number of “no-switch” possibilities increases (for example, when N = 100, there are only 6 possibilities where you switch and 94 where you don’t; the formula is something like N − log2 N).

So you can never say that you will switch for every possible k, and the paradox disappears. You can’t argue that you should always switch, and therefore that you are trading (St. Pete) for (St. Pete − 1).

I might also quarrel with the intuition that the option to switch isn’t worth buying; in most decisions, it would be valuable to have a “do-over.” For any N > 3 or so it is worth it, so it should be the same in the limit.

If I calculated it right, the actual value of the option diverges as N increases, so it is hard to conclude much about it; as the possible number of flips increases, the value of the option increases.
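That divergence is easy to check for a plain truncated St. Petersburg game — a sketch, assuming the textbook payoff scheme (2^k if the first head lands on flip k, capped at N flips), which may not match the exact setup under discussion:

```python
# Expected value of a St. Petersburg game truncated at N flips:
# payoff 2^k if the first head appears on flip k (probability 2^-k),
# k = 1..N. Each term contributes exactly 1, so the EV is N and
# grows without bound as the cap N increases.
def truncated_ev(N):
    return sum((0.5 ** k) * (2 ** k) for k in range(1, N + 1))

for N in (10, 100, 1000):
    print(N, truncated_ev(N))  # prints 10 10.0, 100 100.0, 1000 1000.0
```

So any quantity (like the value of the option) that tracks this expected value will also diverge as N increases, which is why it is hard to conclude much from the limit.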

One way to look at my position is to compare it to the limit as N goes to zero. We know that, in that limit, f(N) doesn’t have to go to f(0); there might be a discontinuity there. One might argue the same thing about infinity: that the limit of f(N) as N goes to infinity doesn’t necessarily equal f(infinity).

My position (and, I think, Gauss’s) is that that is an error: there is no such thing as f(infinity); it is defined only as the limit of f(N).

“There aren’t meant to be any hyperreals involved, so I require a lot of argument to be convinced that the answer can be a hyperreal.”

My understanding is that around 1960, Abraham Robinson rigorously constructed the hyperreal numbers; these can be used to model infinite and infinitesimal quantities in a rigorous manner. For example, there is no nonsense like N = N + 1 when N is infinite; N + 1 is greater than N, even though both are infinite.

So, you have actually “introduced hyperreals” by insisting that we start with infinite N, then figure out k. Therefore the answer is simple: don’t switch when k exceeds N. Not possible because N is infinite? Nope; k can be as big as the infinite number 2^N, which is greater than N.

As an aside, the 2^N has been bothering me; even according to Cantor, 2^N is bigger than N when N is infinite, right? So perhaps even Cantor would agree: in the limit there are an uncountable number of “no-switch” values and only a countable number of “switch” values. However, even if this is true it doesn’t matter; you can create, I think, a similar St. Pete paradox using the divergent harmonic series instead of 1+1+1+1+…, and that version wouldn’t suffer from this.
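The harmonic-series point can at least be checked numerically: the partial sums grow without bound, roughly like log N, so a variant built on them would still diverge. A sketch, not tied to any particular envelope construction:

```python
# Partial sums of the harmonic series 1 + 1/2 + 1/3 + ... + 1/N.
# They diverge, but only logarithmically -- roughly ln N + 0.577.
import math

def harmonic(N):
    return sum(1.0 / k for k in range(1, N + 1))

for N in (10, 1000, 100000):
    print(N, harmonic(N), math.log(N))
```

The divergence is what a harmonic-style St. Petersburg variant would rely on, even though each individual term shrinks toward zero.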

34

Bill Carone 05.28.04 at 8:53 pm

It seems as if some of my post has been garbled; some of the 2-to-the-N have been printed as 2N. Please take that into account, especially in the aside at the end.

For example:

“As an aside, the 2N has been bothering me; even according to Cantor, 2N is bigger than N when N is infinite, right?”

should be

“As an aside, the 2-to-the-N has been bothering me; even according to Cantor, 2-to-the-N is bigger than N when N is infinite, right?”

35

Matt Weiner 05.30.04 at 9:03 pm

Jamie wrote in response to Bill:

it seems to me that it is very difficult to see what the upper bound is supposed to be such that no matter what you were offered, you would not risk that upper bound for a 1/2 chance at the offer.

I’m not sure that Bill or I need to be able to answer this question, though. I certainly can’t think of anything that meets this criterion. But I think, to get the result that utilities are bounded, I only need the result that there is some thing such that, if I were offered it, there isn’t anything else that I would risk it for a 1/2 chance at. I can’t say what that is because I can’t survey all offers in advance–but since I can’t survey all offers in advance, it might be that I was in possession of this summum bonum without knowing it.

That is, if I have X, and I wouldn’t in fact prefer X to a 50% chance of Y for any Y, I might not know this because I can’t run over all the Y in my head. It seems likely to me that there is some such X, but I couldn’t tell you what it was.

Two or three provisos: Actually, what I think is something more like: utilities per day are bounded. For any X I might well prefer a 50% chance at the same thing, lasting more than twice as long.

Also, I mentioned above that I don’t accept commensurability, and it seems to me that what happens when we double a certain number of times is that we invoke factors that I might not find commensurable. Would I prefer a 50% chance at spiritual enlightenment to guaranteed success in all my worldly endeavors? I’m not sure. (I think I got this from Putnam.)

(And: Nothing is better than the summum bonum. A ham sandwich is better than nothing. QED.)

36

Matt Weiner 05.30.04 at 9:12 pm

Bill–
Your math looks sound to me, but it still seems to me to have paradoxical implications. The position you wind up with looks to me like it has a problem akin to ω-inconsistency: as N goes to infinity there are more and more non-switch values; however, after you actually open the first envelope, there is no value that is in fact a non-switch value.

That’s the issue I have; you know in advance that when you open the first envelope, it will have a finite number written on it.

For any of these numbers, consider the k-switch problem; if I see k in the first envelope, do I want to switch envelopes? The answer to each k-switch problem is well-defined; there’s only one limit to be taken, and when you take it you discover that it’s good to switch.

So it looks to me as though you can know in advance, for any of the numbers you see when you look in the envelope, that it will be rational to switch once you see it. There are an infinite number of k to be generalized over here, but I don’t see any more problem with that than with any other proof that applies to all finite numbers.
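The k-switch computation can be sketched for one concrete St. Petersburg-style construction — Broome’s, where the envelope pair is (2^n, 2^(n+1)) with probability 2^n/3^(n+1). This is an assumed stand-in, not necessarily the exact setup discussed above:

```python
# Broome-style two-envelope sketch (assumed concrete construction):
# the pair is (2^n, 2^(n+1)) with probability 2^n / 3^(n+1), n >= 0.
# Given you observe 2^k (k >= 1), the odds that you hold the larger
# member of the smaller pair vs. the smaller member of the larger
# pair are 3 : 2, so the other envelope's conditional EV is 1.1 * 2^k.
from fractions import Fraction

def other_envelope_ev(k):
    p_lower = Fraction(3, 5)  # pair was (2^(k-1), 2^k)
    p_upper = Fraction(2, 5)  # pair was (2^k, 2^(k+1))
    return p_lower * 2 ** (k - 1) + p_upper * 2 ** (k + 1)

for k in range(1, 5):
    seen = 2 ** k
    print(seen, other_envelope_ev(k), other_envelope_ev(k) > seen)
```

For every finite k the comparison comes out in favor of switching, which is exactly the “rational to switch whatever you see” conclusion — with no cap N anywhere in the k-switch problem itself.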

(Have to finish in a hurry, sorry.)

37

Bill Carone 06.01.04 at 4:19 pm

Matt,

“consider the k-switch problem; if I see k in the first envelope, do I want to switch envelopes? The answer to each k-switch problem is well-defined;”

It isn’t as well-defined as I would like, as the limit diverges.

“there’s only one limit to be taken, and when you take it you discover that it’s good to switch.”

All I would conclude is that, as N increases, eventually it becomes good to switch. The limit doesn’t exist.

So you can’t take the limit to get rid of N. If you can’t get rid of N, the dominance argument fails.

If you could show that, as N increases, the number of switch values goes to zero, we would be on to something, I think. Then the dominance argument might work.

Comments on this entry are closed.