Max Weber’s Newcomb problem

by Chris Bertram on October 25, 2012

I was reading a postgraduate dissertation on decision theory today (a field where I’m very far from expert) and it suddenly occurred to me that Max Weber’s Protestant Ethic has exactly the structure of a Newcomb problem.

Consider: in the classic Newcomb problem a being, which always guesses right, offers you a choice: either take box A, which contains either $1,000,000 or nothing, OR take that box plus another one (B), which certainly contains $1,000. The being guesses what you will do and, if you are disposed to take both boxes (A+B), always puts nothing in A; but if you are disposed to leave B alone and just open A, it puts the million dollars in A. But by the time you make the choice, the money is there or it is not.

One apparently compelling argument says you should open both boxes (since A+B > A), another persuasive argument says that you want to be in a state of the world such that the being has put the million in box A. A sign that you are in that state of the world is that you are disposed to open just the one box, so this is what you should in fact do. You thereby maximize the expected payoff.
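To make the expected-payoff claim concrete, here is a minimal sketch (the 99% accuracy figure is my own assumption; the classic problem makes the predictor infallible):

```python
def expected_value(one_box: bool, p: float = 0.99) -> float:
    """Expected payoff against a predictor that guesses right with probability p."""
    if one_box:
        # Box A holds $1,000,000 iff the being predicted one-boxing.
        return p * 1_000_000
    # Two-boxing: $1,000 for sure, plus $1,000,000 only if the being mispredicted.
    return (1 - p) * 1_000_000 + 1_000

# At 99% accuracy, one-boxing expects roughly $990,000; two-boxing roughly $11,000.
```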

Now place yourself in the position of Max Weber’s Calvinist. An omniscient being (God) has already placed you among the elect or has consigned you to damnation, and there is nothing you can do about that. But you believe that there is a correlation between living a hard-working and thrifty life and being among the elect, notwithstanding that the decision is already made. Though partying and having a good time is fun, certainly more fun than living a life of hard work and self-denial, doing so would be evidence that you are in a state of the world such that you are damned. So you work hard and save.

To review: you prefer being saved to being damned. This decision has already been taken and you don’t know what it is. You prefer partying to hard work. Given that partying is better whatever state of the world you are in, there appears to be a compelling argument for partying. (After all, partying+heaven is better than working+heaven, and partying+hell is surely better than working+hell.) But you work hard and reinvest, despite the dominance of partying, because you really, really want to be in that state of the world such that you get to heaven.

I’d love to claim originality for noticing this parallel, but it seems that a Lithuanian scholar, Zenonas Norkus, got there already, and has published it in [Max Weber Studies](http://www.maxweberstudies.org/MWSJournal/5.1%201st%20page/006%20Zenonas%20Norkus.pdf).

{ 88 comments }

1

Stuart Presnell 10.25.12 at 2:41 pm

As I recall, the “being” in Newcomb’s problem can be made sense of if we allow her to be freed from the bounds of normal temporal causation — for example, perhaps she is a time-traveller who observes which decision you make, and then visits your past to set up the boxes accordingly. Do Calvinists believe that your status as damned or elect is independent of your actions, or simply determined by them in a non-temporal manner by a timeless God?

2

Eli Rabett 10.25.12 at 2:49 pm

Schroedinger’s cat does morality?

3

Latro 10.25.12 at 2:55 pm

Why would you open both boxes if you KNOW that in that case there is only $1000? If you are under the impression the thing is random that makes sense, but not as described, unless the problem is “how will people take a decision if I tell them the situation is different from what it really is”, in which case it is kind of trivial (“I lied and they fell for it”)

4

The Iron-Tongued Devil 10.25.12 at 3:17 pm

Look, I am not by most standards a stupid person, and your description of the Newcomb problem makes no sense to me. “A sign that you are in that world is that you are disposed…” sounds immediately like theology. Forget about analogies to Calvin. It’s a sign, not evidence. My arbitrary initial disposition about an issue on which I can have no experience can’t be evidence. From the sound of this, I might as well be listening to voices out of whirlwinds. What’s the point of describing that in terms normally used for rational decisions?

I know, I must be missing something, I should google, I should go away and think about it for a while. I will. But if you start getting a lot of nonsensical comments that don’t even address the issue, the explanation of the Newcomb problem might be one of the causes.

5

Chris Bertram 10.25.12 at 3:24 pm

[I’ve amended that sentence to read “state of the world” as elsewhere in the post.]

6

Chris Bertram 10.25.12 at 3:29 pm

Tad Brennan tells me that he has used Newcomb’s problem to illuminate a dispute between the Stoics and their critics, see here:

http://www.oxfordscholarship.com/view/10.1093/0199256268.001.0001/acprof-9780199256266-chapter-16

(The link may not work for everyone).

7

Phil 10.25.12 at 3:45 pm

I don’t know from Newcomb, but Christopher Hill very persuasively analysed Cromwell’s behaviour and disposition in terms of this Calvinist belief-pattern and – for a bonus point – drew an interesting parallel with the historical determinism associated with Marxism: you know that, if historical materialism has any predictive power, the proletariat will rise through class struggle whether you do anything or not, but you want to be in that state of the world where the proletariat is in struggle, is rising and includes you.

8

cs 10.25.12 at 3:52 pm

But if the being in the Newcomb problem is a time traveler, then it is your choice that is determining what is in the boxes, and then there is no paradox at all, just take box A.

9

cs 10.25.12 at 3:57 pm

Or I could say that the argument that we should take both boxes because the contents won’t change after we make our choice is only valid if we exclude the concept of time travel.

(And I know this isn’t what the original post was about at all.)

10

Matt 10.25.12 at 4:01 pm

Do Calvinists believe that your status as damned or elect is independent of your actions, or simply determined by them in a non-temporal manner by a timeless God?

I don’t know about all Calvinists, but at least some important ones (Jonathan Edwards, in the US) believed, as far as I could tell, that God’s choice of whom to save or damn was essentially arbitrary, at least as far as we could tell. That’s in a strong sense, too: if it’s intelligible in any way, it’s not in a way that beings like us can make any sense of at all. It’s certainly not determined by our actions in any way at all. Plus, all of us are equally revolting to God — he looks upon us as we look upon the most loathsome spider, if I recall the formulation correctly. (Perhaps that’s part of the reason Edwards thought the main pleasure of heaven was watching those in hell suffer, though perhaps he just thought that was intrinsically fun.) The basic and quite common Christian idea of salvation by grace provides the essentials for this view (no one could deserve to be saved, so being saved is a capriciously given benefit), but the Calvinist view, especially in the form pushed by Edwards in the US, just takes this a bit further.

11

CJColucci 10.25.12 at 4:05 pm

Stuart:
I am no authority on Calvinism, but I think Calvinists would reject the “predictive” model of pre-destination, because it sounds as though God knows in advance which people will deserve salvation, and acts accordingly. But as I understand Calvinism, no one can deserve salvation. God makes what is, to human understanding, an arbitrary choice of which miserable sinners, who all deserve nothing but Hell (Hamlet famously said that if we were all treated as we deserve, none would ‘scape whipping), win the afterlife lottery. Thus changing the question from “why do bad things happen to good people?” to “how come these assholes get a break and I don’t?”

12

mpowell 10.25.12 at 4:15 pm

I don’t see the parallels. The way you have described the first scenario, the only goal is getting the biggest reward. In the second scenario the issue is that the process leading up to the reward itself has different value. Put it this way: it’s not like I would enjoy opening both boxes.

13

Prosthetic Conscience 10.25.12 at 4:18 pm

They talk a lot about Newcomb’s Paradox over at Less Wrong. They appear to have an almost religious conviction that one-boxing is the correct answer, even though the kind of commitments they have to make to do this make them vulnerable to things like Counterfactual Mugging and Roko’s Basilisk. Needless to say, they are very strange people, but the link to Calvinism actually makes them a little more comprehensible to me.

14

Scott P. 10.25.12 at 4:37 pm

Why would you open both boxes if you KNOW that in that case there is only $1000? If you are under the impression the thing is random that makes sense, but not as described, unless the problem is “how will people take a decision if I tell them the situation is different from what it really is”, in which case it is kind of trivial (“I lied and they fell for it”)

It’s much more complicated than that. Consider a variant in which your best friend gets to peek in the boxes before your selection. He can’t tell you whether the $1 million is there or not, but he can suggest which choice you should make. He will, invariably, suggest that you take both boxes.

15

Christopher Stephens 10.25.12 at 4:41 pm

Chris,

The Calvinist example is a standard Newcomb example used in decision theory texts such as Resnik’s book (widely used by philosophers, at any rate), Choices. I don’t know if he cites anyone for the example, but his book is from 1987, and the example is on p. 111. So it predates Norkus by quite a bit.

Resnik doesn’t mention the connection to Weber, though. But my guess is that is where he got it from.

16

Random Lurker 10.25.12 at 5:00 pm

My knowledge of Calvinism is very limited (actually almost nonexistent), but I always believed that the causation ran in the other direction:
Some people are born “pure”, and will be saved at the end; some others are not pure and will not be saved.
The people who are pure have no worldly desires (since they are pure), and as a consequence they will live a pure life; but the pure life is a consequence of the pure soul, not the other way around, hence the parallel breaks down.

Writing this, I realize that I’m attributing to Calvin a worldview that is very similar to that of the Cathars and of the ancient Gnostics. Can someone knowledgeable of Calvinist theology tell me if this is right?

17

Ethan 10.25.12 at 5:11 pm

No, Random Lurker, that’s completely and totally wrong. Calvin, Luther, and Roman Catholicism all deny the possibility of any person avoiding damnation by avoiding sin; that’s why the question “why would God damn good non-Christians?” is irrelevant in orthodox Christianity. There are no people, Christian or otherwise, good enough to deserve heaven, therefore a “good non-Christian” isn’t being deprived of anything that they deserve.

18

Eli Rabett 10.25.12 at 5:18 pm

OK, more seriously, it depends on how much you need that $1000. If you have a secure job, an inheritance, some wealth, the answer is clear.

19

Both Sides Do It 10.25.12 at 5:51 pm

I never understood the original paradox as a dilemma in deciding which box to choose.

There are always and only two outcomes: choosing box A, with a million dollars in it, or choosing both boxes, with nothing in box A and a thousand dollars in box B. The scenario “choosing both boxes, with a million dollars in box A and a thousand dollars in box B” is explicitly ruled out by the hypothetical premise.

Even granting that the paradox exists, I’m not sure I get the Calvinist analogy either. I’ve always understood Calvinist predestination as essentially bog standard “do good and you go to heaven, do bad and you go to hell” but in a package that attempts to answer the question “how does free will exist in a universe where God has perfect knowledge of everything?”, since if God knows what you’re going to do, well, you don’t really have any choice in doing it, do you.

Predestination is saying “God has perfect knowledge of the choices that you freely made, and also perfect knowledge of the result of those choices, even before you made them.” “If you party you go to hell, if you do good you go to heaven” is functionally the same as “God has perfect knowledge of your choices of whether to party or do good, and also perfect knowledge of whether you end up in heaven or hell, even before you are born.” There aren’t different implications for different choices, as there are in the Newcomb problem.

20

Bloix 10.25.12 at 5:53 pm

If you say that “the being always guesses right,” the being is not guessing. The being knows what you are going to do, which means that your sense that you are making a choice is an illusion arising from your limited powers of perception. The future is already determined, and your belief that you can change what will occur is an error on the level of believing that you can alter the landscape over the hill ahead of you simply because you cannot see it yet.

So it makes no sense to analyze what you “should” do. You are not in control of what you will do; you are merely along for the ride.

This is the conceit of Slaughterhouse-Five, IIRC.

Similarly, the Calvinist knows that he cannot influence whether he is saved or damned. But he believes that the elect, as they say, have certain characteristics in this life, and he sees that he has them. That is, he trusts in God, works hard and saves, and avoids sin. These are signs that he and others like him are saved. There is no voluntary causal relationship – he cannot choose to behave in a way that will bring about his salvation, any more than an ugly person can choose to be an attractive person. His behavior is merely an indication that he is among the elect.

21

Patrick 10.25.12 at 5:59 pm

Put like that…the Calvinist position makes a lot more sense.

The being always guesses right. If you pick box A alone, you’re guaranteed to get $1M, because the absence of the $1M would mean that the being that always guesses right had guessed wrong (or, in the Calvinist analogy, that god is not omniscient). The confusion here is only that from your perspective you appear to have free will, but from the omniscient being’s perspective you do not. But the fact that two different viewers see things differently is pretty much the definition of perspective, thus not particularly problematic, IMO.

Put in Calvinist terms, if you behave morally your whole life and you don’t get sent to heaven, it would mean god had made a mistake, that he is not omniscient, and Calvinists are certain of god’s omniscience.

22

Mike 10.25.12 at 6:06 pm

If it’s merely an indication, who does it indicate to? Presumably the elect themselves–in which case it’s a nice confidence boost, I guess–and to other humans, in which case it’s status signalling. But surely no true elect would behave so vulgarly.

23

Trader Joe 10.25.12 at 6:17 pm

I thought a feature of the Calvinist orthodoxy was that, while the person was either Elect or non-Elect from birth, they a) didn’t know it and b) could screw it up even if they were Elect.

As such, in the analogy, since one doesn’t know if they are Elect or not, being good is the only path (i.e. good + Elect = heaven, but bad + Elect = hell. The non-Elect don’t get heaven no matter what they do, but at least by being good they profit from doing good).

This better parallels the Newcomb problem, with taking box A = Elect + good and taking box B = Elect + bad (i.e. the ‘being’ knows your heart not to be pure). The non-Elect would always lose, either getting an empty box A or at best an empty A + $1000 for being non-Elect but good.

24

Chris Bertram 10.25.12 at 6:22 pm

Christopher Stephens @15 – thanks, I wasn’t aware of Resnik’s book.

25

David Moles 10.25.12 at 6:37 pm

Does Newcomb’s being know that I know about Newcomb’s problem?

26

David Moles 10.25.12 at 6:43 pm

And has Newcomb’s being read Brennan’s paper? Because if so, it’s pretty clear there won’t be a million dollars in Brennan’s box.

27

tomslee 10.25.12 at 7:02 pm

I now realize that I always thought what Bloix said.

28

UserGoogol 10.25.12 at 7:03 pm

Bloix: It doesn’t necessarily have to be deterministic. If the being can predict your actions with merely very high probability, then there’s still a pretty plausible argument for the one box option, as long as you adjust the payoffs such that the possibility of the being guessing wrongly and putting a million dollars in the box when you pick two is inconsequential enough to not throw off the overall calculation.

But even if it is deterministic, I don’t think you’re right anyway. After all, we can ask “should Napoleon have invaded Russia?” even though it’s already set that he did. Similarly, we can ask how a person in a Newcomb problem should act, even if it’s already set how they will act. It’s just comparing the apparent options. (Yes, there’s a distinction between “should” and “should have” but I think it’s largely grammatical.) Or, to phrase it another way, we could ask how an “ideal person” would act if posed with such a problem, since how a person should act and how an ideal person would act seems logically equivalent.
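UserGoogol’s point that the predictor need only be very reliable can be made concrete: with the standard payoffs, one-boxing maximizes expected payoff at any accuracy barely above a coin flip. A quick sketch (the numbers follow from the payoffs in the post; the code itself is mine):

```python
M, b = 1_000_000, 1_000   # prize in box A, guaranteed amount in box B

def one_box_wins(p: float) -> bool:
    """True when one-boxing beats two-boxing in expectation against a
    predictor of accuracy p."""
    return p * M > (1 - p) * M + b

# Solving p*M = (1-p)*M + b gives the break-even accuracy:
threshold = (M + b) / (2 * M)   # 0.5005
```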

29

Alex 10.25.12 at 8:00 pm

If you say that “the being always guesses right,” the being is not guessing.

This is a good point. I can guess that a dropped stone will fall with 100% confidence, while I am standing on the surface of the earth. I don’t guess it, meaningfully – I know it.

30

bianca steele 10.25.12 at 8:04 pm

Seems to me the answer depends in part on whether the being is really another player in b’s own right. Given the grid, if the game is against another player, the answer is always take both. But the feedback loop, in effect, means you’re almost playing against yourself. So you can choose the world where the choices are all >=$1mil, and you take only one box. (Instead of asking (see), do I want 1&3 or 2&4, ask do I want 1&2 or 3&4.)

But that assumes something about the being. If you know nothing about the being, and don’t know whether b will expect you to mechanistically apply an inappropriate strategy, or whether b will predict “greedy and greed=2 instead of 1,” or whether b will predict “greedy and greed=world where all choices >=$1mil,” or what b’s criteria are going to be at all, there’s a good chance (even if you think you’re going to choose one box) that b will say you were going to choose two boxes. You’re basically playing against an ordinary opponent who might take either branch. Then you should take both boxes.

Or, obviously, if you think the being is probably God, and you know what God would say, then you should decide accordingly.

UserGoogol’s explanation is interesting, but seems to imply that someone playing a game like this can plausibly decide to do one thing and really do a different thing. (I mean, last night I decided to wash clothes in cold water but washed them in warm water instead, which presumably makes me not-elect, but I haven’t worked out whether this fits the problem.)

31

Barry 10.25.12 at 8:05 pm

Mike 10.25.12 at 6:06 pm

” If it’s merely an indication, who does it indicate to? Presumably the elect themselves–in which case it’s a nice confidence boost, I guess–and to other humans, in which case it’s status signalling. But surely no true elect would behave so vulgarly.”

It’s basically reassuring all of those Swiss burghers that they’re on top in this world and the next, and that they’re Special(lly Beloved) in the eyes of God.

In short, Calvinism is a heresy.

32

Harold 10.25.12 at 8:08 pm

I don’t think even the Swiss believe in predestination anymore. The tendency among Calvinists everywhere since the 18th c. was to move toward Unitarianism.

33

Josh G. 10.25.12 at 8:22 pm

Harold: “I don’t think even the Swiss believe in predestination anymore. The tendency among Calvinists everywhere since the 18th c. was to move toward Unitarianism.”

It’s not clear to me how Unitarianism (rejection of the doctrine of the Trinity) is inconsistent with classic five-point Calvinism. Sure, you could quibble with the mechanics of atonement if Jesus is not exactly the same as God, but that’s a solvable problem even within the Calvinist framework with a bit of hand-waving.
Or did you mean universalism (i.e. the belief that everyone, not just an elect, will be saved)?

34

Christopher Stephens 10.25.12 at 8:39 pm

While Resnik doesn’t mention anyone for his citation of the Calvinism example, I just checked Richard Jeffrey’s The Logic of Decision, and Jeffrey mentions Jonathan Edwards’ grappling with predestination as a theological version of Newcomb’s paradox.

35

Harold 10.25.12 at 8:59 pm

I am unfamiliar with the mechanics of Christian salvation, and I do not know what the Swiss replaced predestination with. But I do remember reading this. The fact is that predestination is repugnant.

In New England, people simply began calling themselves Unitarians.

36

Alex SL 10.25.12 at 9:07 pm

See, this is the kind of thing that gives theology and certain flavours of philosophy a bad rap. To take this seriously to the point of publishing a whole book about it, when there is no evidence whatsoever that “God” or “salvation” is a more coherent concept than “zargleblorf”, is just silly.

37

Watson Ladd 10.25.12 at 10:06 pm

Scott P: Would you tell Omega you two-box?

38

Random lurker 10.25.12 at 10:48 pm

@Ethan 17
Thanks for the answer.
However, while I understand that nobody really deserves salvation in the strict sense, the formulations of other commenters (such as Bloix @20, for example) are similar enough to my understanding that I will assume I was more or less right (although “pure” was an excessive word).

39

Mao Cheng Ji 10.25.12 at 11:08 pm

All I know is: you should always choose the other door.

40

Frank Ashe 10.25.12 at 11:15 pm

IIRC Martin Gardner, in Mathematical Games in Scientific American sometime in the 1960s, quoted Isaac Asimov as saying he’d choose only box B so as to completely fool the omniscient being.

41

Jeff R. 10.25.12 at 11:17 pm

I sort of think that there’s a hidden issue here, in that for a whole lot of people, accepting the existence of a perfect-predictor Omega has a negative utility considerably greater than the million dollars, and so the value of the ‘impossible’ $1,001,000 outcome is actually orders of magnitude higher than just the money involved. So even if there’s the remotest chance of beating Omega, they have to go for it…

42

chris 10.26.12 at 12:58 am

If you believe the being can violate causality, you should choose only box A. If you believe the being can’t violate causality, you should choose both boxes. Because for your choice to affect the being’s choice would violate causality — it’s part of the setup of the paradox that the being has already chosen and filled the boxes. The core question is whether you believe the being’s predictive powers are *literally* supernatural — so that the decision you *actually* make (as opposed to just the decision more consistent with your overall personality) affects the being’s choice.

Suppose the being knows that I’m a skeptic — that isn’t so hard to know. So he leaves box A empty. What good would it do me to play against type and pick only box A? Sure, I’d prove the being’s fallibility, but if I care more about that than $1000, then I’m not really playing Newcomb’s paradox, I’m playing some kind of variant with a different payoff matrix.

On the other hand, if my alternate universe twin is a man of faith and the being knows *that*, then my twin will presumably talk himself into taking only box A — but I can see that he’s giving up an easy extra $1000 by doing so.

What makes it seem paradoxical is that my rationality appears self-defeating — but really it isn’t. My twin doesn’t benefit from his *actual* irrationality — in fact, it costs him $1000. What he benefits from is his *reputation* for irrationality (in the eyes of the being). But that just makes Newcomb’s Paradox a game where a reputation for irrationality can be an asset — that’s not so unusual. Chicken has the same property.

P.S. If I estimated a substantial probability of someday being faced with an actual Newcomb’s Paradox, would it be rational to cultivate a public persona of faith in uncanny powers like the being’s? Does the effort and cost involved in doing that outweigh the benefit of having the being fall for the persona if I actually do encounter it? Of course, I’d have to consider the possibility that my ruse might fail, making it pointless to have ever attempted it.

43

UserGoogol 10.26.12 at 4:02 am

chris: It doesn’t violate causality, it probably violates (libertarian) free will, but not causality. Your actions are caused by your psychological properties, the prediction is caused by your psychological properties, so all the causation is pointing forward.

And hey, I thought we were supposed to hate libertarians here. :) The idea that people have psychological dispositions but they can choose to turn around and change their mind at the last minute and do the other thing just seems weird. Your decision to “change your mind” is itself motivated by who you are. (Although what would pose a problem for the predictor would be people predisposed to choose in a mindlessly random fashion by flipping a coin or something.)

44

David Aronson 10.26.12 at 6:05 am

Calvinist theology can be thought of more simply as sort of reverse thinking: Sick people take drugs. If I don’t take drugs, I’m not sick. The Calvinists thought: Those graced by God are often rewarded in this life. If I acquire that which makes it seem as if I have been rewarded, then God has graced me.

45

bad Jim 10.26.12 at 8:50 am

As Bloix noted above, this isn’t really complicated, and Vonnegut adequately explained it: “The moment is structured that way.”

Whether or not you have free will and can do what you want, God is waiting at the end of time and already knows what you will do. He already knows how everything will happen. Your future is in His past.

It ought to be obvious that this view of life does not offer a moral principle. It certainly violates everything we know about time and space, but a god who didn’t could scarcely be supposed to exist.

46

Z 10.26.12 at 9:34 am

The Calvinist example is a standard Newcomb example used in decision theory texts such as Resnik’s book (widely used by philosophers, at any rate), Choices. I don’t know if he cites anyone for the example, but his book is from 1987, and the example is on p. 111. So it predates Norkus by quite a bit.

Following up, Jean-Pierre Dupuy (currently at Stanford, I guess) has built more or less his entire career on arguing that the parallel you point out is in fact a direct causal link: he argues that the doctrine of Protestant predestination, and the Newcomb-like reversal of causality it allows, was directly instrumental in the formulation of the doctrine of liberal rational choice by Scottish Enlightenment philosophers. His thesis has a great deal of empirical support, I think (note for instance that the one influential liberal French-speaking philosopher, Benjamin Constant, grew up in Switzerland in a Huguenot family). His most relevant work would probably be Le sacrifice et l’envie, whose title already hints at the ensuing argument (what is the link between rational optimization, especially delayed gratification, and religious sentiment?).

47

Katherine 10.26.12 at 9:39 am

OK, more seriously, it depends on how much you need that $1000. If you have a secure job, an inheritance, some wealth, the answer is clear.

Yup, quite. And yet another example of how already being rich enables you to get richer just by being rich in the first place. A bit like being able to get cheaper gas and electricity because you have the credit rating to be able to pay by direct debit.

48

Z 10.26.12 at 9:49 am

BTW, do you have full access to Norkus’ article? If he doesn’t quote Dupuy extensively, this looks like a massive failure of scholarship at best.

49

Chris Bertram 10.26.12 at 10:09 am

Z: unfortunately not: only the first couple of pages seem to be accessible.

50

chris 10.26.12 at 12:32 pm

chris: It doesn’t violate causality, it probably violates (libertarian) free will, but not causality. Your actions are caused by your psychological properties, the prediction is caused by your psychological properties, so all the causation is pointing forward.

If you assume no free will, the “paradox” is even less interesting: the being does what he is doomed to do, and I do what I am doomed to do. There is no game — in fact, games in general are impossible. The only paradox is that this kind of strong determinism violates the intuition that people do make choices about their actions.

Therefore, for Newcomb’s paradox to be at all interesting, you have to assume that some form of free will is real. I thought that went without saying.

Under that assumption, my psychological properties can’t be absolute, or I don’t even have a choice about which box(es) to take in the first place. (The theological version of this is even worse: people who do evil literally can’t choose otherwise, but get punished for it anyway, and we are supposed to call this justice? At least Newcomb’s being isn’t threatening to torture anyone for eternity.)

51

chris 10.26.12 at 12:35 pm

Whether or not you have free will and can do what you want, God is waiting at the end of time and already knows what you will do. He already knows how everything will happen. Your future is in His past.

If He was just waiting, He couldn’t set up Newcomb’s paradox. He has to walk back from the end of time and fill boxes that exist *in* time based on what you *will* do at a *later* time. That’s a violation of causality.

Now, if God exists at all, the idea that he can violate causality isn’t all that shocking, compared to all the other things people say about Him. But that just goes back to my previous summation: if you think the being posing the paradox is literally supernatural and not bound by the ordinary laws of reality, you should take only box A; if you think he’s just a really good guesser/psychologist/etc., you should take both boxes (because even if he has guessed you will do that, it won’t do you any good to surprise him).

52

ajay 10.26.12 at 12:57 pm

The paradox, since it relies on something that’s a logical impossibility, is rather uninteresting. It’s a bit like the Shape-Sorting Puzzle. Say you have the job of sorting different-shaped pieces of card. They all get put into different bins depending on how many sides they have. So triangles go into bin 3, squares and rectangles and parallelograms go into bin 4 and so on. But what if you get a shape that is a triangle that has four sides? Aha! Paradox!

53

Fu Ko 10.26.12 at 1:12 pm

Just make sure to open A before you touch B.

54

Fu Ko 10.26.12 at 1:26 pm

ajay, the premise isn’t impossible here. Suppose there are two deterministic computer programs, one of which accesses the first value in an array and then the second value; the other of which accesses only the first value. The array itself is provided by the operating system. Whenever one of these programs runs, the operating system analyzes the program, and assigns the values to the array based on what the program will access.

Here are the programs (each a separate program, with #include <stdio.h> at the top; argv entries are strings, so %s is the right conversion, not %d):
int main(int argc, char **argv) { printf("%s, %s\n", argv[1], argv[2]); return 0; }
int main(int argc, char **argv) { printf("%s!\n", argv[1]); return 0; }

Running the first program prints “0, 1000”. Running the second program prints “1000000!”.

No logical impossibility there. Although one might argue that such an OS defies reasonable programmer expectations.
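For what it’s worth, the same trick can be sketched in a few lines of Python (a toy harness of my own, not anything the comment specifies): the harness looks at the strategy’s disposition before filling box A, just as the hypothetical OS inspects the program before filling the array.

```python
def play(take_both: bool) -> int:
    """The 'OS' inspects the program's disposition before filling box A."""
    box_a = 0 if take_both else 1_000_000   # predictor fills A accordingly
    box_b = 1_000                           # box B always holds $1,000
    return (box_a + box_b) if take_both else box_a

# play(True) -> 1000; play(False) -> 1000000
```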

55

rea 10.26.12 at 1:27 pm

For the Calvinist, it’s not that you can get saved by behaving well or damned if you behave badly. Rather, it’s that your behavior gives you a clue as to whether you’re saved or damned. If you behave well, it is possible that you are among the saved. If you behave badly, you are among the damned. It’s the saved/damned status that causes the behavior, not the other way around, but a decision to behave badly is an admission that you are damned, because a saved person would not make that decision.

56

mpowell 10.26.12 at 3:34 pm

On the Newcomb problem, what is the deal with the two-boxers? Faced with a situation where very likely we don’t know exactly what is going on, they insist that we do know what is going on and that this powerful being is just getting very lucky. The actual stated defense of the two-box position appears to consist of the whine that the powerful being is rewarding irrationality. The problem with that whine is that it implies that the being is able to violate causality as we normally understand it, but if that’s true, it is no longer irrational to one-box. The two-box position simply confuses the general concept of rationality with a specific concept of rationality, one valid only in a universe that follows a particular set of rules, rules we assume our universe follows until informed otherwise. Well, this game provides evidence otherwise.

There are several mechanisms by which the being could be performing its tricks. Does it go back in time? Is our mind state fully simulatable? Or perhaps it just has the ability to transport $1M into the box in the instant before we select. Does it matter? The only question here is how we weigh the probability that the being is simply very lucky against our prior that this being has some power we can’t explain. This depends on your worldview and on detailed observation of what exactly is going on in this game. But a person should probably be more willing to believe that they don’t have a full picture of physics (or that they can be fooled) than to believe that a person can correctly predict 100 truly random coin flips in a row. It’s kind of arrogant to assume otherwise.
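
The weighing described above can be put in rough Bayesian terms. The sketch below is purely illustrative: the one-in-a-billion prior is an assumed number, and "luck" is modeled as fair-coin guessing over 100 observed correct predictions.

```python
from fractions import Fraction

# Posterior that the being has genuine predictive power, after
# observing 100 consecutive correct predictions.
# prior_power is an arbitrary assumption for illustration.
prior_power = Fraction(1, 10**9)
prior_luck = 1 - prior_power

lik_power = Fraction(1)            # a real predictor is always right
lik_luck = Fraction(1, 2) ** 100   # lucky guessing: 2**-100

posterior_power = (prior_power * lik_power) / (
    prior_power * lik_power + prior_luck * lik_luck
)

# Even a one-in-a-billion prior ends up overwhelmingly favored over
# "just lucky" after 100 correct calls.
print(float(posterior_power))
```

The arithmetic backs the comment's point: 2^-100 is so small that almost any non-zero prior on "unexplained power" dominates after the evidence comes in.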

57

ajay 10.26.12 at 4:10 pm

54: not quite the same – the OS is assigning the values after you have decided which program to run, and communicated your decision to the OS. You’re part of the system too – it’s not just the two programs and the OS.

58

Fu Ko 10.26.12 at 4:23 pm

mpowell, the being can’t be performing tricks. The premise states that the being knows what you will do before you even have to decide. By premise, it isn’t doing some kind of confidence game, swapping the boxes around. That might be the only realistic explanation of the described behavior, but this isn’t an observed phenomenon; this is a hypothetical.

This is really a ruinous premise in any strategic situation. Imagine you’re playing poker against a Newcomb being that always magically knows what you’re going to bet, in every round, before you even look at your cards. (And he’s playing to win.) Suppose the Newcomb being raises. How do you decide whether to fold? He wouldn’t have raised if your decision wasn’t wrong. So if you fold, you’re being bluffed out; wrong move. But if you call or raise, you’ve fallen into his trap; wrong move. By premise, all your moves are wrong. How can you decide what to do?

59

Fu Ko 10.26.12 at 4:25 pm

ajay, the two programs run by the OS are supposed to correspond to two different people put into the Newcomb situation. It’s unspecified how people (or which people) get into that situation.

60

mpowell 10.26.12 at 4:49 pm


mpowell, the being can’t be performing tricks. The premise states that the being knows what you will do before you even have to decide. By premise, it isn’t doing some kind of confidence game, swapping the boxes around. That might be the only realistic explanation of the described behavior, but this isn’t an observed phenomenon; this is a hypothetical.

Sorry, that’s just invalid. If we are talking about game theory, a hypothetical is a set of observations an actor has regarding the game. That can be a claimed description of the rules plus the actor’s observation of what has happened. It’s common for game theorists to ignore this restriction, but frankly, that’s crap and it leads to fallacious and confusing results. The only way to evaluate how I should play a game is based on things I can observe and verify.

61

bianca steele 10.26.12 at 4:49 pm

@mpowell: It seems like faced with a situation where very likely we don’t know exactly what is going on, they appear to insist that we do know what is going on

The problem definition says what’s going on. Is it legit to question whether the person who’s posing the problem knows what’s going on, or to say “I don’t see how the problem could exist in the real world, I don’t know what’s going on, so for me the problem is about an undefined situation” (those are the same thing, I guess, unless you think you know what’s going on and the person posing the problem doesn’t)?

In some versions of the problem, it seems, the being isn’t omniscient. Those are different. (So is the version in the OP, where no matter how consistent you are, you have no way of knowing in advance that the being’s prediction will match what you predicted yourself.) I suppose you could change the problem around until you came up with something that matched a real-life situation.

62

mpowell 10.26.12 at 5:59 pm

bianca@61:

As I mentioned in my follow-up, the purpose of game theory is to figure out what you should do in a given situation. The terms of that situation can only be described by the things a person in the situation can observe or verify. Therefore, any hypothetical which insists on such things as “this being is omniscient” or “nobody is playing tricks on you” is fundamentally ill-posed. A hypothetical Newcomb problem cannot simply skirt this issue.

63

Matt McIrvin 10.26.12 at 6:04 pm

@Prosthetic Conscience: I’d read some stuff from Less Wrong and some people making fun of them before, but that Roko’s Basilisk business really floors me. You’d think that would be the kind of episode that would make any hyper-rational person step back and wonder if they were hip-deep in complete nonsense.

64

bianca steele 10.26.12 at 7:27 pm

@mpowell
If you think it makes sense, when answering a problem in game theory, to think about what real-world situation from your own actual experience the problem probably corresponds to, and change the payoffs in your head to what you think corresponds to the real-world situation–which is what it sounds like you’re doing–then you’re either doing a different kind of math than I’ve ever heard of (one where there’s one perfect version of every problem and the person giving the problem might trick you about its details?), or you’re not doing math. (How many situations, outside of gambling or economics, are we ever in where we know the payoff matrix? If that’s close to 0, (under your assumptions) game theory would have to be a waste of time.) IMHO, of course.

I guess the definition of a game could be derived from real-world experience, but that doesn’t mean when you decide whether a result is correct you first list all the really possible real world outcomes.

65

mdc 10.26.12 at 7:42 pm

BTW, “predestination” wasn’t invented by Calvin. It’s a puzzle for any Christian, I’d think. Don’t Catholics, eg, affirm predestination? See Summa Theol. 1st part, Q. 23, Art 1.

66

Harold 10.26.12 at 8:25 pm

mdc@64, It has always been a problem, and became acute after the discovery of America, with its vast population who had never heard of Christ. The Jesuit scholar A.H.T. Levi, who wrote about Erasmus, quotes Pico della Mirandola, who has God say to mankind: “Confined within no bounds, you shall fix the limits of your own nature according to the free choice in whose power I have placed you. We have made you neither mortal nor immortal, so that with freedom and honour you should be your own sculptor and maker, to fashion your form as you choose. You can fall away into the lower natures which are the animals. You can be reborn by the decision of your soul into the higher natures which are divine.” — Pico della Mirandola

According to Levi, “In the evangelical humanism that Erasmus inherited through [the French Humanist] Colet from Pico, not only was man’s perfection intrinsic to his moral achievement but, outside a formal theological context and the difficulties about Pelagianism [no need for God’s grace] it imposed, moral self determination was clearly put into man’s autonomous power. Erasmus never ceased to hold this view, and it explains his final rejection of Luther.” http://www.ourcivilisation.com/smartboard/shop/erasmus/intro/intro2.htm

Levi writes that “Both humanists and reformers wished to reject the Pelagianism of the scholastics and the deleterious religious extrinsicism [salvation by works] which it promoted. But while the humanists as such were dedicated to defending man’s intrinsic perfectibility in accordance with his self-determining moral choices, the reformers could find no logical answer to Pelagianism, short of denying to free will any power in the order of grace.”

Levi goes on to say, “It is not difficult to see how this situation arose. If man’s `nature’ is capable even of accepting, to say nothing of meriting, grace, the result is at least semi-Pelagian theology and a religion of tension. If, however, it is not, man is necessarily deprived of any power of self-determination to a good which, on any theory, is supernatural, and he is incapable of influencing his own eternal fate. The dilemma is rigid. Erasmus’s treatise against Luther, the de libero arbitrio (On Free Will, 1524), accuses Luther of denying free will. Luther’s reply, the de servo arbitrio (On Unfree Will, 1525), accuses Erasmus unjustly of scepticism, but also of Pelagianism.”

67

Justin 10.26.12 at 9:15 pm

John Martin Fischer spends a large part of a chapter in _The Metaphysics of Free Will_ (1994) discussing Calvinism and Newcomb’s problem (if I remember correctly, he recommends one boxing if God is essentially infallible but two boxing if not).

68

Jameson Quinn 10.26.12 at 11:56 pm

About Roko’s Basilisk:

For those who don’t follow Less Wrong, this is something to do with the idea that a future super-powerful artificial intelligence, akin to Newcomb’s Omega, could make a simulated copy of you and do things to it in order to reward/punish the real you, and that you should care about this in some ways but not think too hard about it in other ways. I once promised not to reveal much more about it so that someone would explain it to me. I’m keeping that promise, but I can say that the full explanation is more bizarrely logical than you might think from that crazy intro. It’s almost convincing to me, on a logical level, even though on a common-sense level it has no appeal whatsoever.

Still, I reject the idea that it’s important. I’m only keeping the secret of Roko’s basilisk because I promised to, not because I think it’s worth keeping. My reason for rejecting it, though, is perhaps even crazier-sounding than the (vague outlines of) the idea itself. Basically, I (like most people on Less Wrong) accept the idea of quantum-mechanical “multiple universes”, superimposed in the universal wave function like the live and dead versions of Schrödinger’s cat. Thus it’s clear to me (unlike to those who fear Roko’s basilisk) that even if a hyper-powerful AI, able to simulate and predict (with high but not perfect certainty) human actions, will someday exist (an idea which I find unlikely but possible), it could not simulate all the myriad quantum branches, and thus my experience of being inside that simulation would be tiny compared to my experience in all the quantum branches of reality, so I should not worry about what happens in those simulations.

Sorry for all the parenthetical digressions, but otherwise it would take many paragraphs to say all that.

69

Jameson Quinn 10.26.12 at 11:59 pm

So as to stepping back and wondering if you’re hip deep in complete nonsense: I think Less Wrong often is, but I think there may well be islands in that nonsense swamp that are worth visiting.

70

John Quiggin 10.27.12 at 1:22 am

I haven’t actually met such a being yet, so, if I attached a 0.1 per cent probability that I ever would, it would be sensible for me to cultivate the disposition favored by such beings. That way, the being would correctly predict that I would take one box, I would take the box and get the million. Having thought about it that way, I conclude that the probability of such a being is much smaller than 0.1 per cent, and the usefulness of the general principle “other things equal, more money is better than less” so great, that I’m not going to cultivate the disposition. Should I meet the being, I’ll try to take the $1000 with as good a grace as I can muster, and write a long blog post explaining that, ex ante, I was still right.
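
The back-of-envelope here can be made explicit. A minimal sketch, using the comment's hypothetical 0.1 per cent and the payoffs from the OP:

```python
# Expected benefit of cultivating the one-boxing disposition,
# given a 0.1% chance of ever meeting the being (Quiggin's
# hypothetical figure; payoffs from the OP).
p_meet = 0.001
one_box_payoff = 1_000_000   # predictor sees the disposition, fills A
two_box_payoff = 1_000       # predictor sees two-boxing, empties A

ev_gain = p_meet * (one_box_payoff - two_box_payoff)
# Roughly $999: the disposition is worth cultivating only if holding
# it costs less than that over a lifetime.
print(ev_gain)
```

On this arithmetic the conclusion above amounts to judging that the true probability is far below 0.1 per cent, so the expected gain shrinks below the everyday cost of the disposition.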

71

John Quiggin 10.27.12 at 1:27 am

On the other hand (and especially knowing that Nozick was mixed up in this somehow), my actual reasoning would be (as usual with this kind of problem) to deny the premise. The being may look impressive, but it can’t actually violate causality and I doubt that it can predict perfectly, so its record of always getting predictions right probably means that it can pull a trick to punish anyone who picks both boxes and to fix the predictions after the fact. On that reasoning, I should pick one box. The being will then produce a (possibly retrospectively adjusted) one-box prediction and give me the million.

72

Christiaan 10.27.12 at 11:34 am

Your description of Newcomb’s problem is logically inconsistent, especially in the phrases “always guesses right” and “choice”. Either he always knows (i.e. is perfectly prescient), in which case he does not guess and you don’t have a choice, or he guesses with some chance of being wrong, in which case your choice should depend on your assessment of what the being is likely to be thinking about you. Therefore at least one part of the description of the rules must be wrong. Once you recognize this, it can be anything: either the being is not always right and you can cheat it, or perhaps the being can change the content of box A after your choice, or perhaps some other rule is not as described. So really the answer should depend on your assessment of the uncertainty in what the actual rules of the game are.

In a sense this is also true for Protestantism. A lot of discussion has gone on as to what the exact rules are. A common belief is that you can lose your chance at heaven based on your life, basically saying that the being can empty box A. There is also a discussion on whether people are born sinners or innocents, which likewise implies that the content of box A is determined after you have made your choice. And then there’s the inconsistency of saying that God is omniscient while acknowledging that you have a choice. Overall I do think that the Protestant ethic is in the end based on the premise that you don’t really know how God decides (what the rules of the game are), so you had better not take a chance on making the wrong choice.
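
The "it depends on the actual rules" point can be made concrete with the standard calculation: once the predictor is right merely with probability p rather than infallibly, each choice has a well-defined expected value, and the comparison flips at a threshold. A sketch with the OP's payoffs:

```python
# Expected value of each choice when the predictor is correct with
# probability p ($1M in A iff one-boxing is predicted; $1000 always in B).
def ev_one_box(p):
    return p * 1_000_000                 # A holds the million only if predicted right

def ev_two_box(p):
    return (1 - p) * 1_000_000 + 1_000   # A is full only if the predictor erred

# The comparison flips at p = 1_001_000 / 2_000_000 = 0.5005.
for p in (0.5, 0.5005, 0.9):
    print(p, ev_one_box(p), ev_two_box(p))
```

So a predictor only slightly better than a coin flip already makes one-boxing the better expected-value bet, which is why everything turns on what the rules of the game actually are.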

73

Moni Talukdar 10.28.12 at 1:46 am

I’m afraid that though this whole discussion is very interesting, the enlisting of Weber into the debate is quite unnecessary, for it completely misses the one question that was in fact the key issue for Weber. That question is not one of logic but rather of historical contingency: why work and profit accumulation, and why not monastic self-denial? Fine, one can set up predestination as a logical problem and try to make sense of the belief and associated behavior in terms of rational choice. But what interested Weber was that ascetic self-denial took this unprecedented new form, so unlike the practices of “world rejection” that had previously counted as good behavior. This transformation was crucial for what followed in its wake. Remember, it’s The Protestant Ethic and the Spirit of Capitalism.

74

mpowell 10.28.12 at 6:28 pm

bianca @ 64:

What I am pointing out is that the Newcomb problem only emerges as a paradox when you mix the two domains of perfect knowledge and actual player experience. If you want to specify that there is a being that can perfectly predict your actions, then it doesn’t even make sense to ask the question of what you ought to do facing a Newcomb problem.

But if you want to ask, “what should I do facing a Newcomb problem?”, then you need to actually specify what is known by the person making the decision. This is perfectly consistent with the way game theory math works in that players must make decisions under uncertainty. To make the math easy, it is common to specify that the probability of various different possible outcomes is known precisely, but if you want to consider a game where the probabilities of various parameters of the game are unknown, then you had better acknowledge that in your math.

I agree with Christiaan@72, except that the way I prefer to describe this is that the problem is ‘ill-posed’ instead of illogical. Once you specify the problem correctly, the paradox no longer appears.

75

bianca steele 10.28.12 at 6:59 pm

What I am pointing out is that the Newcomb problem only emerges as a paradox when you mix the two domains of perfect knowledge and actual player experience.

Fair enough. (As posed, though, is it a problem in game theory? I don’t have time to look this up, but I’m not going to take either Wikipedia or Less Wrong as my final authority.)

I suppose there’s a line between paradoxes on the one hand, and puzzles on the other. It seems to be an interesting puzzle even to people who aren’t strictly speaking using game theory.

If you want to specify that there is a being that can correctly perfectly predict your actions, then it doesn’t even make sense to ask the question of what you ought to do facing a Newcomb problem.

No problem, as long as you’re not proposing a puzzle like this: “To answer this puzzle, you must assume an omniscient being. But there is no such thing as an omniscient being. And I might be lying. And the only way to answer this question is using game theory. Therefore, the right answer to this question is to tell me that I’m wrong and that the game theory methods you learned last year are the only possible ones. Otherwise, if you assume I want you to take my assumptions as valid, for the sake of argument, I will take you to be superstitious and inclined to think God is going to save you from liars and situations that are too difficult for you: not rational enough for game theory to be worth the effort.” :)

If you’re proposing that academics who’ve assumed the assumption has to be taken as valid, for the sake of argument, have misunderstood the real nature of the problem–or even that the problem as originally posed is ill-posed, and is now better understood by understanding that the game-theoretical understanding of the problem is the true description of the problem that puzzled the original posers so much that they couldn’t even formulate it properly–I won’t object here, though others might want to.

76

Sam the Centipede 10.29.12 at 1:07 pm

Am I unique in having little patience with this sort of “problem” or “paradox”?

The argument seems to be, in a nutshell:
If we assume the impossible, then nonsense can occur.

While counter-factual arguments are useful, arguments based on impossibilities are not. It’s a general mathematical and logical result: if rubbish then anything.

As for the questions of God’s nature and Calvinism, I find those unconvincing for two reasons:
(1) a god that relies on philosophical or logical argument to be brought into existence (such as the ontological argument) is rather useless and can be ignored, certainly it is not worthy of worship;
(2) a god that is constrained by humans’ philosophical or logical arguments about its capabilities is also rather useless and can be ignored.

If a super-being has any interest in my understanding its nature, I expect it to communicate in flames stretching across the sky announced by crashing thunderbolts, not from scholarly journals, random environmental occurrences (such as earthquakes or hurricanes), doorstep evangelists, Tom Cruise or David Icke.

As for the Newcomb argument, if I were forced into a response (maybe I missed someone suggesting this already), I’d consider tossing a coin. Or is that too Captain Kirkish?

77

ajay 10.29.12 at 2:44 pm

If you want to specify that there is a being that can correctly perfectly predict your actions, then it doesn’t even make sense to ask the question of what you ought to do facing a Newcomb problem.

Quite – any more than it makes sense to ask “what output should one of the programs in 54 choose to give?”

78

Jeffrey Davis 10.29.12 at 2:51 pm

That sounds like Antonioni’s Problem: what should I do when nothing I do makes any difference. Which is a subset of Becket’s Problem: what should I do when nothing I do makes any difference even though I’m almost as smart as God and funnier.

79

ajay 10.29.12 at 3:03 pm

I thought Becket’s problem was turbulence? (Now treatable with a simple antacid.)

80

Jeffrey Davis 10.29.12 at 3:31 pm

re: 78

I think you’re confusing that with Bright’s Disease.

82

LFC 10.29.12 at 4:01 pm

@77, 78
I thought Becket’s problem was Beckett’s problem.

83

ajay 10.29.12 at 4:22 pm

84

Bloix 10.29.12 at 5:25 pm

Senate candidate Richard Mourdock:

“I struggled with it myself for a long time, but I came to realize life is that gift from God, and I think even when life begins in that horrible situation of rape, that it is something that God intended to happen.”

This is what Christians believe, okay? You have no control over your life. God wants you pregnant, he’s gonna get you pregnant. He wants you to pick a box with $1 million in it, you’ll pick the box. So stop trying to game the system.

85

LFC 10.29.12 at 6:16 pm

ajay:
i’m slow today, but i just now got yr little joke (“turbulent priest”) without having to resort to wikipedia.
but i’m fairly sure (though not absolutely positive) that Jeffrey Davis meant Samuel Beckett.

86

LFC 10.29.12 at 6:18 pm

And if this were Atrios’s blog, i cd add irrelevantly:
the electricity is still on here!
(though i don’t expect that to be the case v much longer)

87

Jeffrey Davis 10.29.12 at 7:00 pm

re: 84

Yes. I’d never noticed the difference in spelling. And I haven’t even checked to make sure you all have it right. That’s “Davis’s Problem.” The first person (call him “A”) to correct me is probably right.

I’m widely known as the first person on the internet to have admitted to being wrong. But don’t quote me on that. There’s a comfort in knowing that one is more wrong than right.

88

Afu 10.30.12 at 7:34 am

68 “I once promised not to reveal much more about it so that someone would explain it to me.”

Why in rationality’s name would you not be able to explain it fully? Less Wrong is such a strange place.
