As commenters on my last post, and others, have pointed out, there’s a logical gap in my argument that, given imperfect knowledge and the recognition that we tend to overestimate our own capabilities, we should adopt a rule-based version of consequentialism which would include rules against pre-emptive or preventive wars[1]. The problem of imperfect knowledge also applies to the consequences of deciding not to start a pre-emptive war. As I’ll argue, though, the symmetry is only apparent and the case for caution is strong.
I’ve addressed the underlying issue at length in a
recent paper (large PDF) trying to make sense of the well-known “precautionary principle” used in relation to poorly-understood environmental risks. I look at a range of decision procedures, from the simplest (guess what is most likely to happen and assume that will happen), through the simplistic treatment of uncertainty given by expected value models, to more general approaches incorporating the recognition that, in most real-world problems, we will not know the probabilities of the possible outcomes and will not even have considered all the possible outcomes. Moreover, because surprises are generally unpleasant, the things that are omitted are likely to produce an overoptimistic evaluation. This leads me to state a general incompleteness hypothesis, namely:
Estimates of project outcomes derived from formal models of choice under uncertainty are inherently incomplete. Incomplete estimates will generally be over-optimistic. The errors will be greater, the less well-understood is the problem in question.
The last sentence is crucial. In the context of an argument for pre-emptive war, the relevant alternative is “wait and see”. Whereas the consequences of going to war are highly unpredictable, the consequences of wait and see, over a period of, say, a few months, aren’t hard to describe. Either the putative threat will get worse, or it will fade. The cost of the wait-and-see approach is the possibility of having to fight later, with a less favorable balance of forces than could have been had with the pre-emptive strike. But advocates of the pre-emptive strike tend to overstate these costs and underestimate the uncertainty surrounding their preferred option.
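To make the incompleteness hypothesis concrete, here is a toy calculation (all numbers invented, purely illustrative): the same probability-weighted evaluation of a pre-emptive strike, computed first over the outcomes the planner’s model includes, and then again after adding one unconsidered unpleasant surprise.

```python
# A toy illustration of the incompleteness hypothesis: evaluating a project
# over a *modelled* subset of outcomes, when the omitted outcomes are mostly
# unpleasant surprises, inflates the estimate.

def expected_value(outcomes):
    """Probability-weighted payoff over the outcomes we thought of."""
    total_p = sum(p for p, _ in outcomes)
    return sum(p * v for p, v in outcomes) / total_p  # renormalise

# Hypothetical payoffs for a pre-emptive strike (units arbitrary).
modelled = [(0.6, 50), (0.4, -20)]   # quick win / costly win
omitted = [(0.2, -200)]              # occupation quagmire, never considered

naive = expected_value(modelled)
full = expected_value(modelled + omitted)
print(naive > full)  # the incomplete estimate is the more optimistic one
```

The less well-understood the problem, the larger and nastier the omitted column tends to be, which is the point of the hypothesis’s last sentence.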
Iraq provides a good illustration. At the time Bush and Blair decided on war, the alternative was to wait for Blix’s inspections to be completed. The reasons given for going to war in March 2003 rather than waiting until later seem absurdly trivial in retrospect. It was argued that the invasion couldn’t take place in summer and that waiting until after summer would keep forces tied up too long on standby in Kuwait. As things turned out, I’m sure Coalition forces would have far preferred summer in Kuwait to summer in Baghdad.
If Bush and Blair were actually concerned about the threat posed by Saddam, the decision to go to war in March, rather than waiting looks entirely unreasonable, except on the assumption that nothing could possibly go wrong[2]. Tim Dunlop has more on this, with specific reference to Rumsfeld’s latest observation that “you go to war with the army you have”.
As this example shows, the precautionary principle is not, as it might seem, symmetrical. In a situation where the consequences of one option are poorly understood, it provides grounds for avoiding, or, if possible, deferring a decision to choose that option even when a naive analysis would suggest that it should produce a better outcome. War is the paradigmatic example of an activity where “the race is not always to the swift, nor the battle to the strong, but time and chance happens to them all”. All of this leads to something close to the Powell doctrine. If war is to be an instrument of policy, it should only be used under conditions of overwhelming superiority in all phases (including occupation), for clear and feasible objectives[3], and with a clearly formulated exit strategy.
fn1. Except where the threat is so clear and imminent that standard self-defence arguments can be invoked.
fn2. An alternative, plausible in the light of the very lackadaisical attitude to weapons exhibited after the invasion is that they knew the WMD case was bogus, and needed to start the war before it collapsed altogether.
fn3. These objectives need to be justifiable in terms of the interests of the people of the world in general, and not of the national interest of one country or the personal interests of its rulers. A nation or group that pursues self-interest through military force is an enemy to all and will ultimately attract retaliation.
ogmb 12.09.04 at 7:54 am
Bush/Blair/Aznar decided to go to war after the Azores summit because Blix telegraphed from Baghdad that all the U.S. intelligence he had received (after prolonged stalling by the U.S.) was shite. So they could not expect any more good news from Baghdad, and waiting longer would’ve meant slipping support and eventually diplomatic defeat. The decision to wage war in March rather than later was simply made because the main actors saw their window of opportunity closing.
Chris Bertram 12.09.04 at 8:03 am
I guess I’m still a bit puzzled by the role consequentialism is playing in all this. We need a clear set of principles to govern the use of force in the international order, and we have such a set of principles in traditional just-war doctrine, principles which we could extend or adapt to cover humanitarian intervention cases.
Given your invocation of the principle of self-defence in fn.1, I guess that your set of principles is pretty much the same as those in this just war doctrine plus.
Now if I were to tell a justificatory story about those same principles, it would proceed differently, by way of a story about individual rights (including the right to self-defence). But I doubt that the content of the principles would differ much — if at all — from yours.
(Maybe it is a strength that we can tell different justificatory stories, since it is important that any set of rules be mutually acceptable, and the chance of universal acceptance is improved if we don’t all have to buy into the same underlying justification — a case of Rawlsian overlapping consensus.)
John Quiggin 12.09.04 at 8:28 am
Chris, I agree with what you say. We could come to much the same position from different justificatory principles. But I tend to think that’s because we’ve learned to rule out justificatory principles that regularly lead to bad consequences. However, that may be a debate for another day.
will ambrosini 12.09.04 at 9:47 am
Thanks for the paper. For this budding economics student, the discussion of more and more general models of uncertainty was enlightening. I think I’ll print it out and use it as a cheat sheet.
Your analysis is VERY dependent on how the decision maker determines which state of the world is “status quo” and what would be an “innovation”. Where you see preemptive war in Iraq as an uncertain innovation, Bush sees uncertainty in the status quo, i.e. an unknown (and unknowable?) connection between Saddam’s WMD and terrorists willing to use them. This would be analogous to the “innovation” that you observe in the doubling of greenhouse gas emissions if we maintain the status quo.
Similarly, there is some fudginess in the statement of the incompleteness hypothesis. “Incomplete estimates will generally be over-optimistic.” Your over-optimism may be my over-pessimism.
I’m reminded of Rumsfeld’s famous line about unknown unknowables and some such. What rule of nature implies only bad stuff is more unknowable?
Potemkin Cruise 12.09.04 at 9:50 am
Not sure of the extent of congruence between just war doctrine and a precautionary principle based on consequentialist analysis, but it would seem that one thing the latter has going for it is an arguably greater persuasive impact. By pointing out exactly how, in the past, over-optimism in the face of uncertainty has led to unintended negative consequences (body bags, dead babies, etc) it becomes a hell of a lot easier to convince would be warriors to step back and take a deep breath. The fact that people like Powell who have actually been to war advocate something similar only adds to the persuasiveness.
In the US, at least, opposition to preemptive war based solely on moral notions of pacifism or through citation to rule-based doctrines doesn’t seem to be that effective. Such arguments inevitably invite red state moral counter-arguments, and the whole discourse tends to break down into hair-pulling. On the other hand, pointing out that cousin Elroy may well come home in a box because the men in suits guessed wrong (just like they did the last time) tends to focus the mind in unique way. Not saying anybody should abandon moral arguments or forego appeals to international rules of war, just that so long as MLA membership lags behind NRA membership, I’d be loathe to abandon consequentialist reasoning as a persuasive tool.
Kevin Donoghue 12.09.04 at 9:59 am
Looking at your last sentence and the accompanying footnote it seems to me that you are departing quite radically from just-war doctrine. Suppose the “true” reason for the invasion was to place Iraq under a leader who would be better for Iraq and for the world in general. Also, suppose the operation had been properly planned and executed. Then you have a war which is justifiable on consequentialist grounds but not a just war in the traditional sense.
bad Jim 12.09.04 at 10:06 am
The first and last are entirely unproblematic, even commonplace. The middle assertion is entirely debatable. It’s as often argued that the default management attitude is pessimistic.
dsquared 12.09.04 at 10:56 am
“Optimism” in John’s paper is being used in a sense of “cognitive bias toward overestimating the benefits and underestimating the costs of a course of action which is desired”, rather than a general sunny-side-upness.
John: have you seen this paper in Metroeconomica, on evidence theory which looks like a kinda-sorta similar approach?
Andrew Boucher 12.09.04 at 12:39 pm
I’d agree with Will that the principle enunciated seems to apply as well against models of global warming. There are high costs of going to war, and there are high costs of doing something about global warming. Why in one case is it advocated that we do nothing, but in the other case we do something?
dsquared 12.09.04 at 1:22 pm
Basically, it’s an argument about reversibility of courses of action. If we decide to cut carbon emissions and it turns out to be unnecessary, then we can just start burning coal again; we’ll have lost a couple of decades of economic growth in the worst case (which I frankly don’t believe) but no more.
On the other hand, if we make the opposite mistake and London sinks beneath the waves, then it’s pretty much gone and there’s no bringing it back.
Similarly, having not started a war, it’s easy to move to a state of affairs where you have started one, but having started a war, you can’t unstart it.
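The reversibility point can be put as a minimal numerical sketch (my numbers, not dsquared’s): waiting keeps the option to fight open, while striking now forecloses it and bears the war’s cost even in the states of the world where the threat was never real.

```python
# A toy options-flavoured comparison of "strike now" versus "wait and see".
# All payoffs are invented and purely illustrative.

def act_now(p_threat, win, lose):
    """Strike immediately: pay the cost of war in every state of the world."""
    return p_threat * win + (1 - p_threat) * lose

def wait_and_see(p_threat, win, lose, delay_cost=10):
    """Observe first; fight only if the threat materialises, at some
    penalty for the worse balance of forces. If the threat fades, pay nothing."""
    return p_threat * (win - delay_cost)

# Even a "win" is costly; an unnecessary war is much worse.
p, win, lose = 0.3, -20, -100
print(wait_and_see(p, win, lose) > act_now(p, win, lose))  # waiting wins here
```

The comparison flips only when the threat is near-certain or the penalty for delay is enormous, which is roughly the imminence condition in fn. 1.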
Andrew Boucher 12.09.04 at 2:03 pm
dsquared: You seem to be using different bars in the two cases. In terms of global warming, it’s okay if we can go back to the start – never mind the two decades of lost economic growth. In the case of war, it’s not the status quo ante you’re asking for, but the non-commission of an act which has already been committed – pretty difficult.
Basically, if the U.S. puts Saddam Hussein back in power, doesn’t that pretty much undo the Iraq war, in the same way that, after spending trillions of dollars to avert global warming, one can say “never mind” and go back to burning carbon fuels?
Deb Frisch 12.09.04 at 2:11 pm
It is curious that most people who endorse the precautionary principle were and are vehemently opposed to Bush War II.
The PP is an alternative to American-style “cost-benefit analysis.” CBA is a method of decision analysis that converts all uncertainties to probabilities and all consequences to money. The best decision is the one that maximizes expected value (net present value? expected utility?).
The PP says that in situations where the stakes are very high and there is a lot of ambiguity (second-order uncertainty about the magnitudes of the consequences and probabilities) you should give more weight to the worst case scenario and have a bias toward inaction.
The PP is to CBA as Allais and Ellsberg are to Subjective Expected Utility theory (von Neumann & Morgenstern, Savage).
Although Allais and Ellsberg argued that their counterexamples to SEU challenged its normative validity as well as its descriptive accuracy, almost all economists still accept SEU as a normative standard.
The PP rejects the normative status of SEU in a way that is very similar to how Allais and Ellsberg did.
Two concepts from behavioral economics can clarify the PP. PP says that in certain situations, a bias toward inaction (status quo bias) is a good thing. And a bias toward giving “more weight” to negative deviations from the status quo than to positive ones (loss aversion) is a good thing.
The PP says that sometimes, it’s rational to be biased.
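Deb’s contrast between CBA and the PP can be roughly formalised (my own gloss on her description, with invented numbers): the same gamble, but the PP shifts decision weight toward the worst case, so a bet that plain expected value accepts, the precautionary rule declines.

```python
# Cost-benefit analysis versus a crude precautionary rule. The `caution`
# factor below is an assumption of mine, not a standard parameter.

def cba(outcomes):
    """Plain expected value: sum of probability * payoff."""
    return sum(p * v for p, v in outcomes)

def pp(outcomes, caution=3.0):
    """Inflate the decision weight on the worst case, then renormalise."""
    worst = min(v for _, v in outcomes)
    weighted = [(p * caution if v == worst else p, v) for p, v in outcomes]
    total = sum(p for p, _ in weighted)
    return sum(p * v for p, v in weighted) / total

# High stakes, ambiguous downside: a small chance of a large loss.
gamble = [(0.9, 10), (0.1, -50)]
print(cba(gamble) > 0, pp(gamble) > 0)  # CBA says act; the PP says don't
```

This is the sense in which the PP builds loss aversion and the status quo bias in on purpose, rather than treating them as mistakes.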
Dan Hardie 12.09.04 at 2:28 pm
Possibly I’m missing something (if so, let me know), but your original post contained the following:
‘Since war is a negative sum game, rational decision makers do not fight wars ‘.
You didn’t say ‘rational decision makers do not fight preventive (or, as we Englishmen say, preventative) wars’ – just ‘rational decision makers do not fight wars’ without qualification.
Did you mean to qualify the word ‘war’? Did you mean to use the verb ‘start’ or ‘provoke’ rather than ‘fight’? Or do you really mean that it is never, under any circumstances, a rational decision to fight a war?
As Blainey noted, an attack on a country needs only one party to make a decision- the attacker. A war needs at least two parties to make a decision: the attacker decides to attack, and the attacked party decides to fight back. This isn’t some cute piece of theory, it happens: Denmark in 1940 chose not to make even a token resistance to the Nazis; Norway, beginning on the very same day, made a fierce fight although the odds in her favour were very poor indeed. (This is not to disparage the Danes, who with the Bulgarians had Europe’s most honourable record in saving Jews from murder.) Or consider the Hungarians in 1956 (tens of thousands of dead Hungarians, thousands of dead Soviet troops) as opposed to the Czechs in 1968- no actual fighting, unless you include some pretty bloodthirsty ice hockey internationals. The Indian reconquest of Goa, met by Portuguese passivity, as opposed to the British reaction to the invasion of the Falklands; the Kosovo war as opposed to the Macedonian intervention- examples could be multiplied.
Deb Frisch 12.09.04 at 2:51 pm
When is war a rational policy decision?
JQ: If war is to be an instrument of policy, it should only be used under conditions of overwhelming superiority in all phases (including occupation), for clear and feasible objectives and with a clearly formulated exit strategy (except where the threat is so clear and imminent that standard self-defence arguments can be invoked).
These objectives need to be justifiable in terms of the interests of the people of the world in general, and not of the national interest of one country or the personal interests of its rulers. A nation or group that pursues self-interest through military force is an enemy to all and will ultimately attract retaliation.
—
The United States’ attack on Iraq in March, 2003 fails on at least three counts.
1. Extremely vague plan for occupation and no exit plan.
2. Gross neglect of the well-being of the occupied people. The US attack on Baghdad damaged critical infrastructure (electricity, water). The US cleanup crew (Cheneyburton) repaired damage to the oil facilities before it repaired damage to the infrastructure of Baghdad.
The failure to attempt to keep track of civilian casualties also might be viewed as “failure to give sufficient weight to the objectives of the occupied people.”
3. If one of the objectives is to deter a non-imminent attack, the evidence of the opponent’s intent and capacity to attack should be very strong.
There is no evidence that Iraq’s intention and capability of attacking the United States were very high.
The logic went something like this:
Saddam Hussein will not allow U.N. weapons inspectors in. Therefore, SH is hiding WMDs.
We know now that SH had another reason for wanting to keep UN inspectors away – he was embezzling money from the UN oil-for-food program. The UN knew that, of course. So did Uncle Sam. But Sam pretended that Saddam’s refusal to allow UN inspectors was more diagnostic of having weapons than it really was. And Americans bought it. (Americans will buy anything.)
The “lackadaisical attitude to weapons exhibited after the invasion” is further evidence for the hypothesis that “they knew the WMD case was bogus, and needed to start the war before it collapsed altogether.”
I think Uncle Scam gets an F, according to Quiggin’s theory of when war is a rational policy decision.
LizardBreath 12.09.04 at 3:16 pm
Possibly I’m missing something (if so, let me know), but your original post contained the following:
‘Since war is a negative sum game, rational decision makers do not fight wars ‘.
You didn’t say ‘rational decision makers do not fight preventive (or, as we Englishmen say, preventative) wars’ – just ‘rational decision makers do not fight wars’ without qualification.
I believe that the sentence in question should be read as “rational decision makers do not fight wars with other rational decision makers.” Once at least one party is making irrational decisions, the assertion does not apply.
Deb Frisch 12.09.04 at 3:35 pm
John@AUS: Since war is a negative sum game, rational decision makers do not fight wars.
Lizard@ENG:rational decision makers do not fight wars with other rational decision makers.
Deb@USA: I think Liz’ hypothesis is much more plausible than John’s. But I think they are both false. If the carrying capacity of the earth were greatly decreased (due to global war, global warming, global capitalism, etc.) and the citizens of one nation could not survive without the resources of another, it might be rational for both sides to participate in a war. The US asks Canada to help us out with food and water. Canada says no. So US attacks Canada.
It might be rational, if Americans weigh the well-being of Americans more than the well-being of Canadians.
Would it be irrational for Canada to fight back?
I’m 99% sure it’s possible to conduct a scenario where rational decision makers choose to engage in war.
Where’s Kevin McCabe when you need him?
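Deb’s scarcity scenario can be put as a toy two-player game (all payoffs invented, purely illustrative): if refraining means starvation, attacking can be the attacker’s rational choice, and fighting back the defender’s, even though the resulting war is negative-sum.

```python
# A toy payoff matrix for the US/Canada scenario. Entries are
# (attacker_payoff, defender_payoff) for each pair of moves.
payoffs = {
    ("attack", "fight"):  (-10, -30),
    ("attack", "yield"):  (10, -60),   # yielding means losing the resources
    ("refrain", "fight"): (-40, 0),    # refraining means starvation
    ("refrain", "yield"): (-40, 0),
}

def attacker_br(d):
    """Attacker's best response to the defender's move d."""
    return max(["attack", "refrain"], key=lambda a: payoffs[(a, d)][0])

def defender_br(a):
    """Defender's best response to the attacker's move a."""
    return max(["fight", "yield"], key=lambda d: payoffs[(a, d)][1])

# ("attack", "fight") is a mutual best response -- an equilibrium between
# rational players -- yet the cell's total payoff is negative.
print(attacker_br("fight"), defender_br("attack"))  # attack fight
print(sum(payoffs[("attack", "fight")]))            # -40
```

So a negative-sum war between two rational players is constructible once the no-war baseline is itself bad enough, which is exactly the carrying-capacity scenario above.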
Stephen M (Ethesis) 12.09.04 at 3:35 pm
I used to work with simulations and games. One thing that struck me over and over and over again was how many military conflicts started with the parties completely off base as to reality.
in most real world problems we will not know the probabilities of the possible outcomes and will not even have considered all the possible outcomes. Moreover, because surprises are generally unpleasant, the things that are omitted are likely to produce an overoptimistic evaluation.
That statement is historically true.
Consider WWI for an easy example. The Germans and French ran out of bullets early on because both expected to have already won and to have significant reserves by the time they ran out.
Or WWII with the French predicting an easy and early victory.
For every Alexander or Frederick II there are 20 or 30 people who made the same assumptions about their ability to prevail.
Look at the four invasions of Canada for example (I’m including the one by the New York Irish). Everyone predicted success.
As for WMDs … you need to go back and listen to the Ms. Clinton NPR interview when she was pushing her book. At that point it is obvious that she expects W to find WMDs.
Another example of a surprise in the field of war, if you view politics that way.
But, back to the clear point, a good historical analysis would reflect that overall, bad things happen in war and that most people who start a war miscalculate due to bad analysis, consistent with the theory above.
Deb Frisch 12.09.04 at 3:49 pm
“One thing that struck me over and over and over again was how many military conflicts started with the parties completely off base as to reality.”
Many biases demonstrated by Kahneman, Tversky, Lichtenstein, Fischhoff, Slovic and other cognitive & decision psychologists apply to military decisions including overconfidence, self-serving bias, outgroup homogeneity and cognitive dissonance. [See Fog of War thread.]
When there’s this much garbage in, cost-benefit analysis gives garbage out. The precautionary principle attempts to counteract overconfidence and self-serving bias.
Skeptics of the PP rightly note that it endorses biased decision making, most notably, the status quo bias. Mistakes of omission are treated less harshly than mistakes of comission. In ambiguous situations where the decision makers are under stress, don’t have all the information, etc. it is rational to prefer to err on the side of caution.
dsquared 12.09.04 at 3:51 pm
Andrew: I don’t think I am if you think about the thermodynamics of the two cases. Burning carbon increases entropy, as does burning gunpowder.
Dan Hardie 12.09.04 at 4:28 pm
Stephen M:’One thing that struck me over and over and over again was how many military conflicts started with the parties completely off base as to reality.’
That’s another point made by the great Australian historian Geoffrey Blainey in ‘The Causes of War’. Blainey said wars are always fought by two parties who believe they can win, which means at least one of them was badly mistaken. I would qualify Blainey’s point slightly: just occasionally you do seem to get countries fighting despite knowing the odds against them to be almost certainly overwhelming (Greece 1940-1, Norway 1940). But yes: anyone who reads military history comes across this time and again- the utter unrealism of so many policymakers’ view of the world.
Geoffrey Blainey: excellent historian, very good prose style, said nasty things about immigrants before he died but still- every intelligent person should read ‘The Causes of War’.
I have to say, though, that I’d like to go through his archives to discover what his data set was when he asserted that increased trade between nations did not decrease the likelihood of war between them. I suspect that he was thinking of the onset of the First World War, which was indeed marked by an increase in international trade, despite the tariff barriers of all the Great Powers bar the UK- still, I’m a bit dubious about that one.
Thomas 12.09.04 at 4:35 pm
I think DD has misstated John Q’s argument–it isn’t about irreversibility, but uncertainty.
But that makes John Q’s position even odder. We’ve already seen that John Q believes that a case where rule utilitarianism gives the wrong result (the non-maximizing result) is an argument for rule utilitarianism, not against it.
Now, John Q is arguing against preventive war in the hypothetical scenario Posner outlined, in which the benefits outweigh the costs (however measured), but the argument actually supports the US preventive war against Iraq, which even Posner suggests doesn’t meet the test ex post.
How does John Q’s argument support the US invasion? Simply adapt the offered argument regarding global warming to the US/Iraq position in early 2003. Allowing Saddam to continue whatever efforts he was making toward WMD was an innovation, as was the proposed US invasion. But which consequences were better understood? The consequences of invading another country (even the consequences of invading another country with insufficient forces) and occupying it are relatively well-understood. But the consequences of Saddam’s continued pursuit of WMD were not well understood at the time. So, John Q’s precautionary principle would have urged the US invasion.
As I said, this is a bit unexpected. An argument meant to demonstrate that even a hypothetical preventive war in which the benefits outweigh the costs shouldn’t be waged argues in favor of an actual war in which the costs outweigh the benefits.
Andrew Boucher 12.09.04 at 4:37 pm
d*d : I guess that was a joke, because o.w. it seems you agree that the two cases are analogous.
Andrew Boucher 12.09.04 at 4:46 pm
d*d : Sorry, my mistake about the analogy. Still, not sure whether you’re joking or not, since thermodynamics surely does not apply.
dsquared 12.09.04 at 5:55 pm
I rather think it does, or at least something similar to it does. There does appear to me to be an important sense in which the flooding of London (if it happens) is an irreversible event whereas the decision to restrict carbon emissions is one that can be reversed. Or to put it crudely (and I’ve always been a big advocate of the options approach that John mentions in the paper), time is on your side when you postpone a war (as long as it is genuinely a war of choice), but not when you postpone doing something about global warming.
Russell L. Carter 12.09.04 at 6:07 pm
Thomas,
The US controlled the air above Iraq, as it did the outer space in which our spy satellites operate, and using techniques available to astronomers, analysis of the physical territory to a resolution of centimeters could be performed; “interesting” locations could be identified and then handed over to Hans Blix. This was done, and he repeatedly went and found nothing. I.e., there was no actual data to support the theory that Saddam was pursuing WMD. Logically, this strongly suggests that no significant WMD-program-related activities were being performed. However, the “logic” pursued by the proponents of war was to insist that the absence of actual data implied that the threat actually existed, and was in fact growing. War commenced, a lot of people were killed, money was spent at prodigious rates, and policy options with respect to other WMD problem areas are constricted. None of this can be undone.
Similarly thorough efforts have produced vast quantities of data that through analysis have enabled characterization of the sources and produced a rough estimate of the rate of global warming. However there is no known method for predicting what the long term effects of this situation are. The point of curtailment of human produced greenhouse gases is to postpone the arrival of the unknown effects until perhaps more is known about how to deal with the inevitable impacts on human societies. Perhaps these effects will on balance be positive. Perhaps they may be negative. If they are positive, well we just resume burning coal which was in effect banked in the interim. The cost will be higher prices (prodigious amounts of money spent) for certain functions in the interim. It seems likely that this won’t be a disaster, and it may allow us time to adapt more effectively if any of the GW effects are in fact disastrous. Thus, if GW is found to be innocuous, then reversibility is possible with the exception of the money spent.
Now, maybe it isn’t wise to do this for various reasons that Mr. Unqualified Offerings could succinctly describe, but subscribing to those notions ought to commit you to being extremely dubious about the government’s claims about Iraq.
John Quiggin 12.09.04 at 7:18 pm
On irrationality, Lizardbreath is right. At least one party must be irrational. Now apply symmetry. The occurrence of a war implies that somebody is acting irrationally and if you’re the one starting the war, it’s probably you.
On DD’s point about irreversibility, this is part of my argument but not the whole. There are a number of other biases that point in the same direction, and the process of generalizing decision theory reveals more at each stage.
Deb & Dan, I meant to acknowledge Blainey and will do so when I get time to edit the post.
Thomas, your reasoning is why I supported the 2002 ultimatum to Saddam with which he substantially complied (please, no lawyer arguments about whether Saddam breached 1441). But in March 2003, the alternative to war was not leaving Saddam’s WMDs alone. It was continued intrusive inspections, including overflights, out-of-country interviews with scientists etc.
Sebastian Holsclaw 12.09.04 at 8:06 pm
“I rather think it does, or at least something similar to it does. There does appear to me to be an important sense in which the flooding of London (if it happens) is an irreversible event whereas the decision to restrict carbon emissions is one that can be reversed. Or to put it crudely (and I’ve always been a big advocate of the options approach that John mentions in the paper), time is on your side when you postpone a war (as long as it is genuinely a war of choice), but not when you postpone doing something about global warming.”
You only get away with this because you avoid the uncertainty problem of avoiding war. If I spot you London sinking into the ocean (a decidedly unlikely event under even the more ridiculous global warming scenarios), you get to analyze the war from the unlikely circumstance of Saddam giving a nuclear weapon to terrorists who successfully explode it in New York City. I think the thermodynamics of that say something other than what you think it says.
As for waiting, the dangerousness level is not an even curve. Take North Korea. By following your advice and taking a wait and see approach, we now have a step-wise increase in dangerousness because North Korea used that time to successfully build nuclear weapons.
Thomas 12.09.04 at 8:18 pm
DD – if it’s true, as some have argued (including Kip Viscusi, for example), that regulatory expenditures can lead to an increase in mortality, then the secondary effects of the restriction on carbon emissions (a regulatory expenditure) include things that can’t be undone. Death is, still, irreversible. (And uncertainty about whether this effect actually holds would, I gather, argue in favor of doing nothing, wouldn’t it?)
John Q – A lawyerly comment and then a real one: When someone hasn’t complied with a requirement, lawyers argue that they have “substantially complied.” In truth, in early 2003 the US and its allies had a number of options. The option that provided the most certainty was the invasion option. One can think of a variety of options for dealing with the threat of global warming, ranging from a simple reduction in the amount of energy used to massive investments in nuclear power, and so on. But each of those options must be run through the same decision process. A costly and bad outcome – invasion – is still preferred to the uncertainty of the consequences in the scenario you described.
Dan Hardie 12.09.04 at 8:38 pm
‘time is on your side when you postpone a war’
It is, assuming that you, or your coalition’s, strength is not decreasing relative to the strength of your potential enemy. But as you like saying, Dsquared, ‘No, let’s not assume that’; let’s not assume it all the time, anyway.
Godwin’s Law and all that, but I am going to be sending JQ a lengthy and no doubt tedious email explaining, among other things, why there are good reasons for believing that the failure to tackle Hitler over the Rhineland re-occupation, the Anschluss, the Sudetenland crisis which led to Munich, or the March ’39 occupation of Prague all progressively strengthened Hitler vis-à-vis the French and British. I just had such a post mostly written and cleverly managed to delete it, so my immense audience will have to be patient.
Btw, before we even get to the question of ‘can preventative war be justified?’, we have to bear in mind the advice given on Dsquared Digest: ‘*Good ideas do not need lots of lies told about them in order to gain public acceptance*.’ The prewar ‘intelligence’ process seems to have been one long lie-fest, so in the context of Iraq, the discussion of preventative war seems to me to be beside the point: no-one at the top of the relevant bureaucracies made any effort to honestly find out if we needed to fight such a war.
Jake McGuire 12.09.04 at 8:40 pm
How is the flooding of London irreversible? Said flooding will not occur overnight; the technology to reclaim land from the sea is pretty well understood (e.g. Netherlands), so you put a dam across the Thames. I think you’re assuming your conclusion.
Dan Hardie 12.09.04 at 8:47 pm
‘On irrationality, Lizardbreath is right. At least one party must be irrational. Now apply symmetry. The occurrence of a war implies that somebody is acting irrationally and if you’re the one starting the war, it’s probably you.’
Irrationality of one party is one explanation for wars, but I don’t know that it’s the only one (using the very simplified, not to say simplistic, model that Posner is using).
What if we accept the premise of rational actors acting upon asymmetric or otherwise imperfect information? Does Posner, weak as he is, assume all actors have access to perfect info?
Blainey was on to this problem in a non-mathematical way in his (marvellous) book: he said that wars always began with both sides believing they could win, so it was evident that one side had always made a mistake. From my reading of history, there are a very few cases where leaders go into war expecting to lose, but otherwise I think Blainey is right: both sides always think they will win, meaning somebody is always wrong.
I think there are two possibilities each time a war starts:
1) Both sides always think they will win because one of the two sides is irrational: basically, your argument.
2) Both sides always think they will win because both of them are rational but one of them is basing its judgement of its war prospects on imperfect information and the other is not: classic ‘asymmetric information’.
1) might operate in some cases, 2) might operate in others; there’s nothing beyond modelling difficulties to say that they aren’t both operating simultaneously a lot of the time, with some actors being rational, some irrational, and all with access to imperfect info.
Dan Hardie 12.09.04 at 8:51 pm
‘Two possibilities’ because there was a third but I’ve developed Doubts about it.
Giles 12.09.04 at 8:56 pm
Another problem with the rank dependency is the lack of dynamics – your utility isn’t just determined by the outcome, it’s also determined by the process of getting to the outcome. Thus over the period your utility might be higher if you are over-optimistic about a war which you then lose than if you are pessimistic during the whole war but then get a good outcome and win. In other words an SU approach, while not “mathematically” optimal, may be the best approach, which is why evolution has instilled it in us.
This is in part tied to the paper by Alloy and Abramson, who found that people who correctly evaluated the probabilities of outcomes (i.e. used EU) tended to be more depressed than those who adopted an SU approach.
Dan Hardie 12.09.04 at 9:33 pm
Giles: ‘Thus over the period your utility might be higher if you are over-optimistic about the war which you then lose, than if you are pessimistic during the whole war, but then get a good outcome and win.’
This is a joke, right? In the context of a *war* (including, you know, deaths, maimings, destruction of property, invasion of territory and other minor disutilities of that nature) I might be *better off* if my country loses a war but we all spent the preceding few years misguidedly *feeling happy* about winning it, than if we won the war but everyone had been a little bit pessimistic?
ogmb 12.09.04 at 9:34 pm
On irrationality, Lizardbreath is right. At least one party must be irrational. Now apply symmetry. The occurrence of a war implies that somebody is acting irrationally and if you’re the one starting the war, it’s probably you.
Overestimating future payoffs is not (economically) irrational. It’s overestimating future payoffs. It’s also not a “cognitive bias towards overestimating the benefits (…)”, since the opposite, not engaging in war that would have been winable, is much harder to observe by its quality of being counterfactual. To establish Quiggin-irrationality you have to go beyond counting the false positives, you have to show they outweigh the false negatives.
Matt Weiner 12.09.04 at 10:02 pm
If I spot you London sinking into the ocean (a decidedly unlikely event under even the more ridiculous global warming circumstances) you get to analyze the war from the unlikely circumstance of Saddam giving a nuclear weapon to terrorists who successfully explode it in New York City.
But then we also analyze the war from the somewhat less unlikely (ex ante) circumstance of the war angering Pakistani radicals to the point to which they assassinate Musharraf, take over the govt, give nuclear weapons to terrorists etc.
I think a point to be made here is that if we insist on concentrating on the worst possible outcome it will always be very very bad. (Do we know that there’s not a crazy god who will destroy the world if we do/do not follow a course of action. Yes we do, but never mind.) The kind of uncertainties at issue in global warming at least are such that we can say that irreversible consequences are more likely if we fail to combat global warming. (Even if we stipulate that flooding London is unlikely.) And I think it was pretty easy to tell ex ante that bad irreversible consequences were more likely if we went to war than if we didn’t.
Of course there’s a problem about how we define irreversibility, because no action is truly irreversible. I think that getting the US military stuck in Iraq for several years is somewhat irreversible. Certainly the deaths of people killed in the war are irreversible, but there are people alive today who would not be if we had not fought the war, and those deaths would also have been irreversible. So we need to look at a higher level.
Thomas 12.09.04 at 10:19 pm
Matt says “And I think it was pretty easy to tell ex ante that bad irreversible consequences were more likely if we went to war than if we didn’t.”
If it’s easy, why do we need John Q’s precautionary principle? Is this a rejection of the principle?
Ryan 12.09.04 at 10:29 pm
What about the moral case for acting instead of sitting idly by doing nothing, even though acting may cause some unintended consequences?
If a woman is getting beaten up across the street, and you decide to intervene, there is a chance that you will make the situation even worse and cause more harm to everybody. Should you walk on by to avoid that possibility? Or do you act?
Giles 12.09.04 at 10:34 pm
Dan, sure, war isn’t a great subject for discussing decision making under uncertainty, but it’s the subject that’s been chosen. A more palatable example of my point would be sport; I enjoy watching and betting on rugby – but I never bet on my own team. Why?
Say I get 5 utils if my team wins and 0 if it doesn’t. Suppose that watching the match gives me 2 extra utils if I feel optimistic that I’m going to win and 0 if I’m pessimistic. My payoffs are then PW 5, PL 0, OW 7, OL 2. Being optimistic is the better choice for utility purposes – I therefore choose to be optimistic about the chances of my team winning, and I think this is why a degree of optimism is bred into people’s genes and why SU is such a powerful idea.
However when it comes to betting, with optimism I lose money as I overweight my team’s chances of winning. I’d break even, or make money, if I followed EU. EU is therefore the right business approach, optimism the “natural” approach. EU makes me money, SU makes me happy.
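Giles’s arithmetic can be checked with a quick sketch. The probabilities here are invented for illustration – only the PW/PL/OW/OL payoffs come from the comment above:

```python
# Giles's watching/betting example, made explicit. Assumed numbers:
# P_WIN is an invented "true" probability; the utils come from the
# comment (PW 5, PL 0, OW 7, OL 2; optimism adds 2 utils of enjoyment).
P_WIN = 0.4          # assumed true probability my team wins
OPTIMISTIC_P = 0.7   # inflated probability an optimist assigns

def watching_utility(optimistic, win):
    base = 5 if win else 0
    return base + (2 if optimistic else 0)

def expected_watching_utility(optimistic, p_win=P_WIN):
    return (p_win * watching_utility(optimistic, True)
            + (1 - p_win) * watching_utility(optimistic, False))

def expected_bet_profit(p_believed, p_true=P_WIN):
    # An even-money one-unit bet on my own team; bet only if I believe
    # my team is more likely than not to win, otherwise abstain.
    if p_believed <= 0.5:
        return 0.0
    return p_true * 1 + (1 - p_true) * (-1)

# SU: being optimistic maximizes the fun of watching...
assert expected_watching_utility(True) > expected_watching_utility(False)
# ...but EU is the right business approach for betting.
assert expected_bet_profit(OPTIMISTIC_P) < 0 <= expected_bet_profit(P_WIN)
```

With these numbers the optimist gets more expected utility from watching but loses money on every bet, while the EU follower abstains and breaks even – the “EU makes me money, SU makes me happy” split.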
Now with the war, JQ seems to be arguing that people are naturally optimistic about war prospects when they should really adopt a business-like EU approach, the idea being that people will then tend to go to war less. But this overlooks the dynamics. Maybe being optimistic about war prospects is built into people for evolutionary reasons too, because it’s good for them in the long term. If optimistic people are more likely to win wars then optimistic people ultimately take over the world. More interestingly, if you then implement a system which makes you alone more pessimistic about your war prospects, that increases the chances that you’ll be attacked – i.e. it increases the chances of war.
Secondly, pessimism may change behavior; by 1943 the Third Reich, I think, knew they were going to lose the war – but this did not make them any less brutal – in fact the level of brutality increased once they knew they were going to lose.
So I’m not 100% certain that adopting a more rational approach is best. As someone above argued, a simple moral approach may be better.
Deb Frisch 12.09.04 at 11:06 pm
JQ: On irrationality, Lizardbreath is right. At least one party must be irrational.
DF: This is still up for grabs. I’d wager $100 that it’s pretty easy to come up with a game theoretic toy problem a la Becker and Posner where war is rational.
Two rational, ethical agents probably would never engage in war. But rational agents, without the assumption of ethical probably would.
JQ: Now apply symmetry.
DF: Huh?
JQ: The occurrence of a war implies that somebody is acting irrationally and if you’re the one starting the war, it’s probably you.
DF: Yikes. This is an oversimplification.
dsquared 12.09.04 at 11:19 pm
you get to analyze the war from the unlikely circumstance of Saddam giving a nuclear weapon to terrorists who successfully explode it in New York City.
The fact that this completely untrue scenario seemed to form a fairly substantial part of the calculation about going to war looks to me to be a pretty good example of the cognitive bias part of the incompleteness principle.
Thomas 12.09.04 at 11:55 pm
DD–I don’t think that’s right. If both courses of action (invasion and something else) were innovations (i.e., the status quo isn’t a choice), then estimates about the likelihood of Saddam giving a nuclear weapon to terrorists would be incomplete (because the likelihood was uncertain), and thus over-optimistic. It goes exactly the other way, it seems to me.
The only way to avoid that conclusion is to posit that we knew that this scenario was “completely untrue” (and thus certain). But we didn’t.
Sebastian Holsclaw 12.10.04 at 12:07 am
That is quite a dodge considering how often highly unlikely scenarios get high-profile play in the global warming arena.
The London underwater probability is certainly in the marginal/vanishing probability zone. The chance that Saddam, if left alone, would have eventually built a nuclear device is, I would say, excellent, and the chance that he would pass it off to a terrorist organization is certainly non-zero. You raised the London underwater possibility as part of the discussion.
You either know very little about global warming or are aware that the probability of your scenario is vanishingly small. I’ll assume that you weren’t making a point about completely ignorant blathering in reasoned calculations, so you knew it was vanishingly small.
So……other than the moral of taking swipes at the pro-war crowd, did you have a point about making choices based on incredibly low probability but really bad scenarios? Because at the moment it appears that you employ the precautionary principle for advancing leftist aims and some other principle when critiquing everybody else.
Russell L. Carter 12.10.04 at 12:12 am
“The only way to avoid that conclusion is to posit that we knew that this scenario was ‘completely untrue’ (and thus certain). But we didn’t.”
So, overflights plus super secret sensing spy satellites plus boots on the ground wielding geiger counters wasn’t enough?
I mean, Iraq is in a desert! It doesn’t even have cloud cover.
John Quiggin 12.10.04 at 12:36 am
“DF: Yikes. This is an oversimplification.”
Well, yes, but it was only a blog comment. I’ll try to spell it out, but my point is one I’ve made in the previous post.
If you are making your plans on the assumption that you are a rational optimiser while your (potential) opponent is a crazy monster, and you know that your opponent is doing likewise, should you conclude “that just proves what a crazy monster he is” or “maybe neither of us is as rational as we think”. I prefer the latter.
cac 12.10.04 at 2:11 am
As no one else seems to have, could I point out that, contrary to Dan Hardie’s suggestion, Blainey is still very much alive (or was two days ago, when he published an article on the Eureka Stockade in The Age)? Perhaps Mr Hardie is thinking of the late Manning Clark, who is both dead and much inferior as an historian.
dsquared 12.10.04 at 7:40 am
The chance that Saddam, if left alone, would have eventually built a nuclear device is, I would say, excellent
This is precisely why I didn’t recommend leaving him alone, and said so, in writing, repeatedly. I’m actually now quite disappointed because I thought you were a regular reader, Sebastian.
Global warming is not a “leftist” issue and I don’t really understand how anyone might think it could be. Political policies can’t affect the level of the ocean; this is a sociological fact which was proved by experiment by King Canute. I’m referring to London being underwater because I have a map in front of me which I cut out of the Standard, showing all the postcodes where it is now very difficult to get flood insurance because on latest modelling the existing Thames barrier won’t be able to save them in a few decades time. It doesn’t include my own house, but there are several billions worth of real estate in the cross-hatched area, which houses are presumably fucked whether they vote Labour or Conservative.
To recap, my policy projections were:
Problem: An underplanned and poorly resourced war might go really badly wrong.
DD solution: Don’t start a war until you’ve planned it properly.
Alternative solution: Start the war anyway because we might be lucky.
Problem: Saddam might blow us all up with a nuke.
DD solution: Put inspectors in place to make sure he doesn’t.
Alternative solution 1: Don’t bother because we might be lucky.
Problem: Global warming might flood us all.
DD solution: Do something about it.
Alternative solution: Keep on burning vast amounts of hydrocarbons because we might be lucky.
Problem: Reducing hydrocarbon consumption might plunge the world into recession.
DD response: Start off with a sensible small reduction like the Kyoto treaty.
Alternative response: Ban all hydrocarbons today, because we might be lucky.
You see that the common thread here is that the “DD response” usually involves taking precautions today against an identified future danger, while the “alternative response” tends to rely on assuming that unknown outcomes will be favourable.
Dan Hardie 12.10.04 at 1:06 pm
Yikes, Blainey is indeed alive. I think I got the impression he was dead from a ‘more in sorrow than anger’ style piece on him by Stuart Macintyre, which spoke of him throughout in the past tense. But yes, should have checked. He has written some excellent books, which everyone should read. I wasn’t thinking of Manning Clark, who is dead, was an appalling prose writer and a dreadful Marxist-Leninist and is frankly not taken very seriously outside Australia (if he is still taken seriously there).
Andrew Boucher 12.10.04 at 3:50 pm
Problem: A problem.
DD Solution: Solve the problem!
Alternative solution: Do nothing.
Yes, well if it were only that easy, and if it were only what we were talking about in the first place.
Deb Frisch 12.10.04 at 5:02 pm
Quiggin deems: On irrationality, Lizardbreath is right. At least one party must be irrational.
No proof has been offered for this conjecture. If we differentiate rationality (maximizing expected utility of one’s action) from ethics (including other people’s utility in your utility function), it is easy to construct a scenario where two rational agents engage in war.
The more selfish rational agent wants to acquire the land and other resources of the less selfish rational agent. It thinks it can win and is willing to sacrifice some soldiers and money to acquire the resources. So it attacks. The attacked agent reciprocates in self-defense. Voila, a war between two rational agents.
Earth to Quiggin: “At least one irrational agent” is not a necessary condition for war.
Sebastian Holsclaw 12.10.04 at 5:45 pm
Kyoto is a mild step? What were the mid-range estimates? $700 billion or so? For delaying global warming 1-3 years? The entire cost of Social Security in a year could go a long way in making things better for people in non-speculative ways.
Dan Hardie 12.10.04 at 7:15 pm
Re Kyoto, I’ve always wondered if the real benefits don’t include the fact that it will put in place, and test, a worldwide carbon-emissions caps system (and consequent measurement, enforcement and cap-trading mechanisms). Current economic theory does say that caps and trading schemes are the best way to reduce harmful emissions, but we don’t know if such a scheme can work on a massive, international basis across many different countries and legal systems, as opposed to within one legal system, like the EU or the US. Secondly, even if it can work, the chances are that there will have to be a lot of ‘learning by doing’ before it works anywhere near optimally. I suspect Kyoto, now that it is off the ground, may function mainly as a feasibility test. If it works, with amendments, then it can form the basis for a larger successor scheme; if it doesn’t work then we have to either design an entirely different international emission control system, or start the bargaining necessary to amend international law and strengthen international co-operation such that an international capping scheme is feasible.
This isn’t a rhetorical point: I’m genuinely interested as to whether this might prove to be the main result, intended or unintended, of Kyoto.
John Quiggin 12.10.04 at 8:40 pm
Deb, I’m using “rational” in the sense in which Posner uses it, which doesn’t allow for the kinds of responses you suggest.
This covers the asymmetric info point raised by Dan. The economist’s notion of rationality covers all this. Suppose I threaten my opponent with war unless she agrees to some demand, having calculated that it is her interest to acquiesce, but she refuses. Given common knowledge of rationality, I infer that she must have information I don’t have, and therefore revise my beliefs. With full common knowledge of rationality, our beliefs converge.
To restate, I don’t believe people are rational in Posner’s sense, I’m merely pointing out that his analysis makes no sense on his own assumptions.
Deb Frisch 12.10.04 at 9:06 pm
John,
All Posner said was “A rational decision to go to war should be based on a comparison of the costs and benefits (in the largest sense of these terms) to the nation.” He did not say that a nation should give equal weight to the well-being of the citizens of all nations.
Your claim that rational nations will never choose to go to war with each other needs to be defended, not merely stated repeatedly.
John Quiggin 12.10.04 at 11:15 pm
Dan, I entirely agree with your discussion of Kyoto.
Deb, if you look at dsquared’s comments they spell out the argument in more detail. It doesn’t much matter what the objective functions of the parties are, as long as they are rational in the way in which economists use these terms and which Posner needs for his argument.
To restate, going to war is Pareto-dominated by an agreement matching the final outcome, but without any of the intervening bloodshed and destruction. Rational actors, in the sense required by Posner, can always identify this outcome beforehand, and will therefore never go to war.
Reading this, I note a need to qualify my position a little. Actors who positively value death and destruction can rationally (in the Posner sense) go to war.
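The Pareto-dominance claim JQ and DD are invoking can be made concrete with a toy example. The stake, win probability and costs below are invented; the point holds for any positive fighting costs:

```python
# Toy version of "war is Pareto-dominated by a contract". Invented
# parameters: a divisible stake, a commonly known win probability,
# and positive fighting costs for both sides.
STAKE = 100.0        # resource being fought over
P_A_WINS = 0.6       # probability A wins, shared by both sides
COST_A, COST_B = 20.0, 20.0   # each side's expected cost of fighting

def war_payoffs():
    # Expected division of the stake if the war is actually fought.
    a = P_A_WINS * STAKE - COST_A
    b = (1 - P_A_WINS) * STAKE - COST_B
    return a, b

def contract_payoffs():
    # Agree up front to the expected war division and skip the fighting.
    return P_A_WINS * STAKE, (1 - P_A_WINS) * STAKE

war_a, war_b = war_payoffs()
deal_a, deal_b = contract_payoffs()
# Both sides strictly prefer the contract whenever costs are positive.
assert deal_a > war_a and deal_b > war_b
```

The dominance itself only needs costly fighting and a shared estimate of the outcome; what the stronger rationality conditions (common priors, common knowledge) buy is that both sides can actually identify this division in advance.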
Deb Frisch 12.10.04 at 11:40 pm
Dear John,
I’d been trying to avoid having to read d-squared’s comments (my time spent blogging is already > optimal). And I doubt (p is less than .05) that d^2 has proved that “two irrational parties are necessary for war” any better than you have (i.e., not at all).
But you’re the main man here, JQ. So I’ll check out d^2 and get back to you. If you could direct me to the particular post of the prolific d^2 that addresses my concern, i’d be most grateful.
Yours,
Deb
John Quiggin 12.11.04 at 2:02 am
Deb, as has been said several times, the proposition is “at least one irrational party is needed”. This really is standard stuff, which is why my original statement of the proposition was so terse.
Here’s DDs statement from the comments on my previous post. As you can see, he had no trouble recognising that I was stating a standard result and filling in the details.
OGMB, since wars by definition involve the diversion of productive resources into producing expensive pieces of capital equipment which are delivered to people who don’t want them and then explode, they can’t be positive or zero-sum events. Any war is Pareto dominated by a contract under which both parties agree to the outcome that the war would have produced without fighting.
Palooka 12.11.04 at 4:09 am
I’d like to see you apply this so-called principle to global warming. Wait, that would require that “wait and see” alternative.
John Quiggin 12.11.04 at 6:03 am
Palooka, read the linked paper, which addresses this question.
ogmb 12.11.04 at 11:29 am
Deb, it seems that JQ and DD are trying to invoke the Nash bargaining solution. I can’t really argue with that as it is in fact “standard stuff”, but it would have been easier if they’d just referred to it rather than the ongoing handwaving and poorly worded defenses. And no, the Nash bargaining solution doesn’t establish that rational actors don’t go to war. It only establishes that they don’t go to war if bargaining is an option. Also, JQ started out by criticizing Posner for not incorporating that the other side might be acting rationally. Now he claims that Posner’s “analysis makes no sense on his own assumptions”. But all we’ve seen from JQ and DD is that war doesn’t happen if both sides act rationally, which is something Posner doesn’t assume.
Deb Frisch 12.11.04 at 5:08 pm
“Any war is Pareto dominated by a contract under which both parties agree to the outcome that the war would have produced without fighting.”
How do you know what outcome the war will produce without fighting it?
And why do economists have so much faith in a theory of interpersonal relationships developed by a paranoid schizophrenic (John Nash)?
Jason 12.11.04 at 5:56 pm
Thomas Schelling, ‘The Strategy of Conflict’
A great book, and at one point he describes what seem to me situations where rational people can commit to war.
For instance, it is sometimes rational to bind yourself to your statements/threats, so that the other party will be forced to concede because, yes, you will destroy the entire planet if you don’t get the last piece of Thanksgiving pie. Problems can arise when two parties bind themselves in incompatible ways (for instance, due to communication channel errors/delays).
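Schelling’s commitment idea can be sketched as a tiny sequential game solved by backward induction; the payoffs below are my own invention:

```python
# Schelling-style commitment as a two-stage game, invented payoffs.
# Player 1 threatens mutual ruin unless Player 2 concedes the prize.
PAYOFFS = {                      # (player 1, player 2)
    "concede":          (10, 0),    # P2 gives in
    "resist_backdown":  (0, 10),    # P2 resists, P1's threat was empty
    "resist_carry_out": (-50, -50), # P2 resists, P1 carries out the threat
}

def outcome(p1_committed):
    # Backward induction: if resisted, an uncommitted P1 prefers backing
    # down (0 > -50); a committed P1 has no choice but to carry it out.
    p1_response = "resist_carry_out" if p1_committed else "resist_backdown"
    # P2 concedes only if conceding beats what resisting would bring.
    if PAYOFFS["concede"][1] >= PAYOFFS[p1_response][1]:
        return "concede"
    return p1_response

assert outcome(False) == "resist_backdown"  # empty threat gets called
assert outcome(True) == "concede"           # binding commitment works
```

The binding move is valuable precisely because it removes your own best response – which is also why two incompatible commitments can lock rational players into the ruinous outcome.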
Deb Frisch 12.11.04 at 6:45 pm
Yup, Jason, it’s easy to find examples of game theorists who disagree with JQ’s assertion that rational agents do not engage in war.
Here are a few more:
Although Von Neumann appreciated Game Theory’s applications to economics, he was most interested in applying his methods to politics and warfare… He used his methods to model the Cold War interaction between the U.S. and the USSR, viewing them as two players in a zero-sum game.
http://cse.stanford.edu/classes/sophomore-college/projects-98/game-theory/neumann.html
There are two Nash equilibria in this final move, Mutual Doomsday and Mutual Backdown.
http://www.rh.edu/~stodder/BE/IntroGameT.htm
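The two equilibria mentioned on that page can be reproduced with a generic pure-strategy Nash finder. The payoffs below are my own assumptions, chosen so that once the other side triggers the device your own choice no longer matters – which is what makes Mutual Doomsday a (weak) equilibrium:

```python
from itertools import product

# Pure-strategy Nash equilibria of an assumed "doomsday device" game.
# Payoffs are invented: if either side triggers the device everyone
# gets -10, so against a triggering opponent your own move is moot.
STRATS = ["Doomsday", "BackDown"]
PAYOFF = {   # (row player's payoff, column player's payoff)
    ("Doomsday", "Doomsday"): (-10, -10),
    ("Doomsday", "BackDown"): (-10, -10),
    ("BackDown", "Doomsday"): (-10, -10),
    ("BackDown", "BackDown"): (0, 0),
}

def pure_nash_equilibria(payoff, strats):
    eq = []
    for r, c in product(strats, repeat=2):
        row_best = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in strats)
        col_best = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in strats)
        if row_best and col_best:
            eq.append((r, c))
    return eq

# Exactly the two equilibria from the quoted page:
# → [('Doomsday', 'Doomsday'), ('BackDown', 'BackDown')]
equilibria = pure_nash_equilibria(PAYOFF, STRATS)
```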
I would love to see a reference for JQ’s claim that the proposition “at least one irrational party is needed” is “standard stuff.”
John Quiggin 12.11.04 at 7:31 pm
Looking at this statement from Deb
“How do you know what outcome the war will produce without fighting it?”
I think we are arguing at cross purposes here.
If you go back to Posner’s original post, you’ll see that it depends on the assumption that the outcome of any strategy can be predicted in advance. This is part of the assumption of rationality standard in game theory.
I make the point that
(i) people aren’t as fully informed as this or rational in the way assumed by Posner
(ii) if they were, they wouldn’t fight wars
and conclude that using game theory to justify preventive war is a silly idea.
Deb seems to agree with the conclusion, but also wants to challenge (ii) on a variety of grounds that aren’t clear to me.
So Deb, if you think (ii) is wrong in a way that validates Posner’s argument please say so. Otherwise, insert the necessary technical conditions (common knowledge of rationality, Bayesian common priors, unbounded computational capacity, ability to monitor commitments and so on) that make (ii) formally correct and observe that Posner is implicitly relying on all of these conditions.
Kevin Donoghue 12.11.04 at 10:12 pm
“If you go back to Posner’s original post, you’ll see that it depends on the assumption that the outcome of any strategy can be predicted in advance. This is part of the assumption of rationality standard in game theory.”
I know nothing about Posner. Since he gave no clue where his assumptions came from, I assumed he plucked them out of the air (to put it politely). Perhaps the difference between JQ and Deb is that the former gives Posner undue credit. (Maybe the SSRN paper justifies JQ’s faith in Posner but I don’t have access to that.)
Deb Frisch 12.11.04 at 11:58 pm
John,
I am really not sure why this is so difficult. You made a very specific claim:
“According to game theory, two rational agents will never choose war.”
I disagreed. I provided a counterexample – a plausible scenario in which Nation A (e.g., USA) would threaten to wage war against Nation B (e.g., Canada) unless Nation B did x, y and z, and Nation B said “Bring it on” and Nation A said “You got it.”
Instead of responding to my counterexample to your conjecture, you said:
“I think we are arguing at cross-purposes.”
Sometimes you wave your hands in the direction of “what everyone knows is true about game theory.” Other times you wave your hands in the direction of “Posner’s definition of rationality.” When that fails, you resort to deflection and suggest our disagreement is illusory.
You seem to think that Posner and game theorists in general believe that people are omniscient. You seem to think that game theorists think that the payoffs in the cooperation/defect matrix can be specified with certainty.
In a prisoner’s dilemma, this might be true. The warden says “If you say X and he says Y, you go to prison for 4 years and he goes to prison for 8; if you say Y and he says Y, you both go to prison for 2 years, etc.” In the real world, the consequences/payoffs are not known with certainty.
Even though I think game theory is kooky, I don’t think it’s as kooky as you do. I think that game theory would say that the consequences associated with waging war are uncertain. The consequences are very different depending on whether you “win” or “lose”, and this is up for grabs unless and until you actually wage war.
I provided a blurb about von Neumann showing that like me, he thought that a rational agent might choose to go to war.
I agree with you that in general, nations that choose to attack others are almost always irrational. I think Sam’s nuts, Saddam’s kooky and Osama’s certifiable. It’s an empirical fact that most people who wage war are irrational.
But the British citizens living in North America in the mid-1700s who chose to wage war against their government were not irrational.
The fact that most warmongering nations are ruled by lunatics is an empirical fact, not an analytic truth.
ogmb 12.12.04 at 12:11 am
This is part of the assumption of rationality standard in game theory. I (…) conclude that using game theory to justify preventive war is a silly idea.
But Posner doesn’t even use game theory. He sets up a simple decision-making-under-uncertainty example in which the adversary acts as a force of nature. He then offers a sensitivity analysis over the variable “imminence (= probability) of attack” and concludes that under carefully chosen parameters the choice alternative “preventive attack” becomes expected-payoff-maximizing at an imminence level lower than one. This is crude stuff, but his use of the word rationality is justified here, because in this context it doesn’t mean more than choosing the action with the best expected payoff given one’s expectations.
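A minimal sketch of the kind of sensitivity analysis ogmb describes, with invented costs (nothing here comes from Posner’s actual numbers):

```python
# Posner-style sensitivity analysis over "imminence" (probability of
# being attacked if we wait). All costs are invented for illustration.
COST_PREVENTIVE = 100.0   # certain cost of striking first
COST_ATTACKED = 500.0     # cost if we wait and the attack comes

def expected_cost_of_waiting(p_attack):
    return p_attack * COST_ATTACKED   # waiting costs nothing otherwise

def preventive_strike_preferred(p_attack):
    # Strike when its certain cost beats the expected cost of waiting.
    return COST_PREVENTIVE < expected_cost_of_waiting(p_attack)

# The break-even imminence level: 100/500 = 0.2, well short of
# certainty -- the "lower than one" result.
threshold = COST_PREVENTIVE / COST_ATTACKED
```

The threshold is just the ratio of the certain cost of striking to the cost of being attacked, which is why carefully chosen parameters can push it as far below one as you like.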
Btw, Posner posted an update on their blog where he responds to some of the criticism. Nothing relevant to this discussion though.
John Quiggin 12.12.04 at 12:53 am
I’m obviously doing something wrong here, so I’ll just restate that I’m disagreeing with Posner, not agreeing with him and leave it at that.
John Quiggin 12.12.04 at 2:47 am
Sorry for getting frustrated. I’ll try one more time. There are some standard game-theoretic conditions under which the proposition “No two rational players will choose a Pareto-dominated outcome” is true. For example, these include full Bayesian rationality (not omniscience – the argument copes fine with uncertainty as long as the players are EU maximisers), unbounded reasoning capacity, and free communication between players.
I claim that all of these conditions are either required for Posner’s example to work at all (for example, full Bayesian rationality is needed to work out the numbers in his toy example), implied by symmetry when we take account of the existence of another player (common knowledge of rationality), or crucial for the normative relevance of Posner’s claims (for example, it’s true that communication problems like those in the Prisoner’s Dilemma can lead to a Pareto-dominated Nash equilibrium, but the obvious, and standard, conclusion is to improve communications, not to fight wars).
If you accept all the above, then the occurrence of wars is evidence that the premises necessary for Posner’s argument to work properly are not satisfied, something we know anyway from experimental evidence and introspection.
So, I reach the conclusion that it is unwise to apply Posner-style reasoning to wars.
Deb Frisch 12.12.04 at 10:26 pm
JQ: There are some standard game theoretic conditions under which the proposition “No two rational players will choose a Pareto-dominated outcome” is true. For example, these include full Bayesian rationality (not omniscience – the argument copes fine with uncertainty as long as the players are EU maximisers), unbounded reasoning capacity, and free communication between players.
DF: Great. Now we’re getting somewhere. I’m 98.235% sure that we agree that two omniscient rational agents would never fight a war, because they’d just make a contract for the end state and skip the blood and gore. You can fancy it up and call it a Pareto optimal Nash equilibrium, but that would be kind of silly, since it’s an utterly trivial “result in game theory,” given the impossible, implausible assumption. Glad to see you’ve abandoned this line of reasoning.
Now you’ve redefined the claim “At least one irrational agent is necessary for war” to mean “Two rational Bayesian expected utility maximizing decision makers will never engage in war.”
This is a more interesting hypothesis than the one about omniscient agents, but it still seems to me it’s blatantly false. My USA/Canada war example assumed rational Bayesian expected utility maximizers. I think it is easy to construct scenarios where two subjective expected utility maximizers (as you know, this implies they are Bayesians) choose to engage in war.
The question on the table is:
Will SEU maximizers ever choose to engage in war with each other?
I said yes and I provided an example. JQ says no, and provides more handwaving. He says “the argument copes fine with uncertainty as long as the players are EU maximisers.” This is a tad sketchy – what exactly is the argument for why Bayesians won’t engage in war?
I am 97.59% sure that if you asked mutually acknowledged experts in Bayesianism whether the new version of Quiggin’s conjecture is true (e.g., Shafer, Edwards, von Winterfeldt, Clemen, Hacking (is he still alive?), etc.), 100% would say two Bayesians might choose to duke it out, even if we allow the false assumption of “unbounded reasoning capacity.”
ogmb 12.13.04 at 7:34 pm
Shorter JQ: If I assume away all obstacles to Pareto optimality, Pareto optimal outcomes are inevitable.
Katz 12.13.04 at 10:07 pm
[This post refers to a discussion on JQ’s blogsite. Sorry about the discontinuity and my sympathies regarding JQ’s problems with the lunacies and criminalities of cyberspace.]
JQ wrote:
“If the expected benefits of investment are large enough, they outweigh the gain from waiting.”
According to the model proposed, if the expected benefits are large enough, I would have thought that the aggressor will decide for war at period (n).
I’m trying to think of conditions that might improve the expected rewards of war within realistic time constraints.
Some possibilities:
1. Discovery of new resources.
2. Development of new technologies that cause a revaluation of already known resources. (Mesopotamia, for example, was just a bunch of sand in the eyes of Westerners until the discovery of uses for oil.)
3. The lure of a tied market for domestic industries. (There was a huge literature on this aspect of imperialism spearheaded by Lenin and Hobson.)
4. Domestic political benefits arising from acting on demonisation of the enemy regime or the politico/cultural arrangements promoted and/or imposed by the enemy regime. (The Cold War and its episodic hot spots, such as the Vietnam War, conform to this pattern.)
But with the exception of the lure of tied markets, it seems to me that these benefits develop too slowly to be encompassed in a coherent decision-making sequence. (This question of the persistence of memory and the framing of a coherent purpose may need to be added to any robust model for decision-making of the type that leads countries to form foreign policies, including starting wars.)
Deb Frisch 12.14.04 at 4:50 pm
ogmb: Shorter JQ: If I assume away all obstacles to Pareto optimality, Pareto optimal outcomes are inevitable.
DF: Nice – but it’s actually worse than that. If I assume away all obstacles to Pareto optimality, Pareto optimal outcomes are probable.
JQ1:
Theorem: Two omniscient agents will never fight a war.
Proof: A rational agent will never choose a Pareto-dominated option. War is dominated by the would-be warriors divvying up resources (e.g., land, water, horses, women) in a way that corresponds to the final outcome of the war, without incurring the costs of fighting.
DF: How do you know the outcome of the war unless you fight the war? JQ’s proof implicitly assumes omniscience. No one, including the most delusional game theorists, assumes that humans are omniscient.
JQ2:
“There are some standard game theoretic conditions under which the proposition ‘No two rational players will choose a Pareto-dominated outcome’ is true. For example, these include full Bayesian rationality (not omniscience – the argument copes fine with uncertainty as long as the players are EU maximisers), unbounded reasoning capacity, and free communication between players.”
Theorem: Two agents who:
a. are SEU maximizers (DMs that maximize SEU automatically will be Bayesians)
b. have “unbounded reasoning capacity”
and
c. have free communication with each other
will never choose war.
Proof: Not provided.
Again, there’s a blatantly false assumption. We’ve dropped “omniscient” but added “unbounded reasoning capacity.”
But even if we grant JQ2’s premise b, it’s possible to construct a scenario where these two DMs would choose war. That is, in JQ1, a blatantly false assumption (omniscient) was sufficient to prove the theorem. In JQ2, the bfa (unbounded reasoning capacity) is not sufficient to prove the theorem.
A Bayesian with unbounded reasoning capacity would have the capacity to assign coherent probabilities to states of the world in a way that reflects all of the evidence available.
Let’s imagine the two adversaries occupy contiguous pieces of land. Group A wants more land. They are too crowded, have to ration water and food, etc. Although the A people have an ethical system that prohibits one A from killing another A, the system does not apply to B’s. The B’s are perceived to be subhuman because the B-folks worship the sun and A-folks worship an alleged guy in the sky.
So the A folks decide they will attack the B folks, kill them all and take their land. Since there are 10 times as many A folks as B folks, they think the chance of success is high.
So A folks say p(A win a war against B)=.8
They rate the utility of the status quo as 0. Let’s say U(win)=10, U(loss)=−10. [If they win, everyone’s better off. If they lose, some people are worse off (the dead soldiers) and everyone else is the same.]
So EUwar = 0.8×10 + 0.2×(−10) = 8−2 = 6
EUnowar = 0
A prefers war.
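[Editorial aside: Deb’s arithmetic can be sketched directly. Note that the 8−2=6 step implies U(loss) = −10.]

```python
# Deb Frisch's numbers for group A, utilities measured against the status quo.
# The 8 - 2 = 6 step implies U(loss) = -10.
p_win = 0.8
u_win, u_loss, u_status_quo = 10, -10, 0

eu_war = p_win * u_win + (1 - p_win) * u_loss   # 0.8*10 + 0.2*(-10) = 8 - 2
eu_no_war = u_status_quo

assert abs(eu_war - 6) < 1e-9
assert eu_war > eu_no_war   # so A prefers war
```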
It’s tempting to say that if A and B are both Bayesians, with unbounded reasoning capacity and the ability to communicate with each other, the B folks would also say p(A win a war against B)=.8.
And it’s tempting to say if EUwar>EUnowar for A, then EUnowar>EUwar for B so B would surrender once A showed it was serious about burning some gunpowder.
I think both assumptions are false. There is no reason that A and B need to assign the same probabilities to p(A will win the war). Any “facts” that A tells B are not 100% trustworthy. Ditto for any “facts” that B tells A. So even if A and B “freely communicate,” they will not converge on a single probability that A would win a war.
Also, the utilities are different for B. If the status quo = 0, U(losing) approaches negative infinity and U(win) = 0. No matter how low the probability of winning, a tiny chance of 0 is preferable to a huge chance of negative infinity. [This has the same flavor as Pascal’s wager.]
So EUwar>EUnowar for A.
And though B preferred no war, once A aggresses, EUwar>EUnowar for B too.
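[Editorial aside: a minimal sketch of this asymmetry. All numbers except A’s are hypothetical, and a large negative constant stands in for B’s “negative infinity”.]

```python
# Sketch: divergent probabilities and utilities can make war the
# SEU-maximizing choice for both sides. All numbers except A's are
# hypothetical; a large negative constant stands in for "negative infinity".
def eu(p_win, u_win, u_loss):
    return p_win * u_win + (1 - p_win) * u_loss

# A's view: likely to win, so war beats the status quo (utility 0).
eu_war_A = eu(p_win=0.8, u_win=10, u_loss=-10)

# B's view once attacked: "no war" now means surrender, i.e. certain
# catastrophe, while winning merely preserves the status quo.
CATASTROPHE = -1_000_000
eu_war_B = eu(p_win=0.1, u_win=0, u_loss=CATASTROPHE)
eu_surrender_B = CATASTROPHE

assert eu_war_A > 0                 # A starts the war
assert eu_war_B > eu_surrender_B    # B fights rather than surrenders
```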
I think JQ wants it to be true that rationality is sufficient to avoid war. I wish it were true also. But I don’t think that it is.