I’ve never understood a lot of the attraction behind game theory. In particular, I’ve never heard a convincing argument for why Nash equilibria should be considered especially interesting. The only argument I know of for choosing your side of a Nash equilibrium in a one-shot game involves assuming, while deciding what to do, that the other guy knows what decision you will make. This doesn’t even make sense as an idealisation. There’s a better chance of defending the importance of Nash equilibria in repeated games, and I think this is what evolutionary game theorists make a living from. But even there it doesn’t make a lot of sense. In the most famous game of all, Prisoner’s Dilemma, we know that the best strategy in repeated games is not to choose the equilibrium option, but instead to uphold mutual cooperation for as long as possible.
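A toy illustration of the repeated-game point (the payoff numbers and strategies are my own stock example, not anything from the discussion below): over 100 rounds, two tit-for-tat players who sustain mutual cooperation do far better than two players who stick to the one-shot equilibrium of mutual defection.

```python
# Toy repeated Prisoner's Dilemma. Standard illustrative payoffs:
# both cooperate 3, both defect 1, sucker 0, temptation 5.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat1, strat2, rounds=100):
    """Run a repeated game; each strategy sees the opponent's last move."""
    h1 = h2 = None
    s1 = s2 = 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)
        p1, p2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + p1, s2 + p2
        h1, h2 = m1, m2
    return s1, s2

tit_for_tat = lambda last: "C" if last in (None, "C") else "D"
always_defect = lambda last: "D"

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): one-shot equilibrium play
```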
The only time Nash equilibria even look like being important is in repeated zero-sum games. In that case I can almost understand the argument for choosing an equilibrium option. (At least, I can see why that’s a not altogether ridiculous heuristic.) One of the many benefits of the existence of professional sports is that we get a large sample of repeated zero-sum games. And in one relatively easy to model game, penalty kicks, it turns out players really do act like they are playing their side of the equilibrium position, even in surprising ways.
“Testing Mixed-Strategy Equilibria When Players Are Heterogeneous: The Case of Penalty Kicks in Soccer” (P.-A. Chiappori, S. Levitt, T. Groseclose). (paper, tables) (Hat tip: Tangotiger)
Some of you will have seen this before, because it was published in American Economic Review, but I think it will be news to enough people to post here. The results are interesting, but mostly I’m just jealous that those guys got to spend research time talking to footballers and watching game video. I haven’t heard any work that sounded less like research since I heard about that UC Davis prof whose research consists in part of making porn movies.
Like you, but maybe for different reasons, I’m sceptical about Nash equilibrium concepts and therefore about the scope of game theory. The big problem is that in most real-world applications there is no clearly defined set of strategies that can be adopted, which means that the modeller has to guess what the strategies are before a Nash equilibrium can even be defined.
The big exception is that of formalised games like poker and chess, where there really is a fixed set of moves. Sports generally aren’t like this, but some situations, including penalty kicks, are pretty close - there are only four or five options on each side.
But then, in zero-sum games, you don’t generally need Nash equilibrium - the maximin concept is more robust.
I have a paper on this which I’ll try to dig out and cite in a post - given my experience with critical pieces, I don’t expect to see it in the American Economic Review any time soon.
Count me as another supporter of the question “what is so interesting about Nash equilibria?”
The part of game theory research I have found most appalling (while dull) in recent years is the computer science research into the field.
Lo and behold, what have they found? In many “games”, i.e. scenarios they map to games, the equilibrium position isn’t the best possible strategy—and people don’t play it. What do they take away from this? That they need to investigate why humans would behave so irrationally!—not that, perhaps, their game-theoretic notions are incorrect.
Brian: “I’ve never understood a lot of the attraction behind game theory. In particular, I’ve never heard a convincing argument for why Nash equilibria should be considered especially interesting.”
John: “Like you, but maybe for different reasons, I’m sceptical about Nash equilibrium concepts and therefore about the scope of game theory.”
Two quick thoughts:
1. game theory isn’t exhausted by the Nash equilibrium concept: there are other important solution concepts for both noncooperative and cooperative games.
2. Nash is remembered not merely for his equilibrium concept, but for his solution to the bargaining problem, and as Martin Osborne has so quaintly put it, “the only connection between the Nash solution and the notion of Nash equilibrium … is John Nash.”
Being (justifiably) sceptical of the Nash equilibrium concept as an explanatory tool is not, then, obviously equivalent to being (justifiably) sceptical about game theory.
You read Primer?!?
Phil Mirowski makes the excellent point that the Nash equilibrium concept is a pretty good way of modelling the decision process of a paranoid schizophrenic, and that given this, more attention should probably be paid to the fact that Nash was one.
Andrew, I think Primer is one of the best procrastination sites on the web. And I don’t know how I would have survived last October without Sox Therapy.
Loren, I agree there’s much more to game theory than just what Nash brought in, and you’re right that some of that can be useful in various ways. In particular, I think some of those concepts can be descriptively useful. But I don’t think they’re of much prescriptive use for a couple of reasons.
First, most of the more sophisticated concepts are still equilibrium concepts (either Nash plus epicycles or some other equilibrium concept) and the normative import of such concepts is unclear at best.
Second, it’s very hard to find an argument that decision problems involving other people should be treated differently to decision problems involving uncertainty generally. I think it’s impossible to justify any game theoretic advice unless it can be shown to also follow from a plausible decision theory. Having said that, Nash equilibrium can be a useful heuristic in trying to solve a decision problem. And decision problems when other people’s reactions to your decisions are at issue can be very hard. Often the best we can do (in practice or even in theory) is use heuristics.
But still, in some of these cases even the simplest game theoretic advice - avoid iteratively dominated strategies - can be very bad advice, as in Centipede or the Iterated Prisoner’s Dilemma. So I’m still a little sceptical about the value of such concepts.
In the most famous game of all, Prisoner’s Dilemma, we know that the best strategy in repeated games is not to choose the equilibrium option, but instead to uphold mutual cooperation for as long as possible.
Yes, “we know” the best strategy, and organizations attempt to enforce it (e.g. the Mafia code of silence), but the actual behavior of thousands and thousands of criminals bears out that the Prisoner’s Dilemma is pretty accurate about how most people will act, in spite of the “best” solution, even in repeated games (since almost all criminals are repeat players in the system).
On the other hand, one of the most interesting things is how situations that appear to be the same are not, and how that affects human reactions.
For example, if one has two opera tickets worth fifty dollars each and two hundred dollars in their pocket and are on a date to the opera, if the tickets are lost or stolen, one will probably not replace them. If one has three hundred dollars instead, and loses a hundred of it, they will probably buy the tickets anyway. Dramatically different responses to the same loss.
Or consider walking out the door of your local grocery store. If you are normal, and you win an unforeseen prize for being the “xyz” customer and are offered a “double or nothing” bet, you will probably turn it down.
If instead, walking out the door, you discover that you had not set your parking brake and had caused the same amount of damage (in dollars) to someone’s car as the prize would have been worth, you are more likely than not to take the double or nothing bet.
That is, the economic change is the same, but when there is a chance to avoid a loss you will take the risk, while when it endangers a profit already found you won’t (think of it as risking the bird in the hand for two in the bush vs. taking a dangerous jump to avoid being bitten).
Interesting stuff.
Stephen
beginning blogger
http://ethesis7.blogspot.com/
“it’s very hard to find an argument that decision problems involving other people should be treated differently to decision problems involving uncertainty generally.”
Here is one, I think. Perhaps you could comment.
I assign a case study in my course that involves bidding on a ship. In the case description, it gives the decision-maker’s probability assessment of how high the other bidders will bid.
The solution to the case as published goes something like the following.
- The ship is worth about $12 million to me.
- If I bid $8 million, then I calculate a 95% chance of winning, based on my earlier probability assessments.
- Therefore, a bid of $8 million nets me a good “profit,” and, given all the info in the case, it gives the maximum profit.
This is supposedly the solution to the case.
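For concreteness, the published calculation can be sketched like this (the win probabilities are invented, except that a bid of $8 million is given the 95% chance mentioned above):

```python
# Hypothetical sketch of the case's expected-profit calculation.
# All numbers are illustrative, keyed to the 95%-at-$8M figure above.

VALUE = 12.0  # ship's value to me, in $ millions

# Assumed assessment: probability my bid beats the highest rival bid
p_win = {6.0: 0.30, 7.0: 0.60, 8.0: 0.95, 9.0: 0.97, 10.0: 0.99}

def expected_profit(bid):
    """Expected profit = P(win) * (value - bid); losing nets zero."""
    return p_win[bid] * (VALUE - bid)

best = max(p_win, key=expected_profit)
for bid in sorted(p_win):
    print(f"bid ${bid}M -> expected profit ${expected_profit(bid):.2f}M")
print(f"best bid: ${best}M")
```

With these numbers the $8 million bid maximises expected profit, which is exactly the kind of conclusion Bill goes on to question.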
However, I think the solution actually is “my assessment of the other bids is probably wrong; if I can so easily make money, why aren’t the other bidders doing the same analysis and bidding higher?
“Perhaps I have a good reason for believing my own assessments: the other bidders are stupid, they value the ship less than I, are more risk averse, etc.
“But if I don’t, maybe the following is a good idea. I will do the analysis again, this time assuming that the other bidders are doing the same kind of calculations as I am.”
Does this lead to standard game-theoretic results?
Bill, the game theory tools might help here, but they are more likely to overcorrect. Surely there is some positive probability that you have different values to the other players, or have more information than the other players, or have simply outsmarted the other players. If so, you don’t want to be caught using tools that assume these things are impossible.
It seems the best thing to do here is simply conditionalise on the evidence that you seem to have been given a large windfall. If the posterior probability is that this is almost certainly illusory, don’t bid; if it looks real, then bid. But you can do all this without leaving old-fashioned decision theory.
Of course, as Daniel and others have pointed out here, this kind of reasoning might break down when there are too many iterations involved. I’m inclined to say in those cases there are no good solutions, not that we need to go beyond decision theory to find the solution. Sometimes it really does all come down to animal spirits.
Bill, this is the famous “winner’s curse”.
More goal-directed research.
“Bill, this is the famous ‘winner’s curse’.”
I think that is a separate issue.
My understanding is that, in the winner’s curse, the value of the item is uncertain, and different bidders have different information about it. The bidder who most overestimates the item’s value is the most likely to win.
This isn’t what happened in this case study; the ship’s value was uncertain, but no bidder had more information about the value than any other. (This is just how the case was written; no reason it couldn’t be written otherwise).
The decision-maker is underestimating the other bids, not overestimating the value of the ship. So this doesn’t seem like a winner’s curse situation (more sort of “loser’s stupidity” :-)
Brian,
My understanding of decision theory is that it never recommends a mixed strategy.
Would you then never suggest a mixed strategy to someone in a one-time game? What about a multiple-time game (or whatever it is called)?
I flip back and forth on this issue, and would like to hear your views.
In a one-time game I wouldn’t think randomisation (i.e. using a mixed strategy) is ever particularly valuable. If it’s very important that other people not be able to predict what you do, and you think (for whatever reason) that the only way to ensure this is to randomise, then maybe it’s worth it. But I don’t think that’s the normal case; I think the normal case is one where either there is a best strategy, or nothing, not even randomisation, will lead you to an optimal solution.
In zero sum games (the topic at hand), mixed-strategy equilibria are the norm rather than the exception. Since the other side wants to hurt you, it’s normally good to deprive them of information.
The example of penalty kicks illustrates this. Someone who always went for the top right corner, no matter how well they kicked, would not score many goals.
I should qualify this a bit. There are plenty of zero-sum situations where either one side has a forced win, or there is an obvious and symmetrical dominant solution. But these are not generally considered interesting by game theorists.
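To make the penalty-kick point concrete, here is a 2x2 zero-sum version of the game solved for its mixed-strategy equilibrium. The scoring probabilities are invented for illustration; they are not the paper’s estimates.

```python
# Minimal sketch: mixed-strategy equilibrium of a 2x2 zero-sum
# penalty-kick game, via the usual indifference conditions.
# Rows: kicker aims Left/Right. Cols: goalie dives Left/Right.
# Entries: probability the kick scores (illustrative numbers).
G = [[0.58, 0.95],   # kicker aims Left
     [0.93, 0.70]]   # kicker aims Right

# Kicker mixes p on Left so the goalie is indifferent between diving L/R:
#   p*G[0][0] + (1-p)*G[1][0] = p*G[0][1] + (1-p)*G[1][1]
p = (G[1][0] - G[1][1]) / (G[1][0] - G[1][1] + G[0][1] - G[0][0])
# Goalie mixes q on Left so the kicker is indifferent between aiming L/R:
q = (G[0][1] - G[1][1]) / (G[0][1] - G[1][1] + G[1][0] - G[0][0])

value = q * G[0][0] + (1 - q) * G[0][1]  # equilibrium scoring rate
print(f"kicker aims Left {p:.1%}, goalie dives Left {q:.1%}")
print(f"equilibrium scoring probability: {value:.1%}")
```

Note that at equilibrium the kicker scores at the same rate whichever side he aims, which is the testable prediction the paper checks against real kicks.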
In the most famous game of all, Prisoner’s Dilemma, we know that the best strategy in repeated games is not to choose the equilibrium option, but instead to uphold mutual cooperation for as long as possible.
I don’t get the point here. I would say that there is no “best strategy” in a repeated prisoners’ dilemma. Certainly there is no dominant strategy. Meanwhile, “upholding mutual cooperation for as long as possible” often is a Nash equilibrium in a repeated prisoners’ dilemma. (We should be more precise about the exact form of the game and of the meaning of ‘as long as possible’, but let’s set that aside.)
Brian and Daniel,
“Of course, as Daniel and others have pointed out here, this kind of reasoning might break down when there are too many iterations involved.”
When I studied game theory, we didn’t do any iteration. I am quite confused by Daniel’s and your interpretations of game theory as somehow involving them.
The idea was always to simultaneously solve for both players’ strategies, not to start with one person’s strategy, see how the other would react, then re-react, etc. and see if it converges.
As an analogy, imagine we have the following two equations:
y=2x
x=y-5
We solve these both simultaneously to get the solution x=5, y=10; we don’t say:
1. Guess x=0
2. Then y=0
3. But then x=-5
4. And then y=-10
5. Then x=-15
6. Then y=-30
etc.
Note that this doesn’t “converge,” but it doesn’t mean that there is no answer.
Same goes for game-theory: you don’t start with player 1’s strategy, then see how player 2 would react, etc. You simultaneously solve for both.
So I don’t see how you interpret game theory as being determined by iterations. Can you (or Daniel) explain it a little more?
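Bill’s analogy runs directly in code: the simultaneous solution exists even though the naive back-and-forth iteration diverges.

```python
# The commenter's analogy: solving y = 2x and x = y - 5
# simultaneously versus by naive alternating iteration.

# Simultaneous solution by substitution: x = 2x - 5  =>  x = 5, y = 10
x = 5.0
y = 2 * x
assert x == y - 5  # both equations hold at once

# Naive iteration: start from a guess and alternate the two equations.
# As the comment says, this diverges, but that does not mean the
# system has no solution.
gx = 0.0
for _ in range(6):
    gy = 2 * gx       # apply y = 2x
    gx = gy - 5       # apply x = y - 5
    print(gx, gy)
print("iteration diverges; the simultaneous solution is x=5, y=10")
```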
First, game theory as descriptive theory. Second, game theory as normative theory. Third, request for counter-example.
1.
As a descriptive theory, one that predicts rather than prescribes action, I thought game theory served the same purpose as the rationality assumption in economics.
Rationality: In individual decisions, people act as if they have simple desires and always choose the best way to satisfy those desires.
Game theory: In, say, two-person zero-sum games, people act as above, plus (if the two players are you and I)
G1. I know what you are going to do before I choose what to do, and
G2. You know what I am going to do before you choose what to do.
I find it amazing that game theory works at all, since G1 and G2 seem to contradict each other. The fact that solutions exist (the equilibrium solutions) I find quite beautiful.
Both rationality assumptions and game theoretic assumptions are patently ridiculous, but they seem to provide a good starting point. If we know more about the people involved, then we should incorporate that information into the analysis (as we do in behavioral finance). However, if we don’t have information, these seem to provide a good “baseline.”
It is similar to “taking a limit” in mathematics. G1 and G2 represent both of us being “infinitely clever” players; if we are both quite clever, and approximately equally clever, perhaps the assumptions give a good guess as to what we would do.
The limit will sometimes not exist; this doesn’t mean there is “no possible solution”, it just means that you can’t use G1 and G2. We need a better model of behavior.
2.
As a normative theory, I don’t buy game theory unless I have some reason to believe that my opponent is actually spying (in some way) on me and I need to hide my pure strategy from the spy (and even then, I’m not sure I’d want to choose the strategies suggested by game theory).
The right thing to do is assign probabilities to my opponent’s actions, based on all my information, then use standard decision theory to make the optimal decision.
3.
“this kind of reasoning might break down when there are too many iterations involved. I’m inclined to say in those cases there are no good solutions, not that we need to go beyond decision theory to find the solution. Sometimes it really does all come down to animal spirits.”
You and Daniel seem quite a bit more willing to settle for this conclusion than I.
Daniel and you seem to be saying that there are some states of information that cannot be represented by probability distributions when significant game-theoretic issues are involved. Do you have any examples of this? All I would need is a finite set of possibilities and a reasonable state of information about those possibilities that cannot be represented by a probability distribution.
For example,
Possibilities: My opponent’s move in rock-paper-scissors.
Information: He knows exactly what I will do (not just my mixed strategy) before he chooses, and I know exactly what he will do before I choose.
There is no set of probabilities on (Rock, Paper, Scissors) that represents that state of information (right?).
However, that state of information is contradictory; it can’t really happen (can it?). I’d like to see a state of information that is reasonable, interesting to real decision-makers (no “This statement is false” stuff), and incompatible with any probability assignment on a finite set of possibilities. This would educate me greatly.
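To make the contrast concrete, here is the decision-theoretic treatment described above: any ordinary state of information about the opponent’s rock-paper-scissors move is just a probability distribution, and expected payoffs then pick out a best response. The payoffs and beliefs are illustrative.

```python
# Decision theory over an opponent's rock-paper-scissors move.
# Payoff to me: +1 for a win, 0 for a tie, -1 for a loss.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def expected_payoffs(belief):
    """belief: dict mapping the opponent's move to its probability."""
    return {m: sum(p * payoff(m, o) for o, p in belief.items())
            for m in MOVES}

# A belief like "he favours rock" is a perfectly ordinary distribution,
# and it yields a determinate best response (here, paper):
print(expected_payoffs({"rock": 0.5, "paper": 0.25, "scissors": 0.25}))
# Under the uniform belief, all three replies have expected payoff 0,
# which is why the mixed equilibrium is to play each move 1/3 of the time.
print(expected_payoffs({m: 1/3 for m in MOVES}))
```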
(Cross-posted from Brian’s Site - more discussion over here, looks like)
Here are two modest ideas (neither of them mine) about the importance of Nash eqm.
1) If a game has a unique Nash eqm, no one can do better by doing something else. Your opponent does not have to know what you are going to do; it is enough that both players’ rationality be common knowledge a la David Lewis. I know you are rational, you know I know you are rational, I know that you know that I know, etc. This is perhaps different from the gloss Brian gives: “The only argument I know of for choosing your side of a Nash equilibrium in a one-shot game involves assuming, while deciding what to do, that the other guy knows what decision you will make.”
2) Brian’s statement here about PD doesn’t accurately characterize repeated games:
“In the most famous game of all, Prisoner’s Dilemma, we know that the best strategy in repeated games is not to choose the equilibrium option, but instead to uphold mutual cooperation for as long as possible.” It’s true that the best strategy in a repeated game is not necessarily the same as the Nash eq of the one-shot game that gets repeated. This is no embarrassment, because the repeated game is a different game altogether from the stage game played as a one-shot game.
Interestingly, in repeated games (infinitely repeated?) Nash eqm become much less important, because there are an infinite number of them. More interesting is something that is evolutionarily stable, which is the point of Gintis’s book, where I learned most of these things.
peter: “More interesting is something that is evolutionarily stable, which is the point of Gintis’ book …”
the evolutionary stuff is interesting, I think (Binmore, Gintis, Skyrms come immediately to mind in a growing field). Maybe of interest: a recent symposium in Politics, Philosophy & Economics (Feb’04) on “evolution, ethics, and economics.”
(… and I’m not just saying this because I have an utterly irrelevant article in the same issue!)