As Daniel noted a while back on CT, the election markets that have opened so far aren’t efficient enough to prevent arbitrage opportunities. This point now seems to have been noticed by more mainstream commentators.
But whatever the reason, there is a significant pricing difference between these two markets [Tradesports and the IEM] — an arbitrage opportunity that you’d expect some savvy trader to take advantage of. Yes, the contracts are constructed a bit differently, but surely there’s a way to go long Bush on Iowa, short him on Tradesports, and make some surefire coin.
The pinko Money magazine attributes the inefficiency to sheer irrationality on the part of the traders in each market. If that’s right, then the added evidential value of these markets is roughly the same as that of star charts.
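To make the "surefire coin" concrete, here is a back-of-envelope sketch. The prices are made up for illustration; both contracts are assumed to pay $1 if the candidate wins, IEM-style.

```python
def arbitrage_profit(buy_price: float, sell_price: float, fee: float = 0.0) -> float:
    """Guaranteed profit per $1-payout contract from buying the candidate
    in the cheap market and shorting him in the dear one.

    If he wins:  (1 - buy_price) - (1 - sell_price) - fee
    If he loses: -buy_price + sell_price - fee
    Both branches reduce to the same number, so the profit is locked in."""
    return sell_price - buy_price - fee

# Hypothetical prices: Bush at $0.65 on IEM, $0.71 on Tradesports.
profit = arbitrage_profit(buy_price=0.65, sell_price=0.71)
print(f"locked-in profit per contract: ${profit:.2f}")
```

The catch, of course, is the `fee` term: once transaction costs (and the time cost of holding accounts in both markets) exceed the price gap, the "free money" evaporates.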
Marketeer 02.05.04 at 4:35 pm
Even if I concede there’s no added evidential value–
1. arbitrage opportunities are not the norm, rather remarkable convergence is
2. why not take the odds as being the best summary available of the evidence at hand?
3. one sure won’t get rich making a habit of trying to beat the odds with one’s superior intellectual insight
4. As I recall, Dean shares never went much above 60–not crazy
5. “The market’s” judgment was no worse than that of a lot of highly informed observers highly motivated to get it right
6. The really smart–or just lucky–“investors” were those who bought Kerry at 15 or whatever. How many did? Why did they do it?
7. Does Lexis-Nexis dredge up anyone who went into print before Dean’s fall began–roughly two weeks before Iowa–prophesying the fall and Kerry’s rise? What was this clever fellow’s reasoning?
8. That is to say, it remains true that
a. for sports events and political outcomes, the market is the best single predictor there is
b. like any predictor, it’s fallible–if you’re smart enough, you can get rich.
Bill Carone 02.05.04 at 4:37 pm
Other than the “sheer irrationality” of everyone on earth, including you and Daniel (only because you aren’t taking the certain profit; your posts are great), are there other reasons that no one is taking the profit?
Are there limits on how much you can bet? Have there been problems getting paid? Are there tax implications? Do you have to pay now for payout years from now (don’t think so, given your and Daniel’s description)? Are there ethical implications? Are there legal implications?
You probably shouldn’t jump directly to irrationality as an explanation; it works too well (Why am I standing on my head in the corner holding a matchbook between my toes? No reason, just being irrational :-)
marketeer appalled 02.05.04 at 4:42 pm
And as for the star charts, it is absolutely appalling that the Times would give ten column inches of the best soapbox there is over to such utterly worthless tripe. If only it had been a joke of some sort.
The markets are the best indicator available, the stars indeed provide no value whatever. Those who guide their actions by the odds are wise. Those who guide them by the stars are foolish.
Brian Weatherson 02.05.04 at 5:10 pm
On Bill’s question, I think there are some limits to the bets, and there are transaction costs (some of them financial, but also large amounts of time) so the net gain vs net time equation is not great. At least that’s my excuse for not pumping these guys dry.
On Marketeer’s point, I was trying to be careful to say these markets weren’t entirely useless summaries of the data. I think they are, in a way. Disclosure: I frequently check the betting lines for in progress cricket games to get a sense of who is winning. They are a useful quick-and-dirty summary of the situation. I’m not sure if this makes me slightly more pro-market than Daniel.
What I deny is that this type of market info has any extra value over and above the publicly available information. I think there’s nothing a moderately informed person could learn from these markets that they couldn’t learn from sitting down with some public polling numbers for 5 minutes. Sitting down with private polling numbers for 10 minutes would probably tell you much much more. As a corollary, I am strongly opposed to the use of such mechanisms in important decision-making procedures, as the terror markets were supposed to do. I think just looking at prices like this tells you less than a quick peek at the actual underlying evidence, and so I think decision-makers should be focussed entirely on the underlying evidence, and not at all on the market-based summaries of the evidence.
dsquared 02.05.04 at 5:25 pm
I occasionally scalp the tradesports prices if there are glaring inconsistencies. I don’t have an account at IEM because it seems too inconvenient to set one up.
dsquared 02.05.04 at 6:31 pm
Btw, Money magazine is being a little on the pig-f’kng ignorant side when it writes:
Iowa got its start at a University, after all, so it makes sense that it would attract more bleeding-heart commie pinko academic types.
Apparently the writers for Money magazine haven’t heard of such a thing as a Business School, but its readers might have.
Bill Carone 02.05.04 at 7:01 pm
“Sitting down with private polling numbers for 10 minutes would probably tell you much much more.”
True for elections, where people are asked “Who are you going to vote for?” Would the same thing be true if the question was “Who do you think is going to win?” Or am I confused, and that is the polling question you are using?
I would expect a poll on the first question to be quite useful, a poll on the second less so, since you are lumping together people who have almost no information about who is going to win with other people who have lots of information.
“As a corollary, I am strongly opposed to the use of such mechanisms in important decision-making procedures, as the terror markets were supposed to do”
I don’t think this follows as a corollary.
The idea, I thought, was that people with no information bet small amounts randomly and people with lots of information bet big amounts non-randomly.
So we have people named A, B, and C. A and B are ignorant and assign a 51% probability that candidate Z will win. C is informed and assigns a 90% probability for candidate Y to win.
A poll gives 66% for candidate Z. A market would heavily favor candidate Y, since A and B aren’t going to want to bet as much money as C.
This is completely oversimplified; what am I missing?
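The toy model above can be sketched in a few lines. The stake sizes are hypothetical, chosen only to illustrate the weighting assumption (that confident traders stake more):

```python
# A and B each assign 51% to Z and stake little; C assigns 90% to Y
# and stakes a lot. Stakes here are made-up illustrative numbers.
traders = [
    {"p_Y": 0.49, "stake": 10},   # A: mildly favors Z
    {"p_Y": 0.49, "stake": 10},   # B: mildly favors Z
    {"p_Y": 0.90, "stake": 100},  # C: informed, bets big
]

# A headcount poll: 2 of 3 respondents favor Z.
poll_Z = sum(1 for t in traders if t["p_Y"] < 0.5) / len(traders)

# A stake-weighted "market" price for Y.
market_Y = sum(t["p_Y"] * t["stake"] for t in traders) / sum(t["stake"] for t in traders)

print(f"poll: {poll_Z:.0%} for Z; market price for Y: {market_Y:.2f}")
```

The headcount poll favors Z (roughly the 66% in the comment), while the stake-weighted price comes out heavily for Y; the whole result hinges on the assumption that C actually bets bigger.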
James Surowiecki 02.05.04 at 7:14 pm
Brian, I think one of the reasons why market-like devices are interesting is that we don’t always know what the “actual underlying evidence” is or is not. Assuming that decision-makers always have a clear picture of what’s relevant seems unrealistic to me.
Bill, there’s actually very little evidence that informed people bet more or invest more than uninformed people. That’s because ignorant people don’t know they’re ignorant. They think they have the right answer just as much as informed people do. In effect, this means you generally end up in the same place as in your example. If A and B are ignorant, but the distribution is random, it’s just as likely that A will bet on Z with probability 90% and B will bet on Y with probability 90%. Add in C’s good information on Y, and the “group” ends up with a reasonably smart prediction.
bill carone 02.05.04 at 7:14 pm
“I think just looking at prices like this tells you less than a quick peek at the actual underlying evidence, and so I think decision-makers should be focussed entirely on the underlying evidence, and not at all on the market-based summaries of the evidence.”
You can’t make it all the way to that conclusion; you need to figure out the costs of finding and gathering the “underlying evidence” and compare it with the costs of using market prices. Both give imperfect information, and both cost, so it isn’t obvious (to me at least) which is better.
When “underlying evidence” is difficult, time-consuming, or expensive to get, a market system can give valuable information. This works in the economy all the time, no? Copper supply decreases, prices go up, you use less copper; you don’t care why the supply decreased, you get the right answer just by looking at the prices. It would be much more costly to say “I don’t trust the prices, people are irrational, making decisions based on animal spirits. I’ll go out and gather all the underlying evidence about the supply of copper, alternate uses of other material, etc. every day.”
If you have easy access to the “underlying evidence,” like we do with elections and polls, then I agree with you, a market probably won’t help. But it is quite difficult to get and integrate information across even a medium-sized organization. Perhaps a market would, despite its clear drawbacks, give some added value of information.
Bill Carone 02.05.04 at 7:34 pm
“Bill, there’s actually very little evidence that informed people bet more or invest more than uninformed people.”
How about this: people who assign high probabilities will bet more than people who assign low probabilities.
“That’s because ignorant people don’t know they’re ignorant. They think they have the right answer just as much as informed people do.”
I agree that overconfidence is the norm. However, once you start betting substantial amounts of your own money, it decreases (as do most other biases).
Here is an example. I teach probabilistic analysis. Part of my homework and tests are multiple-choice, but a little different. Instead of marking the choice you think is most likely, you assign probabilities to each of the four choices.
Your score is then based on the probability you assigned to the right answer. The rule I use is a logarithmic proper scoring rule: score = 100% + log(p)/log(4), where p is the probability you put on the right answer. (It is “proper” because it gives you an incentive to assign probabilities that actually reflect your information.)
At the beginning, people think they are 97% sure of answers. Then they get hit with some p=1% (score = -2.32 out of 1) scores, and figure out they are being overconfident; they think they know more than they do. As the term progresses, they show less and less overconfidence, since they actually are hit with the consequences of their overconfidence.
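The scoring rule described above is easy to reproduce; this sketch just plugs in the formula from the comment:

```python
import math

def log_score(p: float, n_choices: int = 4) -> float:
    """Logarithmic proper scoring rule: 100% + log(p)/log(n_choices),
    where p is the probability assigned to the correct answer."""
    return 1.0 + math.log(p) / math.log(n_choices)

print(log_score(0.25))  # 0.0 -- an honest uniform guess scores zero
print(log_score(1.00))  # 1.0 -- full confidence, and correct
print(round(log_score(0.01), 2))  # -2.32 -- the overconfidence penalty cited above
```

Note the asymmetry that does the teaching: confidently right gains you at most 1, but confidently wrong costs far more, so students learn to stop writing down 97% when they don't mean it.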
So, people who participate in these markets will become less and less overconfident. There might be crazy people out there who lose and lose and never learn, but they tend to balance out.
If someone is truly uninformed, they will assign equal probabilities to each candidate, and bet small amounts.
Informed people might still assign equal probabilities; perhaps, based on their substantial research, each candidate has an equal chance. They will then also bet small amounts. The market will then show equal probabilities, and a small amount of money bet.
Informed people might have information that points more to one candidate than another. Then, they will assign higher probabilities and bet higher amounts. The market will show these higher probabilities, as the large bets have more weight than the small bets of the uninformed.
What if there are many informed people, but they have different information, and point in different directions? Then, the market shows equal probabilities with a large amount of money bet.
Where have I gone wrong? I always do :-)
James Surowiecki 02.05.04 at 8:36 pm
Bill, I’m not sure you’ve gone wrong, exactly. But it’s not clear to me why you think people who know they’re uninformed and therefore assign equal probabilities to different outcomes would bet at all. I can see that people would (and do) do this in casinos all the time. But trading futures contracts on IEM or Tradesports involves a lot more work and the payoff is too distant to provide the adrenalin rush that gambling does. So I would assume that most people in these markets are in them because they think they have better judgment than everyone else. Your students had to answer the questions to pass. No one has to bet.
I think there probably is some learning over time. But that learning is erratic, because even dumb people are going to be right part of the time. Take my initial example, where one uninformed person (betting for the first time, so she hasn’t gotten any negative feedback) assigns a 90% probability to outcome Y and the other uninformed person assigns a 90% probability to outcome Z. When Z happens, the second person is going to think he’s smart, even though he’s lucky. And even if the next time, the odds catch up with him, he can write that off to bad luck (and if he’s lucky twice in a row, it’ll take an incredible number of failures to convince him that he should stop betting).
Just as important, there is almost no reason to think that there is a select group of people who are informed on every matter that a market is dealing with. Just because I have good information on Event A doesn’t mean I know anything about Event B. But that’s not the way most investors work. People don’t, for instance, only invest in the stocks they know the most about (or else many more mutual-fund managers would outperform the indices than do). Here again feedback wouldn’t necessarily make people smarter.
Markets aren’t closed, either. There are always new people entering as old ones exit, and those new people are overconfident even when uninformed, and convinced that they know things that they don’t.
As I said, though, I don’t think any of this means that markets don’t work. If you’re right, which you very well may be, and informed people bet more heavily than uninformed people, then the weighted average will be accurate. If informed people and uninformed people bet the same amount, the weighted average will just be the average, and it will still be pretty accurate (as in the first example I offered). It’s only if uninformed people bet more than informed people that a weighted average will throw things off. And I don’t think that’s very likely.
Brian Weatherson 02.06.04 at 2:50 am
If someone is truly uninformed, they will assign equal probabilities to each candidate, and bet small amounts.
If they assigned equal probabilities to each candidate, they should be betting large amounts, because they will regard bets that pay out if Dennis Kucinich or Al Sharpton win the Presidency as massively underpriced. It will take a long time until the market ceases to be willing to sell bets on those guys for 15 cents or so a pop. (I’m using the IEM model here where a bet on a candidate pays $1 if they win.)
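The arithmetic behind this is simple expected value. The 15-cent price comes from the comment above; the five-candidate field is a made-up assumption for illustration:

```python
def expected_value(p_win: float, price: float) -> float:
    """Expected profit per $1-payout (IEM-style) contract,
    given the bettor's subjective probability of the outcome."""
    return p_win - price

# A maximally uninformed bettor facing, say, 5 candidates (hypothetical
# field size) assigns 1/5 = 20% to each, so a 15-cent long shot looks cheap.
ev = expected_value(p_win=1 / 5, price=0.15)
print(f"expected profit per contract: {ev:.2f}")
```

Since the expected profit is positive on every long shot priced below 1/n, the genuinely equal-probabilities bettor should bet large, not small; this is Brian's point against the "uninformed people bet little" assumption.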
I also think it’s a very dangerous assumption to think that the informed and the uninformed can self-categorise. Lots of uninformed people think they are perfectly well informed. (I’m in that category way too often.) And a few well informed people think they are uninformed. Any model that starts with the assumption that the uninformed know they are uninformed is likely to lead to mispredictions.
Bill Carone 02.06.04 at 5:29 am
“If they assigned equal probabilities to each candidate, they should be betting large amounts, because they will regard bets that pay out if Dennis Kucinich or Al Sharpton …”
I was referring to maximally uninformed people, people who only know the names and _nothing else_ about the candidates; it only takes a small amount of info to lower Sharpton’s probability :-)
I also wasn’t very clear about the kind of market I was dealing with; I am not an expert in how these markets are set up. The one in my head sort of looked like this:
A wealthy philanthropist is selling, for $1 each, certificates that pay off $2 if candidate X wins and nothing if he/she loses. He has two kinds of certificates, one for each of two candidates. You can buy any number of either type of certificate.
People with probabilities close to 50-50 will buy some certificates. People with probabilities far from 50-50 will buy lots.
This market doesn’t necessarily clear (that’s why the philanthropist is involved), and I’m not quite sure why I thought about it this way, but there you go :-).
“Lots of uninformed people think they are perfectly well informed”
As I said above, overconfidence is the norm until people are faced with the consequences i.e. losing substantial amounts of money. Then, overconfidence (and all the other biases) start fading, and they start to “self-categorize.”
For example, in all the probability and decision analysis courses I teach, my students start the course overconfident; they do not leave overconfident, since my grading system punishes overconfidence. In other words, it pays to become very clear in your head exactly what you know and what you don’t know.
The same thing would happen in betting markets; people might start out overconfident, but with their own money on the line, they would shape up. They would either lower their bets so that the consequences are too small to worry about, or they will learn to be less overconfident (and less biased in general).
dsquared 02.06.04 at 7:02 am
As I said above, overconfidence is the norm until people are faced with the consequences i.e. losing substantial amounts of money. Then, overconfidence (and all the other biases) start fading, and they start to “self-categorize.”
The lesson of the behavioral finance literature is that this doesn’t happen.
Brian Weatherson 02.06.04 at 2:00 pm
And even if it were, miraculously, to happen in this case, you’d have to have (a) frequent enough responses that people could really notice their substantial losses and (b) people sensitive enough to gains and losses that they didn’t do things like avoid a sure gain. Election markets with arbitrage opportunities don’t satisfy either condition, and it’s not clear why terror markets should do any better.
James Surowiecki 02.06.04 at 2:05 pm
Brian wrote: “Lots of uninformed people think they are perfectly well informed. (I’m in that category way too often.) And a few well informed people think they are uninformed. Any model that starts with the assumption that the uninformed know they are uninformed is likely to lead to mispredictions.”
I think this is exactly right. And Bill, just to repeat myself, all the “uninformed people” won’t be proven wrong right away. In fact, if they’re betting randomly (and choosing between two alternatives), half of them will be proven right the first time, and will therefore be convinced they’re geniuses.
There’s some evidence in the behavioral finance literature that trading experience does away with some behavioral quirks — the endowment effect, for instance, appears to be much weaker among professional traders — but, as Daniel says, overconfidence does not disappear. Of course, when people go bankrupt, they disappear, but that can take quite a while. And in any case, in the meantime new, equally uninformed people appear.
Bill Carone 02.06.04 at 5:41 pm
Well, it’s possible I’m just overconfident and uninformed about all this :-)
“The lesson of the behavioral finance literature is that this doesn’t happen.”
Any quick source I can check on this? Does overconfidence/bias lessen or stay the same over time? What are the consequences of overconfidence? Do they offer many simple deals all with large consequences, or few complicated deals, all with small consequences?
Most studies I’ve read have dealt with small sums of money, and offer few deals. I wouldn’t be surprised that people do silly, biased things when the consequences are so low.
I believe the bias called “racism” worked the same way right before the Jim Crow laws: people said blacks couldn’t work as smart as whites, but talk is cheap; if you were running a competitive business, you knew better.
When I do demonstrations that do not have real consequences, I get much worse performance from the students (e.g. one student claimed he was only 99% sure that the distance from New York to Moscow was under 1 billion miles (five round trips to the sun and back)).
“And even if it were, miraculously, to happen in this case”
I don’t see it as much of a miracle. I give the example of all of my classes that do “self-categorize”. This works across institutions; at Stanford, people end up putting higher (well-calibrated) probabilities than at other institutions where the students aren’t as prepared for the course.
The consequences of failing my course usually are larger than those in the experiments I’ve read about (things like losing a scholarship, spending an extra year in school, etc.). So people quickly come to understand that they aren’t really 99% sure of the answer, or at least aren’t willing to bet according to that probability.
Add a few zeros to the payouts on the experiments you read, and think about what would happen. Make the payouts more than “small potatoes.” I recently saw Alan Alda do an “ultimatum” experiment. He was dealing with sums less than $10; of course he was doing silly things. Add six or seven more zeros, and I bet he’d sit up and take notice. Of course, those experiments can’t be run.
“(b) people sensitive enough to gains and losses that they didn’t do things like avoid sure gain. Election markets with arbitrage opportunities don’t satisfy either condition”
As I said early in this thread, the arbitrage opportunities in the election market don’t prove anything except e.g. that the transaction costs are too high compared with the low possible payoffs. I haven’t seen any arguments that show that it is due to irrationality or “animal spirits.”
If it were easy to use these markets, people like Daniel would “scalp” them until the prices equalized; of course, it wouldn’t be “scalping” with its negative connotations, it would be providing a service: finding buyers for sellers.
“(a) frequent enough responses that people could really notice their substantial losses”
You seem to contradict yourself here.
You say that people are going to think they know everything. In the market, then, they would make lots and lots of large bets, no? After all, they know so much better than everyone else, why not take lots of everyone’s money? This would give tons of feedback, no?
If the payouts are far in the future, then instead of payouts we have to look at the value of their “portfolio” of bets, and assume that they don’t necessarily want to wait for the ultimate payout, but might want to sell their bets to other people. So, instead of learning from losing their bets, they will learn from losing value in their portfolio.
“In fact, if they’re betting randomly (and choosing between two alternatives), half of them will be proven right the first time, and will therefore be convinced they’re geniuses”
Not if they make 100 large bets at the same time, which an overconfident person would do, if not immediately, then eventually if they remained overconfident (after all, I’d rather have my lottery money now than later). They would be almost certain to lose big and be disabused of the “genius” notion.
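If the “90% sure” bets are really coin flips, the chance that 100 simultaneous wagers mostly pay off is a plain binomial tail probability. A quick sketch of that claim:

```python
from math import comb

def tail_prob(n: int, k: int, p: float = 0.5) -> float:
    """P(at least k successes in n independent trials, each with probability p)."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

# An overconfident bettor who is wrong about his edge:
print(f"P(win >= 60 of 100): {tail_prob(100, 60):.3f}")   # already a long shot
print(f"P(win >= 90 of 100): {tail_prob(100, 90):.1e}")   # effectively impossible
```

So a trader who really did place 100 large independent bets at once would be disabused of the genius notion almost immediately; the question is whether anyone actually bets that way.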
“In fact, if they’re betting randomly”
This is another issue I haven’t heard addressed. People who bet randomly won’t affect the outcome of the market that much, since they will tend to cancel out, leaving the informed people to affect the market price.
Problems would happen if the uninformed people had a common bias, and so didn’t vote randomly; overconfidence alone won’t do it. Do you have examples of common biases that would prevent the uninformed people from cancelling out? In the election market, for example?
Of course, if some speculator (i.e. not an expert in the content of the market, but an expert in the market itself) figured out that this was happening, he could make money by putting the price right again, so maybe this isn’t a big deal.
James Surowiecki 02.06.04 at 6:20 pm
Bill, I did address the issue of people’s bets canceling each other out. In fact, it’s the reason why I think that even though uninformed people are as likely to bet heavily as informed people, it doesn’t make the market dumb. That’s why I wrote: “If A and B are ignorant, but the distribution is random, it’s just as likely that A will bet on Z with probability 90% and B will bet on Y with probability 90%. Add in C’s good information on Y, and the ‘group’ ends up with a reasonably smart prediction.”
As you said, problems only happen when people have a common bias, so that their mistakes are all in the same direction.
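The cancellation argument is easy to simulate. All the population and stake sizes below are hypothetical, picked only to show the mechanism:

```python
import random

random.seed(0)  # reproducible sketch

# 1,000 uninformed traders pick a side at random with small stakes;
# 20 informed traders all back Y with larger stakes.
n_uninformed, stake_u = 1000, 10
n_informed, stake_i = 20, 100

money_on_Y = sum(stake_u for _ in range(n_uninformed) if random.random() < 0.5)
money_on_Y += n_informed * stake_i
total_money = n_uninformed * stake_u + n_informed * stake_i

# The random bets split roughly evenly, so the informed money sets the tilt.
implied_p = money_on_Y / total_money
print(f"implied P(Y) = {implied_p:.2f}")
```

The implied price lands above 50% for Y even though the informed traders are a tiny minority; it's a sketch of why uncorrelated noise washes out. Give the uninformed a common bias toward Z instead of a fair coin, and the same code shows the price dragged the other way.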
Bill Carone 02.06.04 at 6:51 pm
“I did address the issue of people’s bets canceling each other out …”
Oops; I guess I’m illiterate as well as uninformed and overconfident :-).
Do you think it is likely that a common bias will occur?
James Surowiecki 02.06.04 at 8:43 pm
That’s the question I always like to take a pass on — it’s much easier for me to say: “As long as people’s biases aren’t correlated, the group’s judgment will be good” rather than have to take a stand on how often biases are correlated. I think, to make the obvious point, that it depends on the question. In general, if the group is big enough, I suspect that most of the time there will not be a common bias. (Certainly there’s no theoretical or empirical evidence that suggests consistent similarities among the “uninformed.”) On the other hand, there are times — the bubble of the late 1990s most obviously — when there clearly is, and that’s when everything goes haywire. And I also think that things like CNBC, etc., which magnify echo-chamber effects have made correlated biases more likely.
Anthony 02.06.04 at 9:15 pm
James, the issue is that the “election markets” don’t last long enough, and don’t involve enough money. In the market for copper, people who have guessed wrong, or who have bad information, have lost money and will eventually be unable to participate meaningfully in the market for copper, while people who have consistently guessed right or had good information will make money, which they will then use to make larger “bets” in the market in the future.
If Tradesports (or the IEM) required a minimum bet of $10,000, you’d probably see a much more informed price.
James Surowiecki 02.06.04 at 11:55 pm
Anthony, at least in past presidential elections the IEM’s price actually has been a consistently good predictor of outcomes. Now, whether this means the price is “informed” is something people disagree about (John Quiggin and Daniel have both had posts explaining why they think it’s not), but the IEM has certainly not been dumb, even with small stakes. I agree that larger stakes probably help, but I’m not convinced that they’re essential.
To your specific point, over a typical decade something like 80-90% of institutional money managers underperform the market. In other words, most money managers are not “informed” in the way that we’ve been using the word. They nonetheless control immense sums of capital and are participating meaningfully in the market. Markets don’t work because they winnow the uninformed out and leave the informed trading. They work because, as long as biases aren’t correlated, even a group made up of informed and uninformed traders can still produce a collective forecast superior to the individual forecasts of the vast majority of the people in the group.