The Wisdom of Sticks

by Daniel on August 19, 2004

Finally, with the Google IPO pricing way below expectations and with a serious arbitrage[1] showing up on the Iowa Electronic Markets, I get round to reviewing James Surowiecki’s “The Wisdom of Crowds”. I’ll save the suspense; it’s a cracking read and well worth buying. To give you an idea of the style, I’ll start this review with my own shockingly unfair parody …

In 1762, Georges-Louis Leclerc, the Comte de Buffon, threw a hundred baguettes out of his laboratory window and watched where they landed on the courtyard below. He hadn’t gone insane and he wasn’t making a statement about the French baking industry. He was carrying out a whole new kind of experiment and revolutionising the way that we look at mathematics, information and breadsticks.

An individual baguette is a pretty dumb object. Even when made with the finest poilane flour and natural yeast, it’s unlikely that the Fields Medal will ever be carried off by a loaf of French bread. However, when you get a hundred of them together, something special happens.

This was what Buffon did; he measured the baguettes before throwing them out of the window. When they had finished bouncing around, he counted how many of them were lying across cracks in the paving stones. He then divided the length of a baguette by the number that were lying on the cracks and multiplied by 200. The number came to … 3.1415927! Yes, while his contemporaries at the Encyclopédie were hard at work using conic sections and convergent sequences to establish the value of pi, Buffon had it worked out more or less accurately using yesterday’s lunch leftovers. Although no individual loaf of bread had any particular information about the value of pi, when you aggregated the little bits of information contained in the length and falling of each baguette, something amazing happened. And even more amazingly, you can repeat the same experiment with needles, cucumbers, rolling pins or even everyday American lumps of wood, and get the same result! This phenomenon is what I call “The Wisdom of Sticks”.

Well that’s somewhat unfair (although truth is stranger than fiction; you can actually estimate the value of pi by throwing sticks onto a pavement, and James has about a hundred historical anecdotes like this one). But it’s a pointed parody, because there is quite a lot of material in TWOC which, while entertaining, is pretty tangential to the real underlying point. I’d group this material into two main categories.
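For the curious, the genuine stick-throwing recipe (Buffon’s needle) is easy to check numerically. A minimal Monte Carlo sketch of my own, with the needle exactly as long as the gap between the cracks:

```python
import random
import math

def buffon_pi(n_throws=1_000_000, needle_len=1.0, line_gap=1.0):
    """Estimate pi via Buffon's needle: drop needles on ruled paving and
    count how many lie across a crack.  For needle_len <= line_gap,
    P(cross) = 2*L / (pi*d)."""
    crossings = 0
    for _ in range(n_throws):
        # distance from the needle's centre to the nearest crack, and its angle
        x = random.uniform(0, line_gap / 2)
        theta = random.uniform(0, math.pi / 2)
        if x <= (needle_len / 2) * math.sin(theta):
            crossings += 1
    # invert P(cross) = 2L/(pi*d)  =>  pi ~= 2*L*n / (d*crossings)
    return 2 * needle_len * n_throws / (line_gap * crossings)

print(buffon_pi())   # typically lands somewhere near 3.14
```

The convergence is painfully slow (the error shrinks only with the square root of the number of throws), which is part of why this is a party trick rather than a practical way to compute pi.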

Party tricks: As my example of the Buffon problem suggests, statistics can throw up some funny old things. For example, you and I are playing a simple coin-tossing game, best of twenty, and I win the first toss. I offer you a side-bet that you will not take the lead at any time in the next 19 tosses; if I can get odds of 3-1 or better out of you, I’m a winner. There are a number of startling demonstrations (people guessing the number of jellybeans in a jar[2], for example) which look to me like carefully designed tricks with sampling theory rather than informational magic. Looking at the working paper describing the much-vaunted Hewlett-Packard forecasting market, I’d be very tempted to place it in this category as well; the result that “the market did better than the official forecast” (Result 3 in the paper) is based on comparing a probability distribution of market prices with a point estimate for the official forecast, and really doesn’t sum up the actual finding: that the official and market forecasts were very close to one another indeed.

Semi-attached anecdotes: Stories which are interesting, but which aren’t relevant to the main thesis of the book and have been uncomfortably shoehorned into the argument. Examples: the review of the invention of the motor car (an example of the wisdom of crowds – sure, Henry Ford invented mass production, but it was crowds of people who bought his products), and the material on the Linux operating system (an informal hierarchy of people who use email and call each other ‘dude’ is a hierarchy, not a crowd). There are quite a few of these, and as you read the book, it’s worth asking from time to time something along the lines of “but what the heck has this got to do with the wisdom of crowds?”

So when you strip away these ephemera, is there anything left? Well, yes. In particular, there is good evidence, from studies that can’t be explained away as party tricks, that monetary policy is better made by committees than by individuals. I’m also prepared to give a certain amount of credibility to the idea that pari-mutuel betting markets forecast horse race winners very well, although only a certain amount, because there’s equal evidence that they don’t forecast places and shows anything like as well. But the important thing to note is that TWOC does, at its core, describe a genuine phenomenon. In some circumstances, groups can outperform individuals at some kinds of decision problem. For interesting values of N, N heads are better than one. Where does the book go with this concept?
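The statistical core of the genuine phenomenon, at least for estimation problems, is plain error cancellation: average many independent noisy guesses and the noise shrinks with the square root of the crowd size. A toy jellybean-jar simulation (my own made-up numbers, purely illustrative):

```python
import random

def crowd_vs_individual(truth=850, n_guessers=100, noise=250, trials=2_000):
    """Jellybean-jar style demo: each guesser misses the true count by an
    independent Gaussian error.  How often is the crowd's mean guess closer
    to the truth than a randomly chosen individual guesser?"""
    crowd_wins = 0
    for _ in range(trials):
        guesses = [truth + random.gauss(0, noise) for _ in range(n_guessers)]
        mean_guess = sum(guesses) / n_guessers
        individual = random.choice(guesses)
        if abs(mean_guess - truth) < abs(individual - truth):
            crowd_wins += 1
    return crowd_wins / trials

print(crowd_vs_individual())   # around 0.94: the mean beats a random guesser
```

The caveat, which the book itself stresses at length, is independence: if the guessers share a bias, averaging concentrates the bias rather than cancelling it.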

As I see it, James’ book goes in two directions as regards actual policy suggestions. As regular CT readers will be expecting, I don’t really agree with either of them, but one is very much more interesting than the other. I’ll call the first idea “market epiphenomenalism” and the second, more interesting one the “silver bullet theory”, for reasons that I hope will become clear.

Market epiphenomenalism

“Market epiphenomenalism” is my attempt at a name for what Robin Hanson thinks about markets; in my view it’s based on a misreading of Hayek’s original work on markets as information processing entities. To recap my argument from the post linked above, Hayek certainly did believe that markets were information-aggregating entities, but this view can only be understood in the context of his whole philosophy. He certainly didn’t think that you bring your bit of information to the market, I bring mine, and the market exchange process grinds them all up and turns them into one wonderful summary statistic called the “market price”. Markets don’t “aggregate information” in that sense; the sense in which they aggregate information is that they co-ordinate actions without the existence of any centralised information. To make this more concrete, think of the Google IPO. People were bidding in that auction, but they weren’t setting the price (the price was being set, and suddenly reduced, by the company). What they were doing was bringing along their little bits of capital and agreeing a set of terms on which they were prepared to part with it. The market aggregates these little lumps of capital into one big lump that can finance something interesting, and that’s what the market is there for. The fact that it does so at a particular price, and that this price needs, over the long term, to bear some relation to reality if the market is to survive and keep performing its function, is an epiphenomenon.

This important point from Hayek has two implications. First, that there’s much more to a market than the prices, and it’s dangerous to kid yourself that the prices, hauled out of the context of the rest of the market, can stand as information on their own at all, much less as information of the kind that you need to solve your own decision problem. For one thing, the participants in a market are likely to have loss functions for their forecasting problem which may not be the same as your own; for example, if you’re forecasting sales, you might want to make forecasts which are systematically too high (in order to set stretch targets for your salesforce) or too low (because overruns are more expensive than underruns for you). It’s highly unlikely, for example, that traders in the Policy Analysis Market would have the same attitude toward different kinds of forecasting errors as the policymakers themselves.

The second and more important point is that if Hayek is right, you can’t just will markets into existence by saying “hey, I’ve designed a few state-contingent securities, let’s gamble”. The reason why markets work is that they are performing a function, and that they are bringing people together whose little bits of capital add up to the whole investment capital of the economy, or whose total short positions add up to all the grain that has been grown. Opinions are like assholes in that everyone has one[3], but unlike assholes in that it’s very easy to pluck a bunch of them out of thin air on a whim. Markets like PAM, which have no real underlying need to exist, aren’t information processing entities in Hayek’s sense because they’re not co-ordinating actions, and thus attempts to use their record of closing prices as if it were information are doubly problematic. I note in passing that this isn’t a trivial point of Hayek scholarship and I don’t think any counterargument on Hayekian grounds is possible, because the point is fundamental to the Austrian critique of planning. After all, if you can construct a Policy Analysis Market out of thin air and gain all the Hayekian advantages of information processing, why couldn’t the Central Planning Committee just set up a bunch of these “shadow markets” and run communism on the basis of the output? If the Austrians were right about the planned economy, one of the corollaries has to be that toy “markets” are closer to children playing at shops than to the real thing.

But it has to be said that this is more of a hobby horse of Hanson than of James Surowiecki. At some points in the book (and more clearly, in some comments he’s made while arguing on the internet), he backs away from straightforward market idolatry and toward something much more interesting; a general theory about preference aggregation[4]. And that’s what I’d call the “silver bullet theory”.

The Silver Bullet?

The phrase “silver bullet”, in this context, comes from Fred Brooks’ book, The Mythical Man-Month. In one of the articles which make up that book, Brooks made the prediction that, as of the 1980s, when most of the merely accidental problems of computer programming (bulky interfaces, lack of high-level languages, sheer novelty, etc.) had been got rid of, there was now no further development at all which could promise even a single order-of-magnitude improvement in the productivity of software systems products[5]. For the last twenty years, the silver bullet prediction has looked pretty good (for about five minutes around the time of the VA Software IPO, some people thought that the Linux development model was a kind of silver bullet, but over time I think it’s grudgingly been accepted that ten years to build a decent UNIX clone is pretty good for free, but it’s not an order-of-magnitude improvement).

There are a number of reasons why Brooks made his prediction but the core thesis of his book with respect to software development is that it is an “unpartitionable problem”; that software development tasks cannot be divided up among people very well and that for this reason there are fundamental cognitive constraints on the speed of software development. Under the classic model, most decision-making roles are also unpartitionable; the buck has to stop in some specific place, rather than having a nickel stop in each of twenty different places.

The thesis of TWOC, on the other hand, is that decision-making is a much more partitionable task than it is commonly supposed to be, and that it should be partitioned much more. As you can see, this comes close to suggesting that there is a “silver bullet” for a raft of organisational problems; that if we were to structure our businesses and governments so as to maximise our usage of the wisdom of crowds, we would get much better decisions for less or the same effort. Although James doesn’t actually say as much in the book, there are already some breathless futurologists out there who see a world in which nobody does normal white collar jobs any more, and instead we all sit around trading futures on our company’s internal markets. Sort of “The New Industrial State meets Trading Places”.

Hmm. There is a lot of sense to the underlying idea; that people making decisions ought to discuss things with one another, and that effective communication can help groups outperform even their best members; the example of the monetary policy committees is a good one, and the book provides a pathological example of the lack of use of group discussion which helped lead to the last Space Shuttle crash. This is something which I agree with, and which is terribly important.

But I don’t like the actual “silver bullet” that James proposes. If I read TWOC correctly, it’s arguing for something a bit more radical than people talking to one another. Like anyone who’s ever said “why don’t we have a rule that you can only talk if you’re holding this bunch of keys” to try to forestall a nasty argument, the author of TWOC seems to believe that problems in human communication can be solved by simple institutional means. In the case of TWOC, the silver bullet is something called “preference aggregation”. This doesn’t have to be a market price; voting procedures are also preference aggregation, as (presumably) opinion polls could be; I understand from internet comments that James believes the best preference aggregation mechanism of all to be the hybrid market-like system used by the Tote and by pari-mutuel horse betting. Anything that takes the opinions of a group of people and distils them into a single number, with that number representing concentrated Wisdom of Crowds would suffice, though.

What’s wrong with preference aggregation?

I don’t like this idea. Specifically, it seems like too much of a free lunch. You take a bunch of views which contain some information, compress them down to a single number, and somehow come out with more information (or more usable information, which amounts to the same thing) than you started with. Granted, you can throw a basketful of bread off a roof and come up with an estimate of pi, but genuine free lunches like that are few and far between. Aside from this, I think that there are two very serious problems with preference aggregation as a solution to organisational problems:

First, they’re black boxes. However partitionable decisions can be made, they’re not perfectly partitionable. In general, the bigger and more important the decision, the less partitionable it is. Which means that, in most important or interesting situations, aggregated preferences are going to be a decision support tool for a particular decision maker. And decision support tools shouldn’t be black boxes. I’ve spent quite a percentage of my professional life over the last ten years arguing this point and by now I think it’s won. Any improvement in decision quality that you get from a black box is more than outweighed by the fact that you can’t explain the output of the tool. You can ask questions of a committee, but you can’t ask a market why it thinks what it thinks. For most practical purposes, this is likely to mean that there would have to be an absolutely huge gain in information for it to be a good idea to move to aggregated preferences as a decision support tool.

Second, I think that the preference aggregation solution encourages pathological behaviour. Even assuming, in my view per impossibile, that careful design of the aggregation mechanism can get people to honestly estimate their information rather than “gaming the system” by misrepresenting their own views, there is a much more fundamental problem. A lot of TWOC, and for my money some of the most valuable insights in the book, is taken up with describing the conditions under which crowds perform really badly – market panics and groupthink. In order to avoid these pathological outcomes, the book is very clear that it is vitally important that you have a group of genuinely independent individuals, aggregating views which they have arrived at by thinking for themselves.

I happen to agree massively with James on the subject of it being vitally important that people think for themselves. But surely it sends entirely the wrong message to people if you’re on the one hand telling them to think for themselves, and on the other hand presenting a magical silver bullet/black box aggregation mechanism to which the eventual decision is going to be delegated? There are real-world examples of this pathological outcome at work; in particular, the Federal Reserve did not take any action to pop (or even to publicly criticise after 1996) the stock market bubble because it was not prepared to set its own judgement against that of the equity crowd. In general, I would argue that monetary authorities pay far too much attention to interest rate futures. There is a strong incentive created by all of these aggregation mechanisms for people to free-ride on the mental effort of others.

So what to make of it?

As I say, TWOC is a good book, but one that I think should be read critically rather than credulously. Crowds can give you very good answers if you know what questions to ask and if you know the correct way to interpret the answers (so could Buffon’s loaves of bread, mind). The wisdom of crowds should not be ignored, and it’s certainly a good idea to listen to what people are saying, but there is no magic solution here, and you shouldn’t delegate your own thinking to it. Even Humphrey Neill, author of “The Art of Contrary Thinking” (the book in which the magazine cover indicator was first trailed), used to believe that the crowd was right most of the time. Specifically (and to be fair, in the context of stock market investment, this is close to being an analytical truth), the crowd was right during trends and wrong at turning points. I think that this captures an important part of the truth behind the “Wisdom of Crowds”: crowds are very good at answering questions, but not so good at deciding what questions to ask. Or, another way to put it is that, whatever the I Ching might claim, you can’t tell the future by throwing sticks.

[1] Don’t know if it’s still there, but as of this morning, Kerry was being quoted at 0.51 in the vote share market and 0.49 in the winner-takes-all market. As I’ve noted before, this isn’t anything to do with the electoral college; despite the name the Iowa WTA market is simply a binary option written on the vote share contract with a strike of 0.5.
[2] By the way, I’ve been in a lecture hall where the jellybean experiment was tried and it failed rather embarrassingly. As the theory predicts it will, every now and then.
[3] And that in general, nobody’s all that interested in anyone else’s.
[4] The main reason why James seems so keen on markets is that he believes that they aggregate the preferences of all participants rather than just the marginal buyers and sellers. I think that this view is both mathematically and economically incoherent, but I’ve snipped the discussion of “Surowiecki on market microstructure” for reasons of space and time. I think my main objection can be summed up in a joke I made some while ago in CT comments that if people who neither bought nor sold a security were contributing to the eventual price by their abstention, then it seems to follow that there are Tibetan monks who have never heard the name of Jesus Christ but who were nevertheless part of the price discovery process in the Google IPO.
[5] “Software systems products” has a very specific definition in the context of Brooks’ book, btw; whatever your counterexample, he’s probably thought of it.



Chris 08.19.04 at 5:00 am

And, of course, determining the value of pi by throwing sticks isn’t really getting a “free lunch” in the information-theory sense. The information about pi is out there, so to speak, in the natural world.


Russell L. Carter 08.19.04 at 5:29 am

Finally! A post worthy of the pro side for assassinating DD’s blog. Doesn’t Ted Porter cover all this at a more fundamental level though? And yes, Brooks is dead right still about software; the Singularity is a bit off in the future yet.


MQ 08.19.04 at 6:51 am

Surowiecki’s book seems to have been published about 5 years too late. I think we are due for a correction on the “markets solve everything” mania of the 1990s.


bad Jim 08.19.04 at 6:57 am

A footnote: Brooks wrote an article titled “No Silver Bullet” in April 1987 for Computer.

As for Linux, it’s solving a different problem than that which concerned Brooks, using methods which weren’t particularly revolutionary back in the 70’s.


bad Jim 08.19.04 at 7:41 am

Pardon my obtuseness, but doesn’t the observation that the crowd was right during trends and wrong at turning points rather decisively vitiate Surowiecki’s thesis, since the inflection points are those we most desire to discern, the trends typically being rather obvious?


Matthew2 08.19.04 at 10:29 am

This is really interesting, fundamental stuff, thanks Daniel and James.
Yes, the buck has to stop somewhere; there have to be people making decisions (as anyone who has been with a group of friends trying to decide which restaurant to go to knows…). This is why we (sadly) have always needed leaders, be they kings or democratic presidents, no matter all the horrible problems they bring.


Nadeem Riaz 08.19.04 at 10:29 am

I think you’re avoiding one of the most successful preference aggregators in TWOC .. i.e. Google’s PageRank Algorithm. It seems hard to see PageRank as another statistical trick.

On another note, I’m not sure why compressed summary statistics are a bad way to make decisions. Sufficient statistics are the norm for testing hypotheses in the sciences — so this type of compression doesn’t seem a priori wrong. Also, providing “the reason” for a particular decision seems extremely naive to me. At least if you look at the literature in medical AI (an uncertain domain, with an inexact reasoning process), it becomes very clear that (a) experts often disagree (with agreement barely ever above .60 — this is for extremely difficult cases, not run-of-the-mill stuff) and (b) experts have an extremely hard time explaining why they reached a decision. Providing one piece of evidence in support of the decision isn’t really an explanation.

When people have multiple pieces of evidence, and no firm and reliable process to combine that information, and they are working in a very uncertain domain — then I think aggregatory mechanisms can prevail.


dsquared 08.19.04 at 10:45 am

When people have multiple pieces of evidence, and no firm and reliable process to combine that information, and they are working in a very uncertain domain — then I think aggregatory mechanisms can prevail

Not wanting to sound arrogant, but stock market investment is a domain of exactly this kind, and as I say above, after ten hard years, I think the struggle against black boxes has been won.

Google is an interesting one, but what problem is it actually solving? The preference aggregation problem for Google is the whole problem, not an input to a more important decision. I’d put that in the category of semi-attached anecdotes; the fact that a difficult problem (searching huge amounts of text) was solved by a small group of clever men does not seem to me to be the wisdom of crowds.


Nadeem Riaz 08.19.04 at 11:36 am

Google solves a decision problem for its user. I want to learn about economics, but I’m not sure where to go — I ask Google — and it aggregates the wisdom of the digital crowd and decides for me which site to go to. There are probably millions of pages on the web that contain the text “economics”; what’s important is ranking them by relevance. So PageRank calculates a rank for each page based on how many other pages link to it. Each vote is weighted, depending on how important (popular) the linking site is. So if 50 really popular sites all link to my site X, then site X probably has a good introduction to economics. If 5 mediocre sites link to site Y about economics, then site Y probably has a poor introduction.

The process Google uses to pick which site to recommend first is exactly TWOC. And it seems clear that it works well, at least in this case. Now experts (i.e. Yahoo circa ’97) might be able to do better if the internet were finite, but even then TWOC produces extremely useful information anyway.

I don’t know enough about the market and how black boxes model it to comment on it; however, it might have many more interacting parts than the areas I’m more familiar with (e.g. in medical AI, patients are completely independent, while obviously stocks aren’t).
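(For concreteness: the weighted-voting scheme described above is essentially the published PageRank recurrence, computable by power iteration. A toy sketch with made-up pages follows; it has nothing to do with Google’s production system.)

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank over a dict {page: [pages it links to]}.
    Each page spreads its own score among the pages it links to, so a link
    from a highly ranked page counts for more: the 'weighted vote'."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outlinks in links.items():
            if not outlinks:                      # dangling page: share evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
            else:
                for q in outlinks:
                    new[q] += damping * rank[p] / len(outlinks)
        rank = new
    return rank

# Site X is linked by several popular hubs; site Y by one obscure page.
web = {"hub1": ["X"], "hub2": ["X", "hub1"], "hub3": ["X", "hub2"],
       "obscure": ["Y"], "X": [], "Y": []}
ranks = pagerank(web)
print(ranks["X"] > ranks["Y"])   # True: X outranks Y
```

Each page’s score can be read as the probability that a “random surfer” is currently there; popular pages pass on more of that probability through their links, which is exactly the weighting at issue.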


RobotSlave 08.19.04 at 12:14 pm

Google’s PageRank is successful?

I’m sorry, but I’m going to need a little help in understanding the measure by which PageRank has been determined “successful.”

Personally, I hope the day never comes when I try to understand the behavior of my inevitable teenage child with the assistance of Google. Or have you never used the word “teen” as a search term? (If not, don’t.)

The failings of PageRank are fairly well documented by now. That they receive far less attention than the breathless vapourings of various Wall Street hucksters and self-appointed “technorati” is simply a matter of human nature— after all, no-one loves a pessimist, right?

The paucity of PageRank’s offerings is not immediately obvious if you compare them only to whatever other dregs you’ve managed to dredge out of the internet. It becomes apparent only when one compares Google to a decent academic or municipal library, staffed with people holding advanced degrees in the Librarial Sciences.

Of course, you’ll need to put your trousers on, and summon up the courage to interact face to face with total strangers who might be better informed than yourself if you want to test this proposition, so the potential costs might outweigh the benefits.

(PS– please do not bother to reply if you are the sort of person who is still, at this late paragraph, chuffing and wheezing audibly at the very notion of a “degree” in the “librarial sciences.” )


Tom 08.19.04 at 1:39 pm

(PS— please do not bother to reply if you are the sort of person who is still, at this late paragraph, chuffing and wheezing audibly at the very notion of a “degree” in the “librarial sciences.” )

But… But…

(sorry, cheap…)


Shai 08.19.04 at 1:40 pm

here’s my contribution to the irrelevant google point:

I would like to know what nadeem is referring to when he speaks of aggregation. you have to transform your data in some way, which is precluded if there is “no process to combine that information”. perhaps if he expanded on his medical AI example. when I think of medical AI, I think of mycin (rule based system with crude statistical incidence function), or some sort of predictive analysis using risk factors (to aid diagnosis, or in the case of an hmo, maximize profit). walmart’s inventory system predicts demand but that’s an entirely different form of preference aggregation. just wondering how both points tie together.


RobotSlave 08.19.04 at 1:57 pm


Cheap, I don’t mind.

Funny, even, I’m quite happy with.

But (perhaps unconsciously?) referencing the very institution being questioned..?

Well, Ok.

That’s pretty funny, too.

Score yourself 5 bonus points.


nnyhav 08.19.04 at 2:08 pm

So why not view preference aggregation as a result of coordinating disaggregation amongst market elements (whether players or pieces) along the lines of, say, simulated annealing? It’s not single numbers but relations amongst many such that comprise market information; the whole Hayek thang is predicated upon the value (heh) of dynamic adaptive processes to filter same: Price is epiphenomenal of allocation. (And isn’t the point of all this to improve description rather than explanation, providing a ground from which figuration can proceed?)


Zizka 08.19.04 at 2:31 pm

Some forms of groupthink can be thought of as long-shot-gamble sorts of collective action. Usually groupthink fails because gamblers usually lose, and the group members suffer or die, but occasionally a group guesses right. The solidarity of the group, on non-rational grounds, was required for success to be possible at all, but on the average such groups fail.

For example, the Mormons were one of a large number of fanatical, irrational XIX-c American religious groups. Unlike almost all of the others, they guessed right, survived, and flourished (they now have a powerful global mission).

Another example is the settlement of Polynesia by groups who went out without knowing what their destination was or even whether it existed. (My understanding is that this was done on the basis of a religious vision, perhaps during a period of overpopulation pressure.) In order for these explorations to find new land, they would have to go beyond the point of no return (since everything within that radius had already been found). The successful voyagers became the Maoris, Hawaiians, etc. The unsuccessful ones just died.

By and large, of course, we don’t want our nations to be led by experimenters of the Maori-Polynesian type, though some of the Armageddonists in the Republican Party seem to be willing to go that way.

Alvin Gouldner (“The Hellenic World”) argued that ancient Athens was a gambling society of almost that type. He called it, as I remember, “total commitment rationality”.


James Surowiecki 08.19.04 at 3:27 pm

Biased as I am, I have to say that I think this is an excellent review (though, yes, I disagree with much of it), and Daniel’s right that asking “what the heck has this got to do with the wisdom of crowds?” is a good thing to ask while/if you’re reading the book. I also think that the formulation “crowds are very good at answering questions, but not so good at deciding what questions to ask” is not one I’m too far away from. But I think that crowds, under the right circumstances, really are very good at answering questions, and so we should rely on them more.

Daniel and I have thrashed out a lot of these issues before, but just a few notes:

I’ve never really understood the idea that markets only work because “they’re performing a function,” and that therefore if you take away the function, you take away their information-processing potential. Or rather, I don’t understand why we need to define “function” in as limited a sense as it’s defined here. I just go back to the racetrack. A parimutuel market’s only function is to allow people to bet money on the outcome of horse races. (Really, I suppose its function is to allow the track to walk away with its cut of the pool.) In serving that function, it does a phenomenal job of aggregating little bits of information. If you substitute business strategies, or presidential candidates for horses, I don’t see any reason why the information processing would work less well. I think, in any case, Hayek’s position in all this is complicated, because for the most part he was writing about markets for goods and services, where the problem is coordinating people’s behavior, rather than where the problem is deciphering a truth that is external to the market.

I also think Nadeem’s comments are very much on point. I don’t believe that the value of being able to get an explanation outweighs the gains you get from relying on TWOC, gains that in many cases will be, I think, “absolutely huge.” More important, I don’t really see how getting explanations helps you solve the problem of deciding which answer is the right one. Experts don’t agree with each other. If you have no method of aggregating their different opinions, how do you choose between the conflicting ones? Or, to put it differently, how do you decide what expert to listen to? As uncomfortable as Daniel is with black boxes, I’m uncomfortable with the absence of a systematic answer to this.

Finally, I think the Google example is a crucial one. Preference aggregation is not the whole problem for Google. In fact, it is an input into a more important question, namely “Of all the pages on the Internet, which is most likely to have the information this person doing the search is looking for?” And there is no doubt, at least in my mind, that PageRank, which relies on TWOC, answers that question far better than any individual expert or small coterie of experts could. (RobotSlave’s comment seems to me exquisitely irrelevant: PageRank isn’t being asked where in the entire world the best information on a subject is, it’s being asked where the best information on the Net is.) I think that’s pretty powerful testimony to the wisdom of crowds. (But then I would say that.)

And dsquared, I still owe you that email. Thanks for this piece.


bobcox 08.19.04 at 5:01 pm

Google does a good job of producing the first important reference for a search. It is lousy at producing the second important reference, because it usually produces something almost the same as the first. If I want to know what “BMG” means, Google is a lousy place to find the result “Balfour Must Go”.


Mike Kozlowski 08.19.04 at 5:13 pm

Struggle against black boxes: I dunno, Fair Isaac seems to be doing pretty well, and they’re black-box to their clients.


Dan Hardie 08.19.04 at 5:35 pm

James Surowiecki- this is a very minor point, but I remember that on one of the Brad DeLong threads about DARPA’s proposed PAM, you and I argued for some time- 3 posts each addressed to the other- about one of my criticisms of DARPA: namely, that if DARPA correctly predicted outcome x at time t, the US Govt would, if outcome x was unfavourable to it, take action to prevent outcome x. Therefore, if the US Govt listened to DARPA and responded successfully, at time t there would be a different outcome, thus penalising those who had made the right call and rewarding those who hadn’t. You thought this completely wrong. (One example discussed was of a coup in a Middle Eastern country.)

Turning to ‘TWOC’, I find that whilst you still defend DARPA, you do now say that it could have a problem in that, if DARPA buyers predicted a coup in a Middle Eastern country, say, at a particular time, then the US Government would attempt to forestall the coup, which, were the US Govt successful, would mean, etc., etc.


son volt 08.19.04 at 6:12 pm

Taking the presidential race futures market as an example: there’s a maximum bet (IIRC), so there isn’t a lot of money to be made. Thus it is likely that some people place bets, not because they hope to win money, but because they want to tilt the market to the side they are supporting in the election.

Taking away the betting maximum would help, which in turn suggests that Vegas’s book would be a more reliable predictor than the IEM. But if Vegas suddenly became regarded as a crystal ball on the election race, it’s not hard to imagine that campaigns, or other people with a stake in the election’s outcome, would start to dump money into the pool to try to garner that favorable headline (especially if the votes were evenly divided, but the money favored one side).


pw 08.19.04 at 6:51 pm

The epi in epiphenomenon is pretty much the crucial thing, because that’s what governs how much real information is out there, and at what cost. To the extent that your data are being generated by people who are already doing that thing for some effectively unrelated reason, you can get a free (or cheap) lunch. But when you try to piggyback on a thin or otherwise subpar market, you get lots of noise and false signals.

Google’s PageRank is a perfect example of that (see “googlebomb”), but also of the fact that you don’t really get to decide what question you want to ask the market. (Along the same lines, I once did a nice article pointing out that the index of leading economic indicators had become a rather better predictor of Federal Reserve behavior than of the economy.)


Russell L. Carter 08.19.04 at 7:41 pm

“PageRank isn’t being asked where in the entire world the best information on a subject is, it’s being asked where the best information on the Net is”

What does “best” mean? Surely, there are just two possibilities: the result of the PageRank algorithm, and what you happen to be looking for. If what you happen to be looking for might be imperfectly but profitably produced by interested parties, they’ve now got sufficient incentive to indulge in pathological behavior (i.e., google bombs, which is DD’s second complaint). Unfortunately, in the simple algorithmic process of aggregating preferences there seems to be no mechanism at all available to discern integrity from base popularity. Isn’t this a problem? Google doesn’t provide a warranty for its results, it seems. This isn’t limited to Google, it appears to be a general problem.


nick 08.19.04 at 11:53 pm

I think you’re avoiding one of the most successful preference aggregators in TWOC .. i.e. Google’s PageRank Algorithm.

PageRank represents the triumph of ubiquity over quality, and while those two often intersect, they quite often do not. That is, unless you already know what you’re searching for — an accurate quotation or set of data where you already have partial data as your search query — which is a different kind of ubiquity (‘ubiquity of input’, if you like). There’s some decent research to be had on the effectiveness of PageRank based upon the degree of partial knowledge that goes into the input: hard to quantify, but worth a try.

More generally, as someone who deeply appreciates MacKay and the late Charles Kindleberger on the madness of crowds, it’s obvious that I need to read James’s book… in part because it may help me address the intriguing crowd behaviour that emerged after the South Sea Bubble, rather than the stuff that preceded it.


a different chris 08.20.04 at 12:54 am

>the crowd was right during trends and wrong at turning points.

Crowds are really good at knowing what they, the crowd, thinks. That’s PageRank, Britney Spears, what have you.

They aren’t going to do nuclear physics, or even invent it.

Garbage In, Garbage Out.


James Surowiecki 08.20.04 at 2:24 am

Google does not produce garbage out. It consistently returns excellent results incredibly fast. And it doesn’t represent the triumph of ubiquity over quality. It represents the recognition that those sites that people link to most are, more often than not, the sites of highest quality. That is, it recognizes that the crowd is wise. Googlebombing is a real problem (and I wrote about it recently), but it’s not true to say that Google is doing nothing about it. And the reason people are trying so hard to manipulate Google is that its superiority as a search engine has made it so popular.

I also think one thing to keep in mind is that there’s no claim here that collective decisionmaking is always right. The argument is that it simply maximizes your chances of getting a right answer. It’s easy to talk about the pathologies and perils of group decisionmaking, and I actually spend a lot of time in the book talking about these. But we need to be equally honest about the pathologies and biases built into individual decisionmaking — biases that, for example, behavioral economists and social psychologists have amply documented. One virtue of collective decisionmaking is that if the crowd is diverse and independent, those biases cancel each other out rather than reinforcing each other. Relying on a single individual decisionmaker — like, say, a CEO — means you’re stuck with him and his flaws. And this is to say nothing about the limits on the information-processing capabilities of a single person.
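That cancellation is easy to see in a simulation. This is a toy model with invented bias and noise parameters, not data from the book: a crowd of guessers, each biased to one side or the other of the truth, whose average nonetheless lands close to it:

```python
# If individual biases are spread on both sides of the truth and errors
# are independent, averaging cancels them: the crowd's mean guess lands
# far closer to the true value than the typical individual does.
import random

random.seed(0)
truth = 100.0

# Each "diverse" guesser has a personal bias plus some noise (illustrative
# parameters: biases uniform in [-30, 30], noise with std dev 10).
guesses = [truth + random.uniform(-30, 30) + random.gauss(0, 10)
           for _ in range(1000)]

crowd_error = abs(sum(guesses) / len(guesses) - truth)
individual_errors = [abs(g - truth) for g in guesses]
avg_individual_error = sum(individual_errors) / len(individual_errors)
# crowd_error comes out far smaller than avg_individual_error.
```

The catch, as the chapter on herding makes clear, is the independence assumption: correlate the biases and the cancellation disappears.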

By the way, Dan, I’m a little mystified by the idea that I’ve changed my tune between last summer and the time the book came out. During one of those debates on Brad DeLong’s website, I wrote this about the DARPA plan: “The one substantive problem DARPA’s plan did have was that if the U.S. acted on the information the market aggregated and stopped events from happening, it would take away the incentive for traders to be right (since making the right bet would make it more likely that the event you were betting on would never happen). I don’t think the problem was insoluble, but it was real.” Needless to say, then, I never thought this objection was “completely wrong.”

I found the page with my and Dan’s comments, by the way, using Google. It was the top-ranked page for the search. Google sifted through 4.3 billion pages in 0.12 seconds to find it. That’s a very strange definition of garbage out.


Lukas 08.20.04 at 3:11 am

They aren’t going to do nuclear physics, or even invent it.

Because physics is an “unpartitionable problem”. I doubt most theoretical physicists can beat the stock market consistently, though.


Lukas 08.20.04 at 3:38 am

In general, the bigger and more important the decision, the less partitionable it is.

You mean like how to allocate scarce resources?

I agree that black-box preference aggregation is not ideal for guiding a large organization – a company, or a country. A decision is the tip of the iceberg of the values and assumptions held by a leader, and that iceberg guides the, er, um, ship of state. Ahem.

But surely there are areas that stand to benefit from exploding decision rights and turning into a collection of Hayekian local actors. Other than just commerce.


RobotSlave 08.20.04 at 4:55 am


You’ve chosen to reject the notion that a librarian might be more helpful than Google, but then, you’ve got quite a bit invested in that opinion, so I don’t think I’m likely to convince you otherwise, no matter how stupefyingly obvious it seems to me. Though calling it “exquisitely irrelevant” probably made you feel better, the comparison does, in fact, relate rather directly to DD’s point about individual expertise vs. aggregated opinion. But I didn’t mind the snark; what got to me was your attempt to sidestep the hard part.

You chose to ignore my more substantial criticism of Google, that being that it often returns absolutely terrible results. Moreover, these awful results are the direct consequence of zealotry, obsession, lemmingism, and other “unwise” group behaviors.

No, you can’t dismiss all of the bad results as mere “googlebombing.” (Nor, for that matter, are you allowed to tell us googlebombing is “a problem,” unless you are also willing to admit that allowing any highly vocal or well-funded group with shared beliefs into a given “crowd” is also “a problem,” but I digress).

No, the lousy search results were there before anyone started gaming google, for fun or for profit, and Google has to spend more resources than they will openly admit to on manually “correcting” the “wisdom” of the crowd.

(To James, and only James: If you don’t believe me, try searching for “teen” using web search and then try again, this time using image search. Notice any difference? Do you honestly think the two results were both produced by a straightforward, unmodified PageRank?)


Russell L. Carter 08.20.04 at 5:27 am

“Google does not produce garbage out.”

“Not garbage” is an odd synonym for “best”. Seriously, I don’t think you’ve addressed the problem. Very good is not best. There are pervasive preference-aggregating algorithms embedded in society which do an excellent job in the long run of determining the very best. Think peer review in the hard sciences. More generally, I’d be interested in any evidence that a preference-aggregating algorithm based on TWOC will do more than provide transiently OK results on any problem which requires aggregating informed opinion in the presence of incentives for pathological behaviors. I think that for your general conclusion this category might be rather important.


Lukas 08.20.04 at 6:32 am

You’ve chosen to reject the notion that a librarian might be more helpful than Google

He’s not comparing them. They can’t be compared – Google doesn’t have the same published material to draw from.


Shai 08.20.04 at 7:54 am

That is an example of the popularity problem, although I’m glad we all agree that making out is better than mass genocide.

About the “teen” example: that’s mostly a collision-of-search-terms problem. PageRank isn’t a very effective mechanism to solve this problem over and above basic text analysis, but cluster analysis of link structure will help search-term refinement (e.g., see Teoma; they call it “Subject-Specific Popularity”). It isn’t a silver bullet (your search term has to be sufficiently refined), but a human with only general knowledge (e.g. a librarian who doesn’t have expertise about the topic at hand) won’t be able to help much either, beyond telling you what indexes to use and how to input your search terms. So search terms are a kind of limit on the “crowd” polled in the link analysis, etc.

I do love off-topic discussions, but I’m afraid I can’t tie this to the topic at hand.


Dan Hardie 08.20.04 at 9:32 am

James S- still a minor point, but on the July 29 ’03 thread on BDL’s blog, you did see this as a problem with DARPA, whereas on the July 30 ’03 thread you don’t entertain the idea that this could be a problem, don’t cite any arguments in support of this thesis, and cite a lot against it: the US won’t usually be able to prevent things happening, the contracts relate mainly to trends not specific events, etc. But no, fair enough: you were kicking ideas around, and you did say some of the time that this would be a problem, so my bad.

‘Experts don’t agree with each other. If you have no method of aggregating their different opinions, how do you choose between the conflicting ones? Or, to put it differently, how do you decide what expert to listen to? As uncomfortable as Daniel is with black boxes, I’m uncomfortable with the absence of a systematic answer to this.’
This is a big, beeg question. One response would be to ask if there can be a systematic answer to it. Another response might be to suggest that successful knowledge-processing bureaucracies do, perhaps, informally evolve systems for finding the right expert to listen to: by such means as peer review, internal debate, openness to outside contributions, etc, they make it possible to assess the state of each expert’s knowledge, the rigour of his analyses, the match between those analyses and the empirical evidence.


Robin Hanson 08.20.04 at 11:52 am

Daniel wrote: “Market epiphenomenalism” is my attempt at a name for what Robin Hanson thinks about markets; in my view it’s based on a misreading of Hayek’s original work on markets as information processing entities. … this is more of a hobby horse of Hanson than of James Surowiecki. In the linked comment on Hayek, Daniel wrote: I do want to comment on the fact that a number of bloggers analysed it in terms of Hayek’s concept of tacit knowledge and markets as information-creating social entities.

For the record, I was not one of those earlier bloggers, and I do not much read, and so cannot much misread, Hayek. Hayek made great contributions in his day, but the academic conversation about information aggregation, in speculative markets and elsewhere, has moved well beyond Hayek in the half century since. Daniel does correctly surmise that I agree with James Surowiecki that we can pay to create betting markets and interpret their prices as decision advice without close attention to the context of the rest of the economy. But my reasons for thinking this have little to do with Hayek, and more to do with recent academic research and the simple idea that anyone who notices a mistake in the price of a speculative market can make money by fixing that mistake.

Perhaps Daniel focuses on Hayek because he is one of those people who think that current academic economists usually have everything all wrong? But if we reject the Wisdom of Crowds, and the Wisdom of Experts, what do we have left – the Wisdom of Bloggers?


Jonathan W. King 08.20.04 at 3:43 pm

Footnote 1 says (about the Iowa Electronic Market contracts):
> Don’t know if it’s still there, but as of this
> morning, Kerry was being quoted at 0.51
> in the vote share market and 0.49 in the
> winner-takes-all market.

It’s gone now. Bush vote share is .492 to Kerry’s .490. WTA for Bush is .51 to Kerry’s .49. The WTA should have a higher price than the VS contract, of course.

Anyway, thanks for pointing this out; somebody might ask the IEM people if they have looked at the availability of pure arbitrages in the markets they have run over time.


James Surowiecki 08.20.04 at 3:48 pm

A couple last notes on the Google question (though I imagine this thread is dying, even if it’s very nice that Robin Hanson showed up here to comment on the discussion):

1) I think Google is clearly the best method of finding the information you’re looking for on the Internet, and that it’s successful because of the wisdom of crowds. But this doesn’t mean that it will always return better results than any other method, nor does it mean that the pages it retrieves will be the best ones out there. It means that using Google maximizes your chances of finding the information you’re looking for (assuming it’s available on the Net).

2) The fact that Google occasionally (and if it does happen, it’s occasionally, not often) returns “absolutely terrible” results is not proof that any other method is better. And in any case, I don’t really understand where these assertions that Google is often very bad at finding information come from. I use it, as many of us do, tens or (if I’m working on a piece) even hundreds of times a day. I get “bad results” a remarkably small part of the time.

3) The “teen” example is a bad one, because “teen” offers few clues as to what kind of information you’re looking for. This isn’t a flaw inherent to Google, it’s just the nature of communication. If you went up to a librarian and said “Give me information on teens,” she wouldn’t just start hunting in the stacks. She’d ask you what about teens you were interested in. The equivalent in Google is just refining your search. (By the way a “teen -porn” search will significantly improve your results.)

4) Finally, as far as Robot’s suggested librarian-Google contest goes, I hadn’t thought of this, but I’d be glad to sponsor a contest, kind of a John Henry thing. You pick your librarian (or librarians, if you want experts in different fields), and we can see whether they (relying on no search engine) or Google can find better information on the Internet more quickly. I would wager that Google would give you more good answers.


Aaron Swartz 08.20.04 at 4:04 pm

I came to much the same conclusion except instead of giving the book points for being well-written (although I do respect the columns), I thought it was seriously irresponsible to publish a book with that title and those claims, and then do nothing to test them or back them up.

My review: Surowiecki needs a crowd to make him wise

“Upon hearing about a book on ‘the wisdom of crowds’, I expected it to answer three questions: Are crowds wise?, When are they wise?, and Why are they wise? Sadly, this book answers none of them. …”


Jonathan W. King 08.20.04 at 4:04 pm

I wrote (about the IEM arbitrage):
> It’s gone now. Bush vote share is .492 to
> Kerry’s .490. WTA for Bush is .51 to
> Kerry’s .49.

I plead insufficient coffee. Obviously, share prices in the VS market must sum to 1.00, so right now there is another pretty tremendous buying opportunity. But the IEMs are pretty thinly and unevenly traded right around now, so good luck on cashing in.


dsquared 08.20.04 at 5:01 pm

the simple idea that anyone who notices a mistake in the price of a speculative market can make money by fixing that mistake.

This is a fairly strong claim about market microstructure and IMO only true in special cases.


dsquared 08.20.04 at 5:11 pm

Jonathan: prices in the VS market ought to sum to 1.00 minus the time value of money (important caveat)


Randolph Fritz 08.20.04 at 5:46 pm

“You pick your librarian (or librarians, if you want experts in different fields), and we can see whether they (relying on no search engine) or Google can find better information on the Internet more quickly.”

Try it in architecture, dance, or music–in fact, any of the non-verbal arts. Even in literature I think you might have problems.

Or try it in any currently politically controversial subject–the difference between the results for “global warming” and “global climate change” is quite substantial.

Google is very large and, for current problems of wide popular interest in which there is consensus, very useful. Where there is conflict it is usually possible to identify blocs of viewpoints, but it is not, without outside knowledge, possible to evaluate those viewpoints. And on subjects which are not current, or easily written about, or of wide interest, Google is of questionable value.


RobotSlave 08.20.04 at 8:06 pm


I bet if we had a contest, I’d win. Therefore, I’m right.

Solely in the interest of demonstrating the poverty of that rhetorical technique of yours, I’ll just turn your proposed contest on its head.

Let’s see who can find better books on a given topic. You get to use Google (which has access to quite a few card catalogues, in addition to other book-finding resources, such as Amazon). I, on the other hand, get to use a Librarian, with all of the tools librarians normally have at their disposal (including or not including Google, depending on what, precisely, we are trying to test, here).

Incidentally, why should an individual have to refine her search when she uses Google, James? Are all people equally skilled at refining google searches?

Google currently factors successive search refinements into its indexing algorithm; shouldn’t this aggregation, under your grand theory, obviate the need for search refinement on the part of the individual?

Shouldn’t crowds be better at refining searches than individuals?

No, of course not. You have to figure out what question to ask, naturally, and crowds just can’t do that, as has been pointed out by others here (and dismissed, wrongly, by some).

When I was in school (before Google came along and, with its crowd-wisdom, removed the need for school) I was taught that the hardest part of finding an answer is very often asking the right question. It would appear that this is still the case, and another billion PageRanked teenage poetry blogs aren’t going to make it go away.


James Surowiecki 08.20.04 at 8:24 pm

Why should an individual have to refine his or her search? Because it’s the individual who’s asking the question. Maybe it’s just me, but I don’t see how we can expect any problem-solving method to solve a problem when the problem itself is completely unspecified. If you ask a group of money managers: “How much is a company worth?” you’re not going to get a very good answer. You need to tell them what the company is, what it does, etc. to get a good answer. Similarly, if you ask Google “Get me information on teens” it’s not surprising that the answers you get aren’t that great.

As I said above, I have no problem with the formulation that groups are good at answering questions, not asking them. I wrote the book on the assumption that individuals have little trouble coming up with questions they want the answer to. My argument is that collective decision-making is often the best available way to answer them.

In any case, regardless of what you learned in school, it’s a lot harder to answer, for instance, the question: “What are the odds of winning on each of these seven horses?” than it is to ask it. And the crowd of bettors consistently answers that question better than any single individual can.


dsquared 08.20.04 at 8:28 pm

I wrote the book on the assumption that individuals have little trouble coming up with questions they want the answer to.

I refer the hon. gentleman to my post “The Correct Way to Argue With Milton Friedman”.

(I also disagree on whether racing crowds are all that, but perhaps that’s for another time)


Jonathan King 08.20.04 at 9:42 pm

dsquared writes:
> Jonathan: prices in the VS market ought to
> sum to 1.00 minus the time value of
> money (important caveat)

Yes, but I had even less caffeine in my system than I had thought. I added .492 and .49 and got .962, which was so far below 1.00 that I just blew off the time value of money part for this, a 64-day contract. Even with the correct math, though, you can calculate a risk-free return of almost 11% per year at this price.
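The arithmetic is easy to check. A sketch, using the prices and the 64-day horizon quoted above, and ignoring dsquared's time-value caveat and trading costs:

```python
# Buying one Bush and one Kerry vote-share contract costs less than the
# $1.00 the pair must pay out at settlement (vote shares sum to 1),
# locking in a risk-free profit over the 64 days left on the contract.

kerry, bush = 0.490, 0.492           # quoted vote-share prices
days_to_expiry = 64

cost = kerry + bush                  # 0.982 to buy the pair
payout = 1.00                        # guaranteed value of the pair at expiry
gross_return = payout / cost - 1     # a bit under 2% over 64 days

# Compounded to an annual rate:
annualized = (payout / cost) ** (365 / days_to_expiry) - 1
# annualized comes out at roughly 0.11, i.e. "almost 11% per year".
```

The "almost 11%" figure in the comment above checks out; whether you could actually fill both legs at those quotes in such a thin market is another matter.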

But thanks again for the attention to detail I can’t manage on a Friday, it appears.


James Surowiecki 08.20.04 at 9:44 pm

Daniel, now you have me confused. You argued that “crowds are very good at answering questions, but not so good at deciding what questions to ask,” which I assume means that we’re better off relying on individuals to decide which questions to ask. My supposedly Friedmanesque assumption was that we can rely on individuals to come up with the questions they want answered.

What’s the real difference between those statements? And how is my assumption Friedmanesque?


James Surowiecki 08.20.04 at 9:54 pm

Jonathan, where are you getting this data? The last trade on the IEM’s VS market in Kerry was at 0.496, while the last trade in Bush was at 0.503. That sums to 0.999. What IEM calls the “average price” is 0.496 for Kerry, 0.501 for Bush, which also sums to just about 1. Contrary to what you’ve been saying, there is no internal arbitrage available in the VS market.


dsquared 08.20.04 at 11:53 pm

James: I was joking, for heaven’s sake, man.

Lots of individuals are no good at coming up with questions; a small number are.


James Surowiecki 08.21.04 at 12:11 am

Sorry about that, Daniel. I figured that’s what you meant. My clash with Robotslave was just making me sensitive, I guess.


Jonathan W. King 08.22.04 at 4:03 am

James Surowiecki writes:
> Jonathan, where are you getting this data?
> The last trade on the IEM’s VS market in
> Kerry was at 0.496, while the last trade in
> Bush was at 0.503. That sums to 0.999.

I got the data when I originally posted, about six hours before my reply and your reply to my reply. At that point in time, the “last trade” numbers were what I said. The gap did close up, and it’s still gone now. A lot can happen in 6 hours, especially if somebody posts a hot tip on the ‘net. :-)

The arbitrage noted in the book review was present long enough to be noted by several people. Looking at the closing price data, it might have been available between 8/17 and 8/19, but it only got very large for the one day. On 8/13, there was the opposite opportunity (Kerry led the WTA, Bush the VS).

Hope this helps.


Keith M Ellis 08.22.04 at 3:02 pm

James, you should be forgiven for being a little short with RobotSlave, as it’s obvious he/she has a chip on his/her shoulder and was snarky from the get-go.

I’m just sayin’.

Comments on this entry are closed.