Whatever it’s called, the principle is the same: a group of people can often arrive at more accurate answers and better decisions than individuals acting alone. There are many examples, from counting beans in a jar, to guessing the weight of an ox, to the Ask The Audience option in Who Wants to be a Millionaire? But all of these examples are somewhat artificial, because they involve decisions made in a social vacuum. Indeed, James Surowiecki, author of The Wisdom of Crowds, argued that wise crowds are ones where “people’s opinions aren’t determined by the opinions of those around them.” That rarely happens. From votes in elections to votes on social media sites, people see what others around them are doing or intend to do. We actively seek out what others are saying, and we have a natural tendency to emulate successful and prominent individuals. So what happens to the wisdom of the crowd when the members of the crowd talk to one another?
… You can insert your own modern case study here, but perhaps this study ends up being less about the wisdom of the crowd than a testament to the value of expertise. Maybe the real trick to exploiting the wisdom of the crowd is to recognise the most knowledgeable individuals within it.
The first part of the critique is a fair one – that when individuals communicate with one another, they can get pulled into various forms of cascades and spirals of belief that can lead them further away from the truth. But the broader argument about expertise isn’t so good. In large part, this is because Surowiecki’s book concentrates (as Yong says) on aggregation processes in which individual guesses do not influence each other. But this is far from the only way to think about how crowds can come to more intelligent assessments than individuals. Scott Page’s Diversity Trumps Expertise theorem is a case in point. Roughly speaking, Page models the wisdom of crowds as a collective search for optima across a landscape of possible solutions, where each actor can only discern part of the landscape. He finds that a group of not very ‘smart’ individuals with very different understandings of the landscape will typically beat a group of ‘smart’ experts who share a broadly similar understanding of the landscape they confront. If this is right, Yong’s “look to the experts” approach is fundamentally wrongheaded.
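Page’s intuition can be sketched in a few lines of simulated search. The sketch below is a toy version only, not Page’s actual model: the random circular landscape, the step-size heuristics, and the relay rule are all my simplifying assumptions. Each agent’s partial view of the landscape is represented by the handful of step sizes it can take, and a team improves a candidate solution by passing it around until no member can better it:

```python
import random

random.seed(0)
N = 200
values = [random.random() for _ in range(N)]  # a rugged circular landscape

def climb(start, steps, values):
    """Hill-climb from `start`, trying only this agent's allowed step sizes."""
    pos, improved = start, True
    while improved:
        improved = False
        for s in steps:
            nxt = (pos + s) % len(values)
            if values[nxt] > values[pos]:
                pos, improved = nxt, True
    return pos

def relay(start, team, values):
    """Pass the current best point around the team until nobody improves it."""
    pos, improved = start, True
    while improved:
        improved = False
        for steps in team:
            new = climb(pos, steps, values)
            if values[new] > values[pos]:
                pos, improved = new, True
    return pos

# Each agent is a heuristic: three distinct step sizes drawn from 1..12.
agents = [random.sample(range(1, 13), 3) for _ in range(40)]

def solo_score(steps):
    """Average quality an agent reaches climbing alone from every start."""
    return sum(values[climb(s, steps, values)] for s in range(N)) / N

experts = sorted(agents, key=solo_score, reverse=True)[:10]  # best individuals
diverse = random.sample(agents, 10)                          # a random team

def team_score(team):
    return sum(values[relay(s, team, values)] for s in range(N)) / N

print(team_score(experts), team_score(diverse))  # compare the two teams
```

Depending on the seed, the random team can match or beat the hand-picked experts, because the top individual climbers tend to get stuck at the same local optima; Page’s theorem makes this precise under much stronger conditions than this sketch satisfies.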
Now, to be clear, Page simply assumes away many of the difficulties of aggregating knowledge. He assumes that groups are straightforwardly able to combine the different perspectives of their individual members, which abstracts away the possibility that they will get trapped in spirals. Even so, the appropriate response, it seems to me, is not to revert to trust in experts, but to think about the specific kinds of social arrangements that will be better, or worse, at aggregating the knowledge of diverse individuals in ways that capture its beneficial consequences while minimizing the risks of belief cascades. Cosma has a nice piece on this.
It might be thought that the theoretical explanation is rather simple, and goes (currently) under the name of “the wisdom of crowds” (Surowiecki 2004): individuals make noisy guesses, which on average are unbiased and uncorrelated, so simple averaging leads to convergence on the appropriate answer. Taken seriously, this explanation implies that our economy, our sciences and our polities manage to work despite their social organization, that science (for example) would progress much faster if scientists did not collaborate, did not read each other’s papers, etc. While every scientist feels this way occasionally, it is hard to take seriously. Clearly, there has to be an explanation for the success of social information processing other than averaging uncorrelated guesses, something which can handle, and perhaps even exploit, statistical dependence between decision makers.
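The averaging mechanism described here, and why correlation between guessers breaks it, is easy to see in simulation. In this sketch the particular numbers (true value, noise scale, correlation level) are arbitrary assumptions: each guess is the truth plus noise, and a share rho of the noise variance is common to everyone, a crude stand-in for guesses influencing one another.

```python
import random
import statistics

random.seed(42)
TRUTH = 100.0

def crowd_error(n, rho=0.0, sigma=20.0, trials=2000):
    """Mean absolute error of the crowd's average guess.

    rho is the fraction of noise variance shared by all guessers."""
    errs = []
    for _ in range(trials):
        shared = random.gauss(0, sigma)  # common component of everyone's error
        guesses = [
            TRUTH + (rho ** 0.5) * shared + ((1 - rho) ** 0.5) * random.gauss(0, sigma)
            for _ in range(n)
        ]
        errs.append(abs(statistics.mean(guesses) - TRUTH))
    return statistics.mean(errs)

print(crowd_error(1))             # a lone guesser: error on the order of sigma
print(crowd_error(100))           # independent crowd: error shrinks like 1/sqrt(n)
print(crowd_error(100, rho=0.8))  # correlated crowd: averaging barely helps
```

The variance of the crowd average works out to sigma² · (rho + (1 − rho)/n), so with any shared component the error is floored at roughly √rho · sigma no matter how large the crowd grows, which is exactly why the quoted explanation needs the “uncorrelated” assumption.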
Both ensemble methods and the Hong & Page results on diverse heuristics posit relatively simple forms of “social” organization, such as direct averaging, or passing a problem to the next person able to improve on the current solution. There is every reason to think, however, that the optimal form of organization will actually depend on the structure of the problem being solved. … Experience with distributed systems shows that often the hardest part of their design is ensuring coordination over time, and that failure to do so can lead to all manner of unwanted behavior, in particular to wild oscillations and/or locking into deeply undesirable configurations … Designing, or reforming, a system for computer-mediate[d] social information processing is at once a problem of distributed algorithm design and a problem of mechanism design, and [the] two modes or aspects should inform one another, as well as empirical results about what actually happens when real human beings use different systems for different tasks.
On the last point, Cosma suggests that we can treat a wide variety of different forms of online knowledge aggregation (ranging from Wikipedia through Digg, Reddit, and perhaps the blogs of yer choice) as experiments in online information processing. The claim here is not, of course, that these are perfectly efficient, or anything like it. All of them are subject to cascade phenomena, infighting, strategic manipulation and other problems. But by comparing them, and seeing what seems to work a little better, and what a little worse, we can perhaps draw lessons that can be applied to more complex and important systems of social information processing such as governments and polities.
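The cascade phenomena mentioned here are easy to reproduce in miniature. The sketch below is a crude variant of the standard sequential-choice cascade models, not a description of any particular site, and the majority-following rule is my simplifying assumption: agents choose between two answers in turn, each seeing all earlier public choices plus one private signal that is right 70% of the time.

```python
import random

random.seed(1)

def run_sequence(n=50, p=0.7):
    """Agents pick 0 or 1 in order. Each counts the earlier public choices
    plus their own private signal and follows the majority (ties go to the
    private signal)."""
    true_state = 1
    choices = []
    for _ in range(n):
        signal = true_state if random.random() < p else 1 - true_state
        ones = sum(choices) + (1 if signal == 1 else 0)
        zeros = len(choices) + 1 - ones
        choices.append(1 if ones > zeros else 0 if zeros > ones else signal)
    return choices

# Once one option leads by two, a single private signal can no longer flip
# the majority, so the rest of the sequence locks in, sometimes wrongly.
wrong = sum(run_sequence()[-1] != 1 for _ in range(1000)) / 1000
print(wrong)
```

Even though every individual signal is 70% accurate, in this toy model a run locks onto the wrong answer with probability q²/(p² + q²) ≈ 0.16 for p = 0.7: the talking crowd does better than one person, but far worse than fifty independent signals would.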
More broadly, a simple dictum such as ‘listen to the experts’ isn’t going to work, precisely because our most powerful methods of generating new knowledge (viz. the sciences) are not so much based on listening to individual experts as on including these experts (and many others) in broader social systems which expose them continually to the ideas of others, and vice versa. Designing (or, perhaps better, nurturing) such systems is hard to think about and hard to do – but it has to be the way forward.
[Title of the post stolen from a forthcoming Elster/Landemore volume that talks about some of these issues]