Collective Wisdom

by Henry Farrell on September 20, 2011

Via “Kevin Drum”:http://motherjones.com/kevin-drum/2011/09/wisdom-ignoring-crowds, a piece by Ed Yong “which argues”:http://blogs.discovermagazine.com/notrocketscience/2011/09/13/knowledgeable-individuals-protect-the-wisdom-of-crowds/ that social influence can undermine the wisdom of crowds:

bq. Whatever it’s called, the principle is the same: a group of people can often arrive at more accurate answers and better decisions than individuals acting alone. There are many examples, from counting beans in a jar, to guessing the weight of an ox, to the Ask The Audience option in Who Wants to be a Millionaire? But all of these examples are somewhat artificial, because they involve decisions that are made in a social vacuum. Indeed, James Surowiecki, author of The Wisdom of Crowds, argued that wise crowds are ones where “people’s opinions aren’t determined by the opinions of those around them.” That rarely happens. From votes in elections, to votes on social media sites, people see what others around them are doing or intend to do. We actively seek out what others are saying, and we have a natural tendency to emulate successful and prominent individuals. So what happens to the wisdom of the crowd when the crowd talks to one another?

bq. … You can insert your own modern case study here, but perhaps this study ends up being less about the wisdom of the crowd than a testament to the value of expertise. Maybe the real trick to exploiting the wisdom of the crowd is to recognise the most knowledgeable individuals within it.

The first part of the critique is a fair one – that when individuals communicate with one another, they can get pulled into various forms of cascades and spirals of belief that can lead them further away from the truth. But the broader argument about expertise isn’t so good. In large part, this is because Surowiecki’s book concentrates (as Yong says) on aggregation processes in which individual guesses do not influence each other. But this is far from the only way to think about how crowds can come to more intelligent assessments than individuals. Scott Page’s “Diversity Trumps Expertise”:https://crookedtimber.org/2007/06/27/review-scott-e-page-the-difference/ theorem is a case in point. Roughly speaking, Page models the wisdom of crowds as a collective search for optima across a landscape of possible solutions, where each actor can only discern part of the landscape. He finds that a group of not very ‘smart’ individuals with very different understandings of the landscape will typically beat a group of ‘smart’ experts who share a broadly similar understanding of the landscape that they confront. If this is right, Yong’s “look to the experts” approach is fundamentally wrongheaded.
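
To make the landscape-search story concrete, here is a minimal sketch in its spirit (not Page’s own model, parameters, or code, just an illustrative toy): each agent is a heuristic, an ordered set of step sizes hill-climbing on a random circular landscape, and a randomly drawn, diverse group is compared with the individually best performers, each group working as a relay.

bc.. import random

N = 200        # points on a circular landscape of random values
MAX_STEP = 12  # heuristics draw their step sizes from 1..MAX_STEP
K = 3          # each heuristic is an ordered set of K distinct step sizes

def climb(heuristic, landscape, start):
    """One agent hill-climbs: try each step size in turn, move whenever it improves."""
    pos = start
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def relay(heuristics, landscape, start):
    """The group hands the current best point to anyone who can still improve it."""
    pos = start
    improved = True
    while improved:
        improved = False
        for h in heuristics:
            new = climb(h, landscape, pos)
            if landscape[new] > landscape[pos]:
                pos, improved = new, True
    return pos

def solo_score(heuristic, landscape):
    """How good an agent is on its own, averaged over every starting point."""
    return sum(landscape[climb(heuristic, landscape, s)] for s in range(N)) / N

rng = random.Random(1)
diverse_wins, TRIALS = 0, 40
for _ in range(TRIALS):
    landscape = [rng.random() for _ in range(N)]
    pool = [rng.sample(range(1, MAX_STEP + 1), K) for _ in range(40)]
    experts = sorted(pool, key=lambda h: solo_score(h, landscape), reverse=True)[:8]
    diverse = rng.sample(pool, 8)
    start = rng.randrange(N)
    if landscape[relay(diverse, landscape, start)] >= landscape[relay(experts, landscape, start)]:
        diverse_wins += 1
print(f"diverse group matched or beat the individually best in {diverse_wins}/{TRIALS} landscapes")

p. The point of the toy is not that diversity always wins, but that a randomly drawn group of mediocre heuristics routinely does at least as well as the individually best ones, because the best performers tend to get stuck on similar local optima, while the diverse group can keep handing the problem to someone who sees a different part of the landscape.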

Now, to be clear, Page simply assumes away many of the difficulties of aggregating knowledge. He assumes that groups are straightforwardly able to combine the different perspectives of their individual members. This abstracts out the possibility that they will get trapped in spirals. Even so, the appropriate response seems to me not to be to revert to trust in experts, but instead to think about the specific kinds of social arrangements that are going to be better, or worse, at aggregating the knowledge of diverse individuals in ways that capture its beneficial consequences while minimizing the risks of belief cascades. “Cosma”:http://arxiv.org/abs/0710.4911 has a nice piece on this.

bq. It might be thought that the theoretical explanation is rather simple, and goes (currently) under the name of “the wisdom of crowds” (Surowiecki 2004): individuals make noisy guesses, which on average are unbiased and uncorrelated, so simple averaging leads to convergence on the appropriate answer. Taken seriously, this explanation implies that our economy, our sciences and our polities manage to work _despite_ their social organization, that science (for example) would progress much faster if scientists did not collaborate, did not read each others’ papers, etc. While every scientist feels this way occasionally, it is hard to take seriously. Clearly, there has to be an explanation for the success of social information processing other than averaging uncorrelated guesses, something which can handle, and perhaps even exploit, statistical dependence between decision makers.

bq. Both ensemble methods and the Hong & Page results on diverse heuristics posit relatively simple forms of “social” organization, such as direct averaging, or passing a problem to the next person able to improve on the current solution. There is every reason to think, however, that the optimal form of organization will actually depend on the structure of the problem being solved. … Experience with distributed systems shows that often the hardest part of their design is ensuring coordination over time, and that failure to do so can lead to all manner of unwanted behavior, in particular to wild oscillations and/or locking into deeply undesirable configurations … Designing, or reforming, a system for computer-mediate[d] social information processing is at once a problem of distributed algorithm design and a problem of mechanism design, and [the] two modes or aspects should inform one another, as well as empirical results about what actually happens when real human beings use different systems for different tasks.

On the last point, Cosma suggests that we can treat a wide variety of different forms of online knowledge aggregation (ranging from Wikipedia through Digg, Reddit, and perhaps the blogs of yer choice) as experiments in online information processing. The claim here is not of course that these are perfectly efficient or anything like it. All of them are subject to cascade phenomena, infighting, strategic manipulation and other problems. But by comparing them, and seeing what seems to work a little better, and what a little worse, we can perhaps draw lessons that can be applied to more complex and important systems of social information processing such as governments and polities.

More broadly, a simple dictum such as ‘listen to the experts’ isn’t going to work, precisely because our most powerful methods of generating new knowledge (viz. the sciences) are not so much based on listening to individual experts, as on including these experts (and many others) in broader social systems which expose them continually to the ideas of others and vice-versa. Designing (or – perhaps better – nurturing) such systems is hard to think about and hard to do – but it has to be the way forward.

[Title of the post stolen from a forthcoming Elster/Landemore volume that talks about some of these issues]

{ 64 comments }

1

Adam 09.20.11 at 2:15 pm

“individuals make noisy guesses, which on average are unbiased and uncorrelated”

Isn’t this just assuming the conclusion? Why would you ever make these assumptions?

Also – isn’t there lots of evidence that a group of people will converge on the views of those group members who assert their views most forcefully and unbendingly?

2

Sev 09.20.11 at 2:22 pm

“Also – isn’t there lots of evidence that a group of people will converge on the views of those group members who assert their views most forcefully and unbendingly?”

The dynamic of the lynch mob.

3

soc_sci_anon 09.20.11 at 2:32 pm

Adam @1: There’s certainly a lot of evidence that a task-oriented group will converge on the views of the highest-status actor (typically white men), regardless of whether he/she is the most knowledgeable or not. This person can be the loudest and most forceful, too, but the driving factor is social status, not volume.

As an aside, it always amuses me when behavioral economics reinvents 1950s social psychology. And then it depresses me.

4

OCS 09.20.11 at 2:40 pm

I wrote about this a while ago. What I think it boils down to — the wisdom of crowds works for simple problems (such as the number of jelly beans in a jar) if the guesses are independent. Then you’re pretty much just eliminating the outliers. For more difficult questions, such as economic policy or food safety measures or retirement planning or just about any real-world problem, I don’t see that it has much use. Add a social component and the wisdom of crowds isn’t even good for simple problems.

How we organize groups of people to come to useful conclusions is a different problem (I’d say the major problem of governments, businesses, high school prom committees, or just about any group that has to make a decision). But it doesn’t have a lot to do with the wisdom of crowds.

5

William Timberman 09.20.11 at 3:11 pm

Sometimes experts try to solve the wrong problem. Two examples spring to mind:

1.) Engineers called in to respond to complaints about a slow bank of elevators in an office tower didn’t solve the problem, but a non-engineer in their office, hearing them talking about it, did. He correctly deduced that the complaints were the problem, not the functioning of the elevators, and saved the engineers’ reputations by getting them to install full-length mirrors on the walls of the elevator alcove.

2) An NPR story yesterday describes how turning a thorny problem in enzyme structure into a computer game led to a solution which had escaped molecular biologists for more than a decade. The gamers knew nothing about the science; they were solving a much simpler puzzle: The game challenges players to manipulate the structure of the protein so that it reaches the lowest energy state, which earns them the highest score. The function of the protein changes with the shape it takes.

Often, I think, the wisdom of crowds consists in the diversity of approaches to any given problem which it can contain. Many of these approaches will turn out to be silly, and many will follow already discarded conventional wisdom of some sort, but a few may prove to be just the ticket. As a corollary, this is probably why efficient propaganda is not a good idea, and democracy is.

6

straightwood 09.20.11 at 3:54 pm

Whatever the underlying merits of emergent networked knowledge pooling mechanisms, the history of Internet-based knowledge engineering will be determined mainly by a rear-guard action fought by the members of the old orders of credentialization, professionalization, and academic specialization. It is these experts who have the most to lose from the introduction of radically streamlined methods of arriving at the truth. Accordingly, they will seek to retard the introduction of new methods, using every excuse that false conservatism and learned sophistry can supply.

7

Paul Orwin 09.20.11 at 3:54 pm

@William Timberman: That NPR story was the first thing I thought of reading this as well. It seems to me that the “Wisdom of Crowds” absolutely depends on expertise in designing the question. Per the protein folding game, a poorly designed game would have created the exact same outcome for the gamers (i.e. a group of gamers would have “won”) but the exact wrong outcome for the scientists interested in protein structure (ie, an incorrect model for the protease structure). It seems that the two key components are 1) knowing what questions can be answered through this sort of model, and 2) knowing how to ask the question to get an unbiased, uncorrelated set of answers to sort. It is an interesting and I think complementary approach to expertise based, hypothesis driven research. The analogy that strikes me (as a total novice) is comparing straightforward computer programming (where you set up a list of commands to transform input into output) to parallel computing or even the fabled quantum computing.

8

AcademicLurker 09.20.11 at 4:22 pm

@7

Exactly. The group that designed FoldIt is one of the leading protein biophysics groups in the world. A lot of expert knowledge about protein structure & energetics was built into the game in terms of the basic setup of the problem, the allowed moves, the scoring function & etc.

What this shows is that crowdsourcing can be effective if the way that the crowd interacts with the problem and each other is appropriately structured for the task at hand.

Anyone who takes “expert knowledge isn’t important” as the lesson of the NPR story is getting it very wrong.

9

Scott Martens 09.20.11 at 4:26 pm

I’m not sure anything described here falls outside of standard learning theory. I would think a good explanation could be found in some variation of Holland’s schema theorem. Diversity and good heuristics can trump expertise, sometimes, given enough diversity and time and adequate heuristics.

Besides, the whole reason that “ask the audience” works on Who Wants to Be A Millionaire is because while guesses may be random, the people who know the answer will distort the average. It is the non-Gaussian outcome of asking many random people that makes it work. Not that people are on the average right, but that ignorance can be credibly modeled as a Gaussian distribution in many realistic contexts, while knowledge shows a clear bias. I’m sure Cosma has a whole bibliography of citations on very unproblematic-seeming problems where ignorance cannot be modeled as a Gaussian distribution, at least if we consider modeling ignorance to be what a good Bayesian prior should do.

I would think Holland’s theorem is a much better basis for most of crowd-sourcing’s successes than assumptions of Gaussian-distributed error in amateurs’ guesses. After all, if we asked millions of Americans in 2003 how many nuclear weapons Iraq had, the average would have been radically incorrect. If we’d asked them to bet money on it, the experts would have bet a lot that Iraq had none, and the amateurs would have bet less to reflect their lack of confidence. The market average might have been more informative.
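
That “ask the audience” point is easy to check with a toy poll (the numbers are made up purely for illustration): most of the room guesses blindly among four options, a minority actually knows the answer, and the plurality choice is taken as the crowd’s verdict.

bc.. import random
from collections import Counter

rng = random.Random(0)
OPTIONS = ["A", "B", "C", "D"]
CORRECT = "C"

def ask_the_audience(n_audience=100, share_who_know=0.3):
    """Most of the audience guesses uniformly at random; a minority knows the answer."""
    votes = [CORRECT if rng.random() < share_who_know else rng.choice(OPTIONS)
             for _ in range(n_audience)]
    return Counter(votes).most_common(1)[0][0]   # the plurality choice

polls = 1000
hits = sum(ask_the_audience() == CORRECT for _ in range(polls))
print(f"plurality picked the right answer in {hits}/{polls} polls")

p. Even with 70 per cent of the room guessing blind, the plurality is almost never wrong, which is the point above: it is not that the average audience member is right, it is that ignorance spreads its votes roughly evenly across the options while knowledge piles up on one of them.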

10

OCS 09.20.11 at 4:26 pm

@5
It occurs to me that the “wisdom of crowds” probably comes in a strong version and a weak version. The strong version says, ask a bunch of people a question, average their answers, and you’ll be closer to correct than the experts. The weak version says that there is a way to harness the diversity of knowledge in a crowd to come up with the right answer.

William Timberman’s post about the enzyme game is an example of the weak version. It’s not that all the games averaged together gave the correct answer. It’s that, given the right motivation and tools, at least some people in a crowd of non-experts eventually came up with the right solution.

I think the weak version of the wisdom of crowds, combined with the Internet, provides tremendous opportunity to harness a lot of unused expertise and intellectual resources.

The strong version suggests that a bunch of people who don’t know anything about a subject will always, on average, be right. That’s always sounded a little like magic to me. And in fact, there are so many limitations on the how and when the strong version works that I don’t think it’s useful for much.

11

William Timberman 09.20.11 at 4:58 pm

@ Paul Orwin and AcademicLurker

My intent wasn’t expressed clearly enough, I suspect. In any event, I agree completely that expertise is never irrelevant, especially as AL puts it here:

Anyone who takes “expert knowledge isn’t important” as the lesson of the NPR story is getting it very wrong.

The heroes here weren’t the gamers, but the people who thought of using the game to come up with the solution. More broadly speaking, it isn’t expertise per se that’s the problem, but the fact that acquiring the expertise in the first place tends naturally to narrow one’s focus. The more people who look at a problem, experts or not, the more likely it is that a solution, or at least a fruitful re-ordering of approaches to a solution, will be encountered.

12

Dragon-King Wangchuck 09.20.11 at 5:00 pm

Sure, designing teh question to ask is a tough question in and of itself. Push pollers are great at this. But something that’s being left out is scoring the answers. Not only is guessing the number of jelly beans an easy question to ask, it’s also easy to grade responses. So when you’re dealing with your more complex problems – your ability to analyze the solutions presented also becomes more complex. How do you find the median, let alone the mean, economic policy? How do you determine if it’s a better solution than the expert-derived one?

Anyways, all the Yong post says is that allowing intra-crowd collaboration allows biases to be magnified. OMG, what a news flash. In fact, isn’t that what teh whole “wisdom of crowds” thing was about? Getting a large sample of unbiased (i.e. non-expert) guesses performs better than getting biased ones (i.e. expert). I mean doesn’t “expert” just mean that the individual has previous experience and knowledge about the field?

His final line about

Maybe the real trick to exploiting the wisdom of the crowd is to recognise the most knowledgeable individuals within it

Well, duh. If Surowiecki had asked only that one guy who ended up giving teh awesome median answer, he’d sure have saved a lot of trouble!

13

AcademicLurker 09.20.11 at 5:05 pm

@William Timberman

I didn’t mean to imply that you were claiming that “expert knowledge doesn’t matter” is the take home lesson of the NPR piece. It’s just that, the internet being what it is, I’m anticipating a lot of that sort of thing in response to the story.

OCS @10 is also right. The correct answer wasn’t some weighted average of input from all the FoldIt players. It was the solution arrived at by a small number of players who are, for whatever reason, really good at this sort of task.

14

Matt McIrvin 09.20.11 at 5:09 pm

The distinction between groups and groupthink reminds me of a controversy concerning Wikipedia that I read about a few years ago. Jimbo Wales claimed at one point that the majority of contributions to Wikipedia actually came not from random strangers but from a tight-knit group of insiders, who had accounts and often knew each other. Someone else claimed that this was a misleading statement: the majority of edits came from insiders, but the majority of the article content actually came from short-term, drive-by contributors, often editing anonymously, who would contribute big article chunks and leave. The insiders would then clean up and reformat this stuff, fuss with it and get into arguments about deleting it.

I don’t know if detailed studies of this ever fell one way or the other, but the objection made sense to me. Experts on the subjects covered in Wikipedia are not necessarily likely to be the sort of people who hang out on Wikipedia long-term and get invested in the culture. My own short period of heavy Wikipedia editing subsided when I’d run out of things I knew about that seemed to fit gaps in the existing material.

15

Shelley 09.20.11 at 5:12 pm

Crowds have no wisdom when most of them have no source of news but Fox.

16

Jonathan Mayhew 09.20.11 at 5:27 pm

If the 30 farmers are each pretty good at guessing the weight of an ox, then the average of their guesses is going to be even better. It’s not that the crowd is wiser than the individual, but that the averaging technique eliminates the outliers.

Arguments from the diversity of perspectives seem very different in nature. Also, when you allow them to discuss their answers, then the highest status farmer’s view might prevail, negating the averaging effect. I’ve always thought you shouldn’t let juries deliberate. Just make them larger and take a vote.
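
A quick sketch of both halves of that (the averaging effect, and what a shared social pull does to it; all the numbers are made up for illustration): thirty unbiased, independent guesses average out nicely, but give everyone a common bias, as if they had all anchored on the same loud first guess, and adding more guessers no longer helps.

bc.. import random

rng = random.Random(0)
TRUE_WEIGHT = 1198   # pounds; an arbitrary 'true' weight for the illustration
N_FARMERS = 30
TRIALS = 2000

def crowd_error(shared_bias_sd=0.0, individual_sd=100.0):
    """Average absolute error of the mean guess, over many simulated crowds."""
    total = 0.0
    for _ in range(TRIALS):
        bias = rng.gauss(0, shared_bias_sd)   # a common shift every farmer inherits
        guesses = [TRUE_WEIGHT + bias + rng.gauss(0, individual_sd)
                   for _ in range(N_FARMERS)]
        total += abs(sum(guesses) / N_FARMERS - TRUE_WEIGHT)
    return total / TRIALS

print(f"independent errors: mean guess off by ~{crowd_error(0.0):.0f} lb")
print(f"with a shared bias: mean guess off by ~{crowd_error(80.0):.0f} lb")

p. Averaging washes out the idiosyncratic noise but has no purchase on the shared component, which is the cascade problem in miniature: discussion does not so much add error as correlate it, and correlated error is exactly what the mean cannot remove.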

17

James Surowiecki 09.20.11 at 5:28 pm

“But all of these examples are somewhat artificial, because they involve decisions that are made in a social vacuum. ”

This actually isn’t accurate. Most mundanely, in the case of the ox-weight-guessers or the crowd on Who Wants to Be a Millionaire?, people are kibitzing, talking to their neighbors, offering up their guesses out loud. So there is no vacuum. What there is is the opportunity, and incentive, for people to tune others out if they wish. More importantly, one of the most striking examples of the wisdom of crowds, to my mind, can be seen at the racetrack, where the betting crowd (which in the US sets the odds on horses, without the help of bookies) does an exceptionally good job of forecasting the likelihood of victory for each of the horses in a race. The crowd does that even though bettors can see what other bettors are thinking (via the odds on the tote board). So the notion that, as some have interpreted the study that Yong is writing about, even the “mildest of social influence destroys the wisdom of crowds” is plainly wrong. Having said that, it’s clear that social influence is a serious problem, which is why so much of my book deals with it. But I think the big problem with social influence arises primarily not when people are hearing many different voices, all offering up a different opinion, but rather when there is one voice, or a small number of voices, that are given undue weight. That’s why diversity is so fundamental to good collective decision-making.

Aside from that, I think Henry’s point about the value of alternative aggregative mechanisms is exactly right.
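
For anyone unfamiliar with how the betting crowd “sets the odds”: pari-mutuel odds are just a normalisation of the pool, so the tote board is literally an aggregated probability forecast. A minimal sketch of the arithmetic (the pool figures and the 18 per cent takeout are made up for illustration):

bc.. def parimutuel(pools, takeout=0.18):
    """pools: dollars bet on each horse. Returns the crowd's implied win probability
    and the payout per $1 for a winning ticket, after the track's takeout."""
    total = sum(pools.values())
    net = total * (1 - takeout)              # what gets paid back out
    return {horse: {"implied_prob": amount / total,
                    "payout_per_dollar": net / amount}
            for horse, amount in pools.items()}

for horse, v in parimutuel({"Dobbin": 50_000, "Stewball": 30_000, "Pharaoh": 20_000}).items():
    print(f"{horse:9s} implied {v['implied_prob']:.0%}, pays ${v['payout_per_dollar']:.2f} per $1")

p. Since everyone can watch the implied probabilities move on the tote board as the money comes in, this is exactly the socially influenced setting at issue in the post; the claim above is that the crowd’s forecasts nonetheless come out remarkably well calibrated.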

18

Henry 09.20.11 at 6:38 pm

bq. Someone else claimed that this was a misleading statement: the majority of edits came from insiders, but the majority of the article content actually came from short-term, drive-by contributors, often editing anonymously, who would contribute big article chunks and leave. The insiders would then clean up and reformat this stuff, fuss with it and get into arguments about deleting it.

That someone being CT friend and general culture hero, Aaron Swartz.

bq. But I think the big problem with social influence arises primarily not when people are hearing many different voices, all offering up a different opinion, but rather when there is one voice, or a small number of voices, that are given undue weight. That’s why diversity is so fundamental to good collective decision-making.

I’d like to +1 this suggestion by James – but also to suggest that it potentially leads in some radical directions. Jim Johnson and Jack Knight’s new book on the priority of democracy uses similar claims to argue for a root-and-branch reform of democratic decision making procedures to minimize the overwhelming power of elites, and introduce as much diversity of background as possible. This has some interesting implications. It is a purely pragmatist justification of radical democracy – it does not argue for diversity because some groups of people have gotten screwed historically and need a helping hand (although it is certainly not a counter-argument to this claim either). Instead, it argues that greater diversity of voices in political debates is a good thing because it will be associated with better collective judgment and better decision making. Combining this approach to democratic theory with a specific focus on the mechanisms through which problems of epistemic cascades etc can be minimized could lead in some very fruitful directions.

19

straightwood 09.20.11 at 7:40 pm

@18

It is obviously much easier to corrupt, hijack, hotwire, pervert, or (supply pejorative verb for exercise of malign influence) a small number of “thought leaders” than a huge number of tightly networked thought contributors. Much of the recent economic misery is attributable to the simultaneous moral failure of multiple elite groups in finance, journalism, academia, and government. It turns out that the few thousand “thought leaders” could be cheaply bought. Beyond the evils of mob rule and corrupted elites lies the tantalizing possibility of Internet-based knowledge networks that are honest, reliable, and democratic.

20

Dave 09.20.11 at 7:50 pm

Seems like the whole notion of the wisdom of crowds is an ad hoc construction which people have used to explain missing steps between “a problem” and “its solution” when those steps involve some collection of people. Where in this construction appears the miraculous, emergent phenomenon of a group mind, I see multiple people latching on to possible solutions which are already in place before the crowd gets there (there is a finite number of possible weights to ascribe to a yak, etc.).

In other words, the procedure would seem to be: here is a problem, here is a range of solutions; people latch on to one and not others and so a hierarchy of value is created. Isn’t that just, like, politics?

21

kharris 09.20.11 at 8:01 pm

“How we organize groups of people to come to useful conclusions is a different problem.”

“What this shows is that crowdsourcing can be effective if the way that the crowd interacts with the problem and each other is appropriately structured for the task at hand.”

When I read Surowiecki’s book, this was pretty much what I thought it was about. There is no guarantee that a mob will get the right answer, but knowing that, under proper conditions, crowds are pretty good at generating answers, the challenge is to generate the proper conditions. This “wisdom of crowds” business suggests a natural resource, but not one that’s free for the taking. If, and only if, you build a proper water wheel can you use it as a source of power. Only if you build a proper system for extracting aggregate answers can you get information from crowds.

Remember the missing submarine story? Picking the right people, asking them separately where the sub is, that’s what made it possible to find the boat. Not the same as guessing the weight of an ox at all.

22

More Dogs, Less Crime 09.20.11 at 8:10 pm

Robin Hanson’s work on prediction/decision markets or “futarchy” is based on those who think they have expertise opting in to give their opinion (while putting some money where their mouths are). I don’t know if he had much about preventing people from persuading others to their viewpoint, but I do recall anonymity of edits to the collective forecast being a recommended feature.

23

straightwood 09.20.11 at 8:16 pm

Where in this construction appears the miraculous, emergent phenomenon of a group mind

We have seen it emerge in rudimentary form in the Wikipedia editors, who, contrary to all pessimistic predictions, have created an unparalleled store of reliable public knowledge out of the undirected input of a vast group of knowledge contributors. Given a hospitable Internet medium, Wikipedia proves that the truth is a self-organizing phenomenon.

24

Dave 09.20.11 at 8:21 pm

Wikipedia proves that the truth is a self-organizing phenomenon

Does it? I would think the truth is what it is, and given some number of intelligent, typing apes and sufficient yet finite time, it will be articulated successfully, or at least adequately. A beautiful anarchy it is not.

25

JakeB 09.20.11 at 8:29 pm

I’m reminded of when the Monty Hall problem was published in Parade magazine some 20 years ago and many many people, including a number of mathematicians, wrote in to say that the given answer was wrong (while in fact it was correct). I assumed that any decent mathematician would get the answer right (I myself was wrong at the time, I will say, but I am not a mathematician). This says only, I think, that in some cases it’s even difficult to define who has expert knowledge in an area, meaning that incorporating expert knowledge becomes even more problematic.

26

bianca steele 09.20.11 at 8:39 pm

AFAICT, the Parade answer is correct if you assume there is always a prize on the board. If the MC can take the prize off the board and leave you with no win, your answer should change.

27

bianca steele 09.20.11 at 8:46 pm

And part of the reason professionally trained mathematicians got it wrong may have been that if you set up the problem the way you were taught to set up the last similar homework problem you had, the Parade answer is wrong. Arguably, the problem as published was underspecified (it didn’t say originally whether the MC knew where the prize was, so didn’t say whether he could take it away). So the lesson of the puzzle is either (a) real-world problems aren’t like your old undergraduate homework problems and you should look again and see whether the problem is underspecified, or (b) in your textbooks things might have been fair but in the real world the guy in charge can take stuff away from you.

28

Dave 09.20.11 at 9:07 pm

in your textbooks things might have been fair but in the real world the guy in charge can take stuff away from you

Quite. Or hand out carrots. The wisdom of crowds can probably be replicated endlessly in controlled circumstances, but otherwise we get the wisdom of the hegemon.

29

rea 09.20.11 at 9:15 pm

CT friend and general culture hero, Aaron Swartz.

The linked blog post, in which he says that he downloaded the whole Wikipedia archives to a cluster of computers, foreshadows recent events. He sure likes big piles of data, doesn’t he?

30

Tim Wilkinson 09.20.11 at 9:17 pm

The protein case sounds like a heuristic which worked because the right (or merely best?) answer was independently verifiable once it had been arrived at. It was also presumably fairly easily verifiable, and incorrect answers easily identified and rejected too. It sounds as though the candidates in this case could be easily sorted so as to eliminate unuseful solutions.

This is different from using crowds to generate answers which can’t be checked independently, or only at a cost as great as solving it independently. That’s when you start getting problems with unknowns and possibly even unknown unknowns, I don’t know. (This assumes that the independent means is one we have more trust in than the crowdsourcing one, I suppose).

(If these pearls of wisdom should inspire anyone to solve the P=NP problem, I wouldn’t refuse a small share of the prize money.)

Wikipedia, used properly, is closer to a heuristic than a reliable generator of ‘solutions’.

See also: various conceptions of the operation of markets in solving allocation problems, how to check they aren’t working, why we should be at all interested in a local Pareto-optimum, the EMH, etc. Of course much markety stuff supposedly involves pooling information (about one’s prefs) without actually sharing it.

31

Henri Vieuxtemps 09.20.11 at 9:21 pm

I don’t think wikipedia and other internet phenomena are in the “wisdom of crowds” category, but rather something like ‘credential-less enthusiasts’. Activists. Obsessive cranks. Not a random selection by any stretch of imagination.

32

Bloix 09.20.11 at 9:32 pm

#16 – on guessing the weight of an ox – if you ask people at a state fair to guess the weight of an ox that is standing in front of them, you’re going to be asking people who know a lot about how the weight of barnyard animals correlates with their appearance. But what if you ask them the distance from the sun to Jupiter? the percentage of the US budget spent on foreign aid? the Swahili word for “hello”? Pooling educated guesses from expert individuals who have had the opportunity to gather relevant information (in the ox case, by direct inspection) may lead to a good approximation of the correct result. But pooling random guesses from ignorant people will probably not.

#26 &27 – yes, there is always one and only one valuable prize, like a luxury car, “on the board” (ie behind one of the three curtains). The other two have a goat or some such gag gift. Yes, Monty does know where the real prize is. No, Monty can’t take it away. These are the rules of “Let’s Make a Deal,” and Vos Savant reasonably assumed that you knew the show.

I personally had a lot of trouble accepting that Marilyn Vos Savant was right, because I couldn’t understand how, after you’d chosen one curtain, Monty showing you a goat behind a different curtain communicated any information about the true location of the prize. And if Monty is not communicating information, then the odds from your point of view should not change from 1/3.

What it took me a long time to understand was that, when you choose a curtain hiding one of the two goats – which will take place two-thirds of the time – Monty’s hand is forced. He can’t show you the Lincoln, so he must open the curtain hiding the second goat. That means that two-thirds of the time, the curtain he doesn’t open will be hiding the Lincoln. So, two-thirds of the time, Monty is telling you by inference where the Lincoln is. One-third of the time, of course, your first choice will be the curtain hiding the Lincoln, and Monty’s hand is not forced – he can show you either one of the goats.

This means that two-thirds of the time, if a player switches from the curtain she originally chose to the one that Monty didn’t open, she’ll be choosing the Lincoln, and one-third of the time, she’ll be abandoning the Lincoln for a goat. Any individual player can’t know if she’s in the one-third or the two-thirds. But she can double her chances of success by switching her pick.

This is something any mathematician should have been able to figure out on the back of an envelope. The reason they didn’t is that they were relying on the same unthinking knee-jerk reaction to the problem that I had (Monty’s not giving you any new information, so the odds can’t change.)

The responses to Vos Savant didn’t disprove the Wisdom of Crowds; they disproved Blink.

33

straightwood 09.20.11 at 9:38 pm

How soon we forget. Wikipedia was widely derided, criticized, and banned from schools just a few years ago. Commercial encyclopedia publishers declared it could never rival a professionally produced product. Now, at over 3 million English articles and 18 million in all languages, nobody is laughing any more.

Wikipedia is a classic example of the reluctant acknowledgement of significant novelty by Internet skeptics: what was an unproven and dangerous nostrum on Friday afternoon becomes an unimportant commonplace on Monday morning. Nothing to see here, move along.

34

Alex 09.20.11 at 9:42 pm

I think the core insight of Andrew King’s paper is that real information is good, and when it comes in a form that you want to listen to, better. It shouldn’t be surprising (although it may be a relief) that the crowd, given some feedback, converged on a good estimate.

Perhaps a good research question would be to determine the conditions under which this would not happen? In which noise won over signal. Good work has been done on this in cognitive neurology, psychology, flight safety engineering, and international politics (Robert Jervis ftw).

Economics?

Obviously, anyone working on the basis that stuff is decorrelated when it seems handy deserves to be shot severely criticised these days, but Surowiecki can’t be blamed for writing long before the great crisis.

35

Alex 09.20.11 at 9:51 pm

Picking the right people, asking them separately where the sub is, that’s what made it possible to find the boat. Not the same as guessing the weight of an ox at all.

During the second world war, some of the British research centres and industrial plants that invented a couple of things like jet engines and radar and computers held regular “Sunday Soviets” – i.e. meetings to discuss a particular problem that were officially just private, unofficial gatherings that anyone involved might chip into. Specifically, the point was that anyone could speak up without fearing hierarchical revenge. (The name was telling.)

If there’s owt on the ‘net like that it’s IETF. It is less like, well, asking a lot of yelling fucksticks which welfare provision to cut.

36

Henry 09.20.11 at 9:51 pm

Straightwood – you have an argument, surely, and one that I partly agree with, but you have made it, multiple times, and at length, across this and various other posts. One might even say that this is starting to become an example of the problem of people who “assert their views most forcefully and unbendingly” referred to above, although I do not know that this leads to unanimity so much as to others deciding that it’s not worth the bother to continue arguing. To avoid this becoming a conversation killer, I suggest that in future, where you want to make the twin claims that

(1) The Internet is all sorts of sliced democratic awesome for decentralized decision-making.

and

(2) Elites and foolish commenters refuse to recognize this wonderful set of developments, where they are not actively trying to cripple it.

that instead of saying this at length, you just do a short comment saying “the usual,” and if you like, hyperlinking to #19, so that other CT readers can stipulate to having read your argument and respond to it or move on as they choose. If you like, you could slightly vary it, by saying “the usual, with particular reference to Wikipedia” or whatever. Or, even better, you could start to vary your argumentative repertoire a bit and try not to keep cutting in on discussions that are only loosely related to your overwhelming interest in this topic .

37

LFC 09.20.11 at 9:58 pm

straightwood:
We have seen it emerge in rudimentary form in the Wikipedia editors, who, contrary to all pessimistic predictions, have created an unparalleled store of reliable public knowledge out of the undirected input of a vast group of knowledge contributors.

Get off it. Wikipedia articles vary wildly in quality; some are reliable, some aren’t. No one copyedits the damn thing — typos and grammatical errors abound, and some of the sentences aren’t even sentences. Management puts up a notice saying “this article needs more citations,” someone supplies the citations, but management never takes the notice down. The thing is not especially easy to edit in conformity with Wikipedia norms, especially for someone unfamiliar with the Internet and codes and all that — yes, straightwood, there are such people. In short, Wikipedia is a valuable resource in some ways (I gave it a small contribution on one occasion during one of their fund drives) but in other ways it’s a mess. It is not consistently reliable and by its own admission it does not rival professionally produced encyclopedias. (Students should not be allowed to cite it in papers, imo, and should be told that it must be used with even more skepticism than most sources.)

38

LFC 09.20.11 at 10:01 pm

I posted 37 before seeing Henry’s 36.

39

Watson Ladd 09.20.11 at 10:25 pm

Anyone who has played mafia will realize that crowds can be damned funny things. One of my friends once convinced us all he was an inspector who got the wrong answer 50% of the time. We were all math people, and he successfully fooled us with an absurd story. Any one of us who had thought about it harder would have realized the lie, but because of the crowd dynamics we did not.

40

bianca steele 09.20.11 at 11:34 pm

I wonder whether it’s felt that the old Usenet FAQ process worked better or worse than Wikipedia. One big difference was that the FAQ was owned by one person. If there were huge differences of opinion, there could be two FAQs or two editors of different sections of the FAQ. Debate was done in the open, in view of people who would be using the thing eventually.

Newbies were probably sometimes rudely met with “read the FAQ,” but having read the FAQ, its contents remained open for debate (unless there was a line in the FAQ saying “please do not start a discussion on this topic”).

I don’t know whether or not the founders of Wikipedia were familiar with that and wanted something better. I also don’t know how they came up with their standards for what sources are acceptable (sometimes an entry is based almost entirely on corporate marketing output, with no editorial comment, and other times an entry based on the informational kind of corporate publications that seem fairly reliable, for how they’re used, is marked as unacceptable), but they did seem to have something definite in mind w/r/t standards right from the start.

41

straightwood 09.20.11 at 11:45 pm

@36

I am properly chastened. In future, I shall address the narrow context of each thread and show the good discipline and admirable restraint of other commenters. The future needs no guardians, and portents of its approach will certainly not escape the watchful eyes of those officially entrusted with heralding its arrival.

42

Watson Ladd 09.21.11 at 12:09 am

FAQs were sometimes creations of pure spite and malice. C++ has three FAQs, plus an FQA. The IETF process creates some good things, but it also leads to massive failures by rewarding running code over the Right Thing. Of course eventually better standards replace them, but that’s more evolution than any group process. Committees also produce complex standards due to internal dynamics, whereas the lone wolf can make something simple and correct.

43

John Quiggin 09.21.11 at 12:20 am

Actually, the Monty Hall problem is even trickier. To get the answer right, you have to understand, not only that there is always a prize, but that Monty always opens a door without a prize. If he chose at random between the doors the contestant hadn’t picked, ending the game whenever the prize was revealed, then there would be no value to switching.

Since economists routinely get bagged here on CT, I’ll point out that this is a question competent economists (trained since about 1980 or up with the literature) always get right first off, and not-so-competent economists slap their foreheads when their error is pointed out.
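
Both versions are easy to check by brute force; here is a quick simulation sketch (not from the thread, just an illustration) of the two rule sets: the classic Monty, who always reveals a goat, versus a Monty who opens one of the other doors at random and voids the game if he exposes the prize.

bc.. import random

rng = random.Random(0)

def win_rate(switch, monty_knows, trials=100_000):
    """Win rate for a contestant who always switches (or always stays)."""
    wins = played = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        others = [d for d in range(3) if d != pick]
        if monty_knows:                        # classic rules: Monty always shows a goat
            opened = next(d for d in others if d != prize)
        else:                                  # variant: Monty opens at random
            opened = rng.choice(others)
            if opened == prize:                # prize revealed, game void
                continue
        played += 1
        final = next(d for d in range(3) if d not in (pick, opened)) if switch else pick
        wins += final == prize
    return wins / played

print(f"classic Monty: switch {win_rate(True, True):.3f}, stay {win_rate(False, True):.3f}")
print(f"random Monty:  switch {win_rate(True, False):.3f}, stay {win_rate(False, False):.3f}")

p. Under the classic rules switching wins about two-thirds of the time and staying about one-third; in the random variant, once the voided games are thrown out, switching and staying both come out at one-half, which is the point about the two rule sets above.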

44

Tom T. 09.21.11 at 1:05 am

I’ll point out that this is a question competent economists (trained since about 1980 or up with the literature) always get right first off

As do competent readers of Parade! ;-)

45

gordon 09.21.11 at 1:16 am

Straightwood’s problem with subversion of expert elites has a corollary for the non-expert policymaker. Such policymakers have two traditional strategies: poll the experts or poll the most expert experts. These strategies result in policy pronouncements beginning either “Most experts agree that…” or “Leading experts agree that…”. If you can’t rely on either of these two traditional strategies because of expert subversion, most non-expert policymakers are bereft.

Note that this doesn’t imply that no experts are right; it only implies that those two strategies for getting the right answer from a group of experts no longer work. The non-expert policymaker has no other way of finding the guy(s) who know(s) the answer.

By the way, I just used Wikipedia to find out what IETF means.

46

Zora 09.21.11 at 2:54 am

Speaking as someone who was once a high-count editor on Wikipedia: articles touching on religious/ethnic/nationalist issues are battlegrounds eventually won by the group with the largest number of determined editors. Hence the articles re Islam relapse into Muslim piety, re Indian history and culture are claimed by Hindutvadis, re Iran are annexed by nationalists, re the caliphate by Hizb-ut-Tahrir, etc. It is quixotic to fight against the ignorant hordes.

The fight for boasting rights over the ninth-century mathematician al-Khwarizmi was particularly memorable. Was he Arab? Persian/Iranian? Uzbek? I revisited the article just now. Looks like the Persians won.

47

Sev 09.21.11 at 2:58 am

#11 ” More broadly speaking, it isn’t expertise per se that’s the problem, but the fact that acquiring the expertise in the first place tends naturally to narrow one’s focus. The more people who look at a problem, experts or not, the more likely a solution, or at least a fruitful re-ordering of approaches to a solution, will be encountered.”

This calls to my mind the way in which chess programs through brute force sometimes find winning moves/sequences which are unlike those of a strong human player – the computer’s trial moves playing the role of the crowd which outguesses the heuristics-based approach of the master.

48

Henry 09.21.11 at 10:55 am

bq. Speaking as someone who was once a high-count editor on Wikipedia: articles touching on religious/ethnic/nationalist issues are battlegrounds eventually won by the group with the largest number of determined editors.

From a political theory piece that I wrote with Melissa Schwartzberg on these issues.

bq. The norms of Wikipedia are articulated in language that emphasizes civility and mutual respect. Nonetheless, there is strong empirical evidence that edit wars are quite common.11 Aniket Kittur, Bongwon Suh, Bryan Pendleton, and Ed Chi examine the network relations between and among editors of pages where there are clear clashes over evidence, and find that clear groupings emerge.12 As Travis Kriplean, Ivan Beschastnikh, David McDonald, and Scott Golder argue, Kittur et al.’s visualizations suggest that different articles have “in-groups” and “out-groups”—groups who effectively claim authority over a particular article, and groups whose points of view are likely to be rejected.13 As Kriplean et al. further note, policies are often invoked on controversial topics less as means of generating consensus than of bolstering the case for one particular version of the article and its backers, and denigrating those who disagree with this version.

bq. More generally, Wikipedia editors are supposed to seek consensus. How consensus is to be measured is not precisely defined, although the policy specifies that silence equals consent. Following Philippe Urfalino, we may consider the Wikipedia policy goal more precisely as reaching “apparent consensus”: the aim is not unanimity, but absence of dissent.14 The claim of consensus on a particular point is itself often used as a strategic weapon in fights over what articles should look like. Urfalino suggests two key reasons why opponents may remain silent in the presence of apparent consensus even though they continue to reject a proposal: because of the relative power of the proposal’s partisans (their capacity to retaliate, including by withholding support in future circumstances) and because such ongoing dissent would be regarded as hubristic once one’s viewpoint has been heard and rejected.15 To the extent that some networks of editors are more powerful than others, the norm of apparent consensus may result in the marginalization of hierarchically subordinate networks.

bq. If it were the case that epistemic humility, rather than subordination, caused those in the minority on an issue to retreat, we might say that the norm of apparent consensus was eliciting attractive moral behavior on the part of the minority.16 Unfortunately, in light of Kriplean et al.’s work, we suspect that power, rather than the recognition of one’s fallibility, is the mechanism generating apparent consensus in many, if not most, controversial cases.

bq. The Wikipedia project has created a considerable intellectual resource. However, this resource does not necessarily emerge from a smooth-running and consensual process, much less an anarchistic bazaar. Instead, it emerges from a rule-based system in which both software code and semi-informal rules play an important role. Moreover, one of the key problems that Wikipedia has to confront is precisely that of reconciling minority and majority points of view when they differ starkly from each other. Some basic features of Wikipedia’s software code (open editing) favor minorities, yet semiformal rules and procedures have sprung up in part to limit the power of recalcitrant minorities to veto change or to behave in ways that other editors perceive as irresponsible. The result is that outcomes are often the result of power struggles rather than real consensus.17

49

straightwood 09.21.11 at 12:44 pm

A daily demonstration of the wisdom of crowds can be found in the cognitive dissonance effect observed on blogs associated with mass media opinion leaders. A good example is the experience of the hapless Joe Klein, a leading journalistic “authority” on American politics. For years, on Time’s “Swampland” blog, Klein has had many of his published nostrums shouted down by huge majorities of commenters there. The reader commentary was generally informed and well argued, but it had no impact whatever on Klein’s status in the hierarchy of the commentariat.

What this phenomenon illustrates is the importance of cultural inertia in preserving the leadership role of those who have attained positional power, either through a personal brand franchise or a grant of precious shelf space in the media marketplace. Demonstrably mistake-prone and out-of-touch writers, like Klein, David Brooks, or Tom Friedman, are able to persist in their positions, even in the presence of overwhelming dissent from their own readers, as revealed for all to see in Internet discussion threads.

Thus, even unshakeable evidence of the documented superiority of any wisdom of crowds phenomenon will be insufficient to shift the cultural paradigm of “name brand” thought leadership. The mechanism by which such a shift may occur remains obscure, and is a worthy subject for academic research.

50

Marcus Pivato 09.21.11 at 1:54 pm

It seems to me that this discussion of `wisdom of crowds’ is conflating at least four different phenomena.

The first phenomenon is simply the fact that many NP-hard computational problems are best solved using massively parallel computation. An NP-hard problem is a `needle in a haystack’ problem: there are a very large number of candidate solutions, and there does not appear to be any faster way to find the right solution than to systematically search through all the possible candidates (perhaps with some sort of `gradient descent’). However, it is relatively easy to see whether or not any particular candidate is the correct solution. A good example is the protein-folding problem. If the evaluation of candidates can be performed by humans (perhaps with minimal machine assistance), then an NP-hard computation can be effectively `crowd-sourced’.

The second phenomenon is the Law of Large Numbers and similar results: if you have many noisy observations of some unknown variable (e.g the weight of an ox), and the noise process is independent with zero mean, and you average these observations, then you will get a very good estimate of the underlying variable. In social choice theory, a variant of this result is the Condorcet Jury Theorem. There is now a sizable literature on the CJT and related phenomena; this area is sometimes called `epistemic social choice theory’. However, most of these results depend heavily on the errors of different voters being independent random variables. (Franz Dietrich and Kai Spiekermann have recently proposed an extension of the Condorcet Jury Theorem to a model where the voters have correlated errors.)

The third phenomenon is that of emergent computation in certain multi-agent systems. The prototypical example here is the way in which a market economy (supposedly) converges to a Pareto-optimal equilibrium state. This equilibrium (supposedly) emerges from the myopic and self-interested actions of the individual agents. Of course, there is a great deal of controversy about whether real markets ever converge to Walrasian equilibrium. And even if they do, the equilibrium can be sub-optimal due to externalities and other issues. And even if it is optimal, Pareto optimality is actually a very weak form of social optimality, which ignores inequalities and social injustice. Nevertheless, there appears to be evidence that, at least in some settings, the market (with some government intervention) is surprisingly good at allocating resources in a socially efficient way, despite the fact that no one is driving the car.

The fourth and last phenomenon is what is now called deliberative democracy: the idea that democratic self-governance functions best when it takes the form of a civil, reasoned public discussion between basically rational, publicly-minded citizens. Very roughly speaking, the central thesis (or at least, hope) of the deliberative democracy literature is that, through reasoned public discussion, the conflicting biases, prejudices, ideologies, ignorance, and myopia of different citizens will cancel out or be moderated or corrected, and the emergent consensus will approximate some ideal of a rational, well-informed, impartial observer’s vision of the public good. Of course, there is a lot of debate about how, or whether, this will actually work in real life. But certainly even a poorly functioning deliberative democracy would be much less dysfunctional than the infantile popularity contests and partisan circuses which drive public policy in most Western democratic states today.

All four of these phenomena are real and powerful epistemic mechanisms (at least, under certain conditions). But they are different mechanisms, with different domains of applicability and different failure modes. So it is probably not constructive to mix them all together under the rubric of `wisdom of crowds’.
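
The second of these is easy to see in simulation. Here is a sketch of a Condorcet-style majority vote on a binary question (illustrative parameters only, not the Dietrich-Spiekermann model): voters are each right with probability 0.6, and a "common shock" parameter makes the whole electorate occasionally rely on the same misleading cue.

bc.. import random

rng = random.Random(0)

def majority_accuracy(n_voters, p_correct=0.6, p_common_shock=0.0, trials=5000):
    """Chance a simple majority answers a binary question correctly.

    With probability p_common_shock the whole electorate follows the same cue and
    votes as one fallible bloc; otherwise voters err independently."""
    wins = 0
    for _ in range(trials):
        if rng.random() < p_common_shock:
            correct_votes = n_voters if rng.random() < p_correct else 0
        else:
            correct_votes = sum(rng.random() < p_correct for _ in range(n_voters))
        wins += correct_votes > n_voters / 2
    return wins / trials

for n in (11, 101, 1001):
    print(f"n={n:5d}  independent: {majority_accuracy(n):.3f}"
          f"  with common shocks: {majority_accuracy(n, p_common_shock=0.5):.3f}")

p. With independent voters the majority's accuracy climbs towards certainty as the group grows, which is the Jury Theorem; with a shared source of error it plateaus well short of that no matter how many voters are added, which is why the independence assumption does so much of the work and why the correlated-error extensions matter.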

51

LFC 09.21.11 at 3:14 pm

The article linked by Henry (at 48 above) argues, among other things, that different kinds of rules are appropriate for online projects with different goals: Wikipedia’s rules should be geared to encourage toleration of diverse pts of view, b/c its goal is “knowledge generation”; Daily Kos’s goal is collective action, thus it should have different rules. Quibble: is Wikipedia’s goal accurately described as ‘knowledge generation’? After all, Wikipedia explicitly bans any original research from appearing in its articles (though the ban may not be well enforced). This suggests that Wikipedia’s goal is perhaps better described as ‘knowledge dissemination’ rather than ‘knowledge generation’.

52

mor 09.21.11 at 3:18 pm

Henry,
Sense about wikipedia from a vet.
http://ocham.blogspot.com/search/label/wikipedia

53

William Timberman 09.21.11 at 3:54 pm

Sev @ 47

It’s an interesting analogy, but I don’t see it as how the most interesting examples of crowd-sourcing work, although I’ll admit that the distributed processing with the SETI PC widget from some years back was pretty clever and useful.

Under the right circumstances, such as a functioning representative democracy, or a cleverly constructed bit of querying machinery, as in the FoldIt example, what you have isn’t massive trial and error; rather, it seems to be the controlled engagement of many minds, each of which has its own history of perceptions, cognitions, its own memory pathways, etc.

If what is keeping you from finding a solution is a poverty of associations, crowd-sourcing definitely helps in a way that even massive repetition may not. And then, of course, there are dreams, which serve a similar function in single individuals. (I’m thinking here of Kekulé’s epiphany about the ring structure of benzene, and such-like reports from other theorists.)

54

Frank in midtown 09.21.11 at 6:30 pm

I’m all about the wisdom of the crowds, except every now and again we find out that what everyone knew to be true was wrong. I would say it’s just like regression to the mean which works every time, nevermind that means move.

55

chris y 09.21.11 at 10:14 pm

I’ve always thought you shouldn’t let juries deliberate. Just make them larger and take a vote.

This was how they worked in Periclean Athens. The system gets a bad press because an Athenian jury condemned Socrates on this method and the European intelligentsia has never forgiven it for this. Personally, I think it’s a damn good idea and that the outcome of Socrates’ trial is a point in its favour.

Bloix, thank you. That’s the first natural language account of the Monty Hall problem that I’ve ever found convincing without qualification. Please can I nick it for future reference.

56

F 09.22.11 at 4:54 am

It is not surprising that Wikipedia is poor at dealing with politically contentious issues. What is surprising is that Wikipedia is excellent at everything else. Which makes it useful in 90% of all applications.

57

Gene O'Grady 09.22.11 at 5:41 am

I’ll pass on the Socrates case, except to note that pretty clearly he got the verdict (and especially the penalty) he wanted, and whether you take Plato’s explanation or Xenophon’s depends on whether your opinion of Plato is as low as mine, but the condemnation of the generals after Arginusai is a far better indication of how hideous that system could be.

58

chris y 09.22.11 at 9:09 am

but the condemnation of the generals after Arginusai is a far better indication of how hideous that system could be.

Very good point. On the other hand, that case wouldn’t come to trial in most countries* these days. There are checks in place to prevent it.

*There are exceptions. We await the verdict on the Italian seismologists with interest.

59

LFC 09.22.11 at 2:28 pm

It is not surprising that Wikipedia is poor at dealing with politically contentious issues. What is surprising is that Wikipedia is excellent at everything else.

Several months ago I happened to look at the Wikipedia entry on Talcott Parsons. Lots of info, not well written, lots of typos and infelicities, with open arguments at the end of the piece about who did or did not attend his funeral (or was it memorial service). If you call that excellent, fine; I call it a mess. (Maybe this particular entry has been improved since I looked at it; chances are it hasn’t.) Wikipedia is very good on some things, but it is not “excellent on 90% of applications.”

60

Tim Wilkinson 09.22.11 at 2:50 pm

WP – viewed as a source rather than a source of sources – is pretty good for maths and science, where (a) there’s little motive for bias or falsification, (b) the results can be checked for consistency with what is already known, so become almost self-verifying, (c) it’s unlikely that anyone is going to convince themselves they know all about the topic when in fact they don’t. In some cases, where bias seems highly unlikely, you can tell with reasonable certainty that the entry is basically an article written by someone who knows what they’re talking about, so for general knowledge purposes it’s probably (but defeasibly) good enough.

For most things, it’s not to be relied upon but treated as a convenient selection of links to other sources (which in turn may or may not be suitable for being relied on). I think the organisers should take more cognisance of this and require sources to be checkable – e.g. it should be stressed much more than it currently is that ‘citation needed’ = ‘do not accept’; where printed matter is referenced, an extensive verbatim quote with page number and edition should be supplied; and dead links should not be allowed to stand as sources.

Again, a key issue in the wide range of things called ‘crowdsourcing’ is independent verification. See ##30, 50, indeed all comments numbered with multiples of 10.

61

Alex 09.22.11 at 3:07 pm

The other reason why wisdom-of-crowds arguments are appealing is that most people find liberty, representation, and the respect of individual agency to be normatively good. If you think you have a right to an opinion, that the people involved in a decision have a right to be heard, and that their status as reasonable beings is valuable in and of itself, you’re unlikely to derive a theory of optimal public choice that leads towards dictatorship.

This is a little “duh” but worth remembering.

Meanwhile, Monty Hall. The problem I always have with it is that the formulation seems to suggest that you would be better off picking a door again no matter which door you pick second time around. You know, after Monty shows you a goat, that there are now one goat and one Lincoln remaining behind the two doors. (Monty, as Bloix and the Quig point out, always shows you a goat.)

But this tells you nothing about which is which. If you pick again, you have a 50% chance of the Lincoln as against 33% first time out. By definition, if the chance of winning the Lincoln is 50% for one door, it must be for the other, as it has to add up to 100%. So you end up with the conclusion that it is rational to change your mind, whatever you change it to. In fact you’re better off as soon as you undertake a second act of choice – even if you don’t actually shift. So what is the difference between thinking about it, then picking the same door again, and not thinking about it?

Obviously, if you picked the door Monty opened, you should change (unless you like one goat more than all the goats you could buy for the resale price of the Lincoln) and indeed it doesn’t matter which you pick. But I think that’s a reasonable exclusion.

62

Henri Vieuxtemps 09.22.11 at 3:29 pm

The Monty Hall thing becomes clear if you change the rules like this: we have a deck of cards, 52 cards face down in front of you, and to win you need to guess which one is, say, the ace of hearts. You pick one at random and say: ‘this is the ace of hearts’. The host knows which one is the ace of hearts, and he’ll now turn over and show you 50 cards, none of which is the ace of hearts. So, at this point there are only two cards still left lying face down: the one you picked, and another one. The ace of hearts is one of these two. You now have the possibility to change your choice. Should you? Why, of course you should, and now it’s intuitively obvious.
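Henri’s card version is also easy to check by brute force. Here is a minimal simulation sketch in Python; the function and variable names are my own, chosen purely for illustration, and the only rule the code relies on is the one stated above, that the host never reveals the ace:

```python
import random

def simulate(n_cards=52, trials=100_000):
    stay_wins = switch_wins = 0
    for _ in range(trials):
        ace = random.randrange(n_cards)    # where the ace of hearts really is
        pick = random.randrange(n_cards)   # the contestant's blind first guess
        # The host, who knows where the ace is, turns over every remaining card
        # except one, never revealing the ace. So the two face-down cards are
        # the pick and one other card, and that other card is the ace
        # whenever the first pick was wrong.
        stay_wins += (pick == ace)
        switch_wins += (pick != ace)
    print(f"stay:   {stay_wins / trials:.3f}")
    print(f"switch: {switch_wins / trials:.3f}")

simulate()            # the 52-card version: roughly 0.019 vs 0.981
simulate(n_cards=3)   # the original three-door game: roughly 0.333 vs 0.667
```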

63

QB 09.22.11 at 4:51 pm

Bloix #32 :

“I personally had a lot of trouble accepting that Marilyn Vos Savant was right, because I couldn’t understand how, after you’d chosen one curtain, Monty showing you a goat behind a different curtain communicated any information about the true location of the prize. And if Monty is not communicating information, then the odds from your point of view should not change from 1/3.”

Your intuition that Monty’s action is independent of whether your door contains the prize, and so the probability of your door containing the prize is unchanged by his action and remains 1/3, is correct. But then it follows that the complement event (your door does not contain the prize, i.e. the prize is behind the remaining unopened door) has probability 2/3 and you should switch.

As Henri #62 says, the result is probably easier to see in the case of a game with a larger number of doors… 52, or a thousand, or a million.
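QB’s complement argument amounts to a one-line calculation; written out (the notation here is mine, not the comment’s):

\[
P(\text{prize behind your door}) = \tfrac{1}{3}
\;\Rightarrow\;
P(\text{prize behind the other unopened door}) = 1 - \tfrac{1}{3} = \tfrac{2}{3},
\]

and with \(n\) doors or cards, switching wins with probability \(1 - \tfrac{1}{n} = \tfrac{n-1}{n}\), i.e. 51/52 in Henri’s card version.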

64

Bloix 09.22.11 at 8:38 pm

#61 – “But this tells you nothing about which is which.”

Ah, but it does, it does – 2/3ds of the time it does. This is what was so hard for me to understand. (I once had dinner with a fellow who gave me the H.V. explanation in #62, and I still wouldn’t accept it because I couldn’t understand the mechanism – and once I figured out that he was right I was so angry I never spoke to him again.)

Alex, bear with me.

Before she chooses, the contestant’s odds of choosing correctly are 1 in 3 (1/3).

Suppose that, randomly, she chooses a door that hides a goat. From her point of view, her odds have not changed – she says to herself, “I have a 1/3d chance of having chosen correctly.” (Monty knows, of course, that at this point her chances are zero.) Now Monty must open another door. Does he open the door hiding the Lincoln? Of course not. If he did, the contestant’s chances would rise to 100%. Monty does not want that. Therefore, he must open the door hiding another goat.

Now the contestant can see where one of the two goats is. Where is the other? If she reasons heuristically, she will say, “behind the two closed doors, there is one goat and one car. The odds are equally unknown for where they are, so there is a 50% chance that the car is behind door 1 and a 50% chance that it is behind door 2. So it is a matter of indifference whether I switch.”

This, Alex, is what you’ve concluded. But the fact that the location of the car and goat are both unknown does not imply that they are equally unknown. And in fact they are not equally unknown. Here’s why:

The contestant has a 1/3 chance of choosing the Lincoln the first time out, and a 2/3ds chance of choosing a goat, right? Put another way, if the contestant chooses 99 times, she will choose the Lincoln 33 times. If she has a hard and fast rule, “I never switch,” she will win 33 Lincolns out of 99 contests. So, Monty’s showing her a goat does not increase the odds that the door she chose is the right door to 50%. It is still only 1/3rd. Even though he has opened the door, her choice is correct only 33 out of 99 times.

Where is the Lincoln the other 66 times? It is behind one or the other of the two remaining doors. And once Monty opens one of those doors and shows the contestant a goat, the car must be behind the remaining door. That is, out of those 66 times, 100% of the time it will be behind the remaining door. 66% of 100% is 66% – so 2/3ds of the time, the car will be behind the door that the contestant did not choose and that Monty did not open.

But where in this is the communication of information?

Well, because the contestant has a 2/3ds chance of choosing a goat, out of 99 choices she will choose a goat 66 times. Once she’s chosen a goat, Monty must open the door hiding Goat 2. And once he does that, he has communicated to the contestant that the Lincoln is behind the door that the contestant has not chosen. That is, if she always switches, out of the 66 times she chooses a goat she will get a Lincoln 66 times, and out of the 99 times she chooses, she will get the Lincoln 66 times.

So, even though after Monty opens a door the location of the Lincoln and the remaining goat are both unknown, they are not equally unknown. 1/3d of the time the Lincoln is behind the door you’ve chosen, while 2/3ds of the time it is behind the door you didn’t choose and that Monty did not open.

The reason this is hard is that although 2/3ds of the time Monty has given the contestant useful information, and 1/3rd of the time he’s provided useless information, the information he’s provided is on its face the same – he’s truthfully told us where one of the goats is. The difference is not in the information, but in the contestant’s choice.
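Bloix’s 99-contest tally can be reproduced by exhaustively enumerating the three equally likely first picks. The short Python sketch below is my code, not Bloix’s; it counts wins for a contestant who never switches against one who always switches:

```python
# Exhaustive version of the 99-contest tally described above.
# Put the Lincoln behind door 0; by symmetry only the contestant's pick matters.
LINCOLN = 0
doors = [0, 1, 2]

stay_wins = switch_wins = 0
for pick in doors:  # the three equally likely first choices
    # Monty opens a door that is neither the contestant's pick nor the Lincoln.
    # (When the pick IS the Lincoln he could open either goat door; fixing one,
    # as this line does, doesn't change the tally.)
    monty_opens = next(d for d in doors if d != pick and d != LINCOLN)
    switch_to = next(d for d in doors if d != pick and d != monty_opens)
    stay_wins += (pick == LINCOLN)
    switch_wins += (switch_to == LINCOLN)

# Each of the three picks happens 33 times in 99 contests, on average:
print("never switch:", stay_wins * 33, "Lincolns out of 99")    # 33
print("always switch:", switch_wins * 33, "Lincolns out of 99") # 66
```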

Comments on this entry are closed.