The Market for Predictions

by Henry Farrell on August 25, 2009

Andrew Gelman and John Sides have a “very good piece”:http://bostonreview.net/BR34.5/ndf_election.php at the _Boston Review_ on why journalists and pundits got so much wrong about the 2008 presidential election, with responses by Rick Perlstein, Mark Schmitt, and others. In their reply to the responses, John and Andrew say:

bq. Will these efforts get political scientists invited to Joe Scarborough’s kaffeeklatsch? Probably not. The media ecology fetishizes novelty in reporting and certainty in commentary. And yet the academic study of elections shows that what is certain is almost never new, and what is new is almost never certain. We might only bore Fox & Friends with our scholarly qualifications and caveats, or simply look foolish trying to present our research in soundbites.

This overlaps in interesting ways with this “review article”:http://www.nationalinterest.org/Article.aspx?id=22040 by Philip Tetlock on political and economic forecasting in _The National Interest._

bq. To appreciate why, it is helpful to think like a game theorist—to take the forecasters’ perspectives and consider the incentives confronting them. Both private- and public-sector prognosticators must master the same tightrope-walking act. They know they need to sound as though they are offering bold, fresh insights into the future not readily available off the street. And they know they cannot afford to be linked to flat-out mistakes. Accordingly, they have to appear to be going out on a limb without actually going out on one. That is why (with the interesting exception of Bueno de Mesquita), they so uniformly appear to dislike affixing “artificially precise” subjective probability estimates to possible outcomes—the only reliable method we have of systematically tracking accuracy across pundits, methods, time and contexts. It is much safer to retreat into the vague language of possibilities and plausibilities—things that might or could happen if various difficult-to-determine preconditions were satisfied. The trick is to attach so many qualifiers to your vague predictions that you will be well positioned to explain pretty much whatever happens. China will fissure into regional fiefdoms, but only if the Chinese leadership fails to manage certain trade-offs deftly, and only if global economic growth stalls for a protracted period, and only if . . . And if you venture specific policy recommendations—such as invading Iraq or deregulating financial markets—make sure to leave yourself the fallback position: “Well, of course, my recommendation was fundamentally sound, but how was I to know that the idiots in charge would implement things so badly. If only they had . . . ”

bq. Having mastered this subtle balancing act, why should these private- and public-sector pundits open their reputations and livelihoods to the unpredictable risks of competing against each other in level-playing-field forecasting exercises?

As Andrew and John say, there is a premium on bold, ‘innovative’ claims in the market for pundits, but as Tetlock suggests, there is also some room for artful hedging, so that apparently unambiguous predictions turn out on closer examination to be carefully hedged. Myself, I’d add three qualifications.

First – that the bold, innovative claims can’t be too bold, or too innovative. They certainly shouldn’t challenge basic precepts of American public debate such as the awesomeness of US world leadership. Even better if you can dress mutton as lamb, by presenting old chestnuts as if they were exciting, contrarian insights (a good half to two thirds of the average US policy journal consists of articles trying to pull off this trick, and usually failing).

Second, that the incentives to engage in careful hedging may be considerably weaker than Tetlock suggests. There aren’t any very good repositories of public memory in the arena of US public debate – pundits who make grand pronouncements that turn out to be horribly wrong usually aren’t called on them. Here, blogs have served as a modest corrective, reminding op-ed writers of past confident assertions that they would likely prefer to have stayed buried. However, the emphasis should be laid on the word ‘modest.’ Amity Shlaes, Max Boot et al. still hold nice fellowships at the Council on Foreign Relations, despite the scorn and ridicule rightly heaped upon their heads.

Third, that in any event, there are herding effects – the costs of making bold, confident assertions that turn out to be wrong are much lower when lots of other prominent people are making the same public mistake. In the event that the pundit is called on his mistake later, he can not only point out that others made the same mistake but can likely rely on those others to help circle the wagons against the nay-sayers. I still haven’t read a good intellectual history of the Iraq war debate in the US, but these three factors all seem to me to have played a significant role in producing the horrible public debates that we saw in 2002-2004. I “mentioned”:https://crookedtimber.org/2009/08/20/brandin/ Diego Gambetta’s wonderful book on signalling among criminals in my last blogpost – it seems to me that there is scope for a similar book about signalling and strategic interaction among US foreign policy, journalistic and political elites interested in coordinating with each other, accruing resources, etc. This could perhaps really get at the underlying pathologies of political debate in this country.

{ 15 comments }

1

TGGP 08.25.09 at 10:45 pm

When I saw the title of this post I assumed you were going to be talking about Robin Hanson’s “prediction markets” idea. Like both Hanson and Bryan Caplan, I’m in favor of betting norms as a tax on lazy punditry.

“They certainly shouldn’t challenge basic precepts of American public debate such as the awesomeness of US world leadership.”

Reminds me of Steve Walt’s ten commandments for ambitious policy wonks.

2

Andrew 08.25.09 at 11:41 pm

Isn’t part of this discussion the perennial spat between Serious People and DFHs?

There was, in fact, terrific and groundbreaking work done before the 2008 election, not least Nate Silver’s 538.com, and some of the opinion poll analysis at Pollster.com.

There’s a clear distinction between people like Nate and the Pollsters who may (or may not) have an identifiable sympathy towards one political wing, but who attempt to make their methodology clear and transparent, and those who are basically engaged in rhetoric and/or spin.

Silver/538’s work was testable – he was putting up clearly falsifiable forecasts, and his final forecast wasn’t just pulled out of the air; it was backed up by months of updated analysis and data. (I especially enjoyed the field reports from Republican and Democrat campaign teams in the different states. The enthusiasm gap was clear, but again there was no attempt that I could see to present the best of one side and the worst of the other.)

3

Aaron Swartz 08.26.09 at 12:36 am

I think this blog post needs a link to http://wrongtomorrow.com/ — I’m frustrated that more blogs haven’t been linking to it and using it.

4

Henry 08.26.09 at 2:10 am

I hadn’t linked to it for the simple reason that I hadn’t heard of it – awesome. I need to digest this.

5

John Emerson 08.26.09 at 2:19 am

Amity Shlaes, Max Boot et al.

Lazy writing. At least twenty individuals should have been named, starting with William Kristol, James K. Glassman, and Kevin Hassett. Or you could just have written “fifty to a hundred major conservative pundits.”

6

John Emerson 08.26.09 at 3:00 am

Much of Gelman’s discussion is about right, but I think he slips when he asks whether the results were a mandate. He seems to think that a mandate is a fact which can be known, whereas to call something a “mandate” is to propose an interpretation of the facts, and not an analysis of the facts. Those who declared that Obama had a mandate were proposing that Obama should be regarded as having a mandate, and they were not wrong in so proposing. They just failed, as much as anything because Obama did not want a mandate, since bipartisan futility was his goal all along.

Immediately after the record-breakingly narrow 2000 Presidential election, George Will pooh-poohed the very idea of a mandate and proposed that Bush act as if he had one, and Bush was very successful in doing so. By Will’s standards Obama did have a mandate, and by everyone else’s standards he had a reasonable claim to having a mandate. But lo! — the standards for mandates had miraculously changed since 2000. If Obama had gotten 60% of the votes and 99% of the electoral votes, he still wouldn’t have had a mandate.

Which would have been fine with him, since changing things was not his goal.

7

Banned commenter 08.26.09 at 3:04 am

So once again the subject is handicapping the mean rather than policy itself: predictions rather than prescriptions, quantifications rather than values.

8

Ceri B. 08.26.09 at 7:01 am

Because I’m a hopeless geek, I was reminded both of Robert Altman’s movie (and Michael Tolkin’s book) The Player, and Warren Ellis and John Cassaday’s comic book Planetary.

From the movie: “It lacked certain elements that we need to market a film successfully.” “What elements?” “Suspense, laughter, violence. Hope, heart, nudity, sex. Happy endings. Mainly happy endings.”

I don’t have the graphic novels to hand, but the major villain in Planetary explains that he and his associates are old, powerful, and get bored, so they’re letting the heroes provide a challenge, but mustn’t get to thinking that they can actually win.

I think this is the sort of framework in which American punditry makes sense. It’s there to entertain. A good story needs some drama and surprises. But people don’t go to see romantic comedies and expect many potential couples to end miserably and apart, nor to action movies to see heroes suffer pointlessly and then die as villains triumph. Punditry serves various audiences that want to feel like there’s a basic legitimacy and reliability in American politics, and that includes a measure of real challenge but no more than a measure. Contrarianism is most welcome to the sponsors and target audiences when it ends up reaffirming how good we’ve got it.

9

Ceri B. 08.26.09 at 7:16 am

BTW, I would welcome a note on the side or in any convenient-for-moderators way about what in my last post triggered a hold for moderation. I don’t see anything spam-like in it, but that doesn’t mean it isn’t there. Thanks.

10

Barry 08.26.09 at 10:12 am

John Emerson (re – some people on the CFR)
” Lazy writing. At least twenty individuals should have been named, starting with William Kristol, James K. Glassman, and Kevin Hassett. Or you could just have written “fifty to a hundred major conservative pundits.””

How about simply ‘the entire CFR’?

11

Eric L. 08.26.09 at 12:04 pm

@6: Lazy citing. Gelman and Sides, not Gelman.

12

Barry 08.26.09 at 1:41 pm

Eric, maybe the citer didn’t want to take sides :)

13

Patrick C 08.26.09 at 4:42 pm

As for Tetlock’s article, I doubt assigning precise probabilities would be particularly helpful in verifying prognostications.

To verify a probability distribution you need several, ideally many, samples. In prognostication you only get one. So if I say that a future event will occur with probability .001% and that event occurs, I can say I wasn’t wrong; it just so happened that an improbable event occurred.

I suppose we might be able to use a number of wrong probability predictions to get a broad estimate on pundit accuracy, but this would be difficult because they’re estimating a different distribution with each prognostication.
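
For what it’s worth, here’s a rough sketch (in Python, with forecasts invented purely for illustration) of how something like a Brier score would aggregate accuracy across many probabilistic predictions – no single improbable outcome settles anything, but a long run of them does:

```python
# Rough sketch: scoring a pundit's probabilistic forecasts in aggregate.
# The forecasts below are made up for illustration; each pair is
# (stated probability that the event happens, whether it actually happened).

forecasts = [
    (0.90, True), (0.80, True), (0.70, False), (0.60, True),
    (0.90, True), (0.30, False), (0.20, True), (0.10, False),
]

# Brier score: mean squared error of the stated probabilities.
# 0.0 is perfect; always saying 50/50 scores 0.25.
brier = sum((p - (1.0 if happened else 0.0)) ** 2
            for p, happened in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Crude calibration check: within each probability band, did events
# happen about as often as the pundit said they would?
bands = {}
for p, happened in forecasts:
    band = round(p, 1)
    hits, total = bands.get(band, (0, 0))
    bands[band] = (hits + int(happened), total + 1)

for band in sorted(bands):
    hits, total = bands[band]
    print(f"said ~{band:.0%}: happened {hits}/{total} times")
```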

14

Billikin 08.26.09 at 6:17 pm

Don’t forget the Jeane Dixon effect. It doesn’t matter whether you are right or wrong, just as long as you are confident. ;)

As for “the Iraq War debate” in the U. S., I missed that. Did we have one? All that happened was that the Congress officially ceded the power to declare war to the President.

15

John L. Taylor 08.26.09 at 7:23 pm

Barry @ 11: As in, a fence-citer. Ba-dum-ching!
