New Tools for Reproducible Research

by Kieran Healy on April 17, 2013

Clippy's Revenge

You can see this point made in somewhat more detail here.



CJColucci 04.17.13 at 3:55 pm

I’d never make a spreadsheet mistake like that — because I don’t know how to use a spreadsheet. (I’ll actually be learning in the near future, though, so maybe I can be responsible for a major screw-up.)


Gareth Rees 04.17.13 at 4:05 pm

Maybe the authors should prepare a presentation for the upcoming 14th Annual European Spreadsheet Risks Interest Group Conference?


Bruce Wilder 04.17.13 at 4:38 pm

Friends don’t let economists drive spreadsheets while conservative: impaired judgment leads to accidents.


Sam 04.17.13 at 4:42 pm

I had one job that required me to fill in a single spreadsheet of about five cells each day. It was to our most important client. I never failed to make at least one mistake.


scott 04.17.13 at 4:49 pm

This sort of incompetence, bordering on bad faith, would be career toxin for anyone, but for one critical caveat: they promoted the views that our oligarchs want promoted. One of the really stunning/amusing/appalling things of the last 5 years has been to see how comically stupid and self-deluded our leading economists and financial elites have been about the current crisis, committing serially moronic errors that in any other walk of life would have led to their dismissal and even humiliation, social shunning, etc. But if those errors serve the interests of the Right People, bygones. Amazing.


peggy 04.17.13 at 5:30 pm

“From the beginning there have been complaints that Reinhart and Rogoff weren’t releasing the data for their results” wrote Mike Konczal.

Is this SOP in the social sciences? I’m a biologist- people just look at the data. Theories and discussion are usually regarded as fluff to sugar coat the data. Also, if serious people cannot replicate one’s results, all hell begins to break loose. First, one is invited to share reagents and have others visit the lab to compare methods. Occasionally the FBI or the Secret Service gets called in to confiscate every notebook and computer to decide if fraud has been committed.

As a former bench scientist, I am astounded. The number of data points is so few that I could have replicated this study using a pencil and graph paper. In biology, megabytes and terabytes are where data manipulation starts becoming an issue. One spends a year or years in the lab collecting results with some preliminary analysis. Writing the paper in a competitive field is tossed off in a few days or even overnight. Facility is helpful, but bare literacy is enough. The analysis in the paper in question would, at MIT, be assigned as undergraduate homework not puffed up as an embellishment to one’s resume.

From reading Brad DeLong, I’ve become aware that macroeconomics is a field of dubious intellectual integrity because when reality contradicts theoretical predictions, no one changes their beliefs. Now I learn that playing “Hide the Data” is reputable behavior. This is beyond appalling. This is pretend science, about as rigorous as astrology.

And I need to point out that astrology was initially based on reasonably accurate astronomical observations by the Mesopotamians. This paper is pure, self-serving fraud that doesn’t rise to the level of astrology.


Barry 04.17.13 at 5:43 pm

peggy, I don’t know if all social sciences do this, but as pointed out above, economics certainly does seem problematic.

And in every field, results of use to the elites seem to get a pass.


b9n10nt 04.17.13 at 6:16 pm

peggy, I’m sure it’s not PURE, self-serving FRAUD, and historical research of course lacks the controls that allow researchers to tease out causation. But OK, your general point may stand.

However, my understanding is that your first paragraph is something of a caricature of research: few labs will waste valuable time (or win grants) merely to replicate someone else’s research, and many papers turn out to contain errors in methodology or analysis when they are investigated. Science is messy. Only in the aggregate, over many years, do rigorous, robust findings persist.


P O'Neill 04.17.13 at 6:31 pm

Same thing happened to some dude named Martin Feldstein in the early 1980s. But academic rigour prevailed and he was never heard from again.


peggy 04.17.13 at 6:52 pm

b9n- I won’t disagree- science is even messier if you’ve been in the trenches looking at it, even more so the second-rate stuff. That is why I wrote “if serious people cannot replicate one’s results”: only extremely important or astounding results get the critical attention I described. The fields I’ve worked in have been very competitive, with different teams publishing similar findings days to months apart, which does produce a desire to replicate results, and questions when they can’t be.

The culture of science is that one always has people looking over one’s shoulder, lab mates, advisors, competitors. In my graduate work in the 1970’s, I was taught always to write in indelible ink, and to cross out and never erase. I’m afraid that culture of permanent record may have been lost, but the primacy of data is still strong.


Colin Danby 04.17.13 at 6:53 pm

The graphic is brilliant.

Re the discussion immediately above, as far as I can tell from a quick read, all the data Reinhart and Rogoff used is public – it’s not like the results of a particular lab experiment.


Colin Danby 04.17.13 at 7:06 pm


peggy 04.17.13 at 7:26 pm

Barry-Yes, the elites get a pass.
As my husband wrote, “The actual operations of these purported intellectuals probably don’t have all that much influence. The power relations of our societies determine policy, the epistemic justifications follow. From the 30s through the 70s, the power of the middle and industrial working classes, and the fear of some sort of socialist rebellion from them, caused our rulers to accept policies that favored those groups. But those classes’ power has faded, and with no existential threat on the horizon, the big bourgeoisie need no longer concede so much, and policies that, however well they might work, involve some redistribution of wealth and income are … well, the word “discredited” is one I hear a lot.”

Fawning for the real elites is one way to make a living. I still maintain my right to naive outrage over the misuse of “scientific” legitimacy. I’ve studied with or worked with five members of the National Academy of Sciences and watched them uphold the type of ethical standards I’m discussing. I cannot express the concentrated disdain they would have for this level of inanity and fraud.


JW Mason 04.17.13 at 7:51 pm

Is this SOP in the social sciences?

No. SOP in economics is that publication requires making all your data publicly available, at least in most journals. The thing is, this paper was never properly published — it appeared in a “proceedings” issue of AER, as opposed to a normal peer-reviewed one. One problem is that an increasing proportion of economics work relies on proprietary data that legally cannot be shared. But that wasn’t the issue in this case, they just violated the norm.

I’m a biologist- people just look at the data. Theories and discussion are usually regarded as fluff to sugar coat the data. Also, if serious people cannot replicate one’s results, all hell begins to break loose.

Peter Dorman has an interesting argument that what distinguishes science from other kinds of knowledge is the greatly disproportionate weight science places on avoiding Type I errors, compared with avoiding Type II errors. In other words, in science, unlike other fields of activity, it’s far worse to make a false claim than to ignore or overlook a true one. Of course one consequence of this is that it’s not possible to adopt a scientific approach in all areas of life — there may be no claims about economic policy that reach a standard of proof that would be satisfactory in a scientific context, but policy decisions still have to be made.

The number of data points is so few that I could have replicated this study using a pencil and graph paper

But again, this isn’t a criticism of R&R, it’s just the nature of the beast. There are only so many countries and so many years. But perhaps the sin here isn’t failing to meet the standards of science, but pretending that’s what one is doing in the first place.

And I need to point out that astrology was initially based on reasonably accurate astronomical observations by the Mesopotamians.

Yes. Totally OT, but I’ve always thought there’s nothing incongruous about the fact that Isaac Newton (among many others) was both a pioneer of modern science and very interested in astrology and alchemy. Which domains of experience will turn out to be characterized by simple mathematical laws is not knowable ex ante.


John Quiggin 04.17.13 at 8:19 pm

As I read the story, RR made the data available, but not (until they were asked for it) the spreadsheet they used to do the analysis. SOP in economics requires sharing data, but has been more ambiguous about sharing code, particularly when it’s used with proprietary or home-brew software. Hopefully this episode will tighten things up a bit.


John Quiggin 04.17.13 at 8:24 pm

Worth pointing out the link to the dispute Chris mentioned recently about UK social mobility. When a policy topic is hot, a single study that gives the “right” answer can have a big impact. As JWM says, there’s no obvious answer here, since decisions about such issues have to be made at the time, not after all the evidence is in. But scepticism about convenient results is justified, whether they are convenient for us or for our opponents.


The Raven 04.17.13 at 9:08 pm

I think that, methodology-wise, the sensible thing to do is to use spreadsheets for exploration, then switch to a more formal, easily checked computer language like R for validation of the work.

It is a shame that few efforts have been made to combine these methods of computing.
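For instance, a minimal sketch (Python here rather than R, and with made-up numbers) of what “switching to code for validation” buys you: every row entering each average is explicit, and completeness can be asserted rather than trusted to a hidden cell range.

```python
# Made-up growth figures and debt buckets, standing in for a spreadsheet table.
growth_by_country = {"A": 3.0, "B": 2.5, "C": 2.0, "D": 0.0, "E": 2.5}
debt_bucket = {"A": "low", "B": "low", "C": "high", "D": "high", "E": "high"}

# A mis-dragged AVERAGE range can silently drop rows; this check cannot
# pass if any country is missing a bucket assignment.
assert set(growth_by_country) == set(debt_bucket)

# Group growth rates by debt bucket, then average each group explicitly.
buckets = {}
for country, growth in growth_by_country.items():
    buckets.setdefault(debt_bucket[country], []).append(growth)

means = {bucket: sum(v) / len(v) for bucket, v in buckets.items()}
print(means)
```

Because the whole computation is a plain text file, it can be rerun end-to-end and kept under version control, which is most of what “reproducible” means here.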


medrawt 04.17.13 at 9:09 pm

But re: Quiggin at 15, from what I saw the spreadsheet is a slightly higher-powered version of the sort of thing I use at my job all the time. While I’ve put some effort into building some useful tools (that need to be checked for these sorts of errors!), I wouldn’t call myself an Excel power user, and I certainly wouldn’t call anything involved “code”, even if that’s the properly literal description for it; and I share my tools with the people who need to vet them. In my circumstance, as an employee of a consultant/contractor providing services for a gov’t agency, we all have access to more or less the same data, and they’re double-checking our self-reported numbers regularly. When we disagree about something that comes down to those numbers, it’s a little awkward, because it’s about to become apparent that someone either miscoded a spreadsheet, misread a spreadsheet, or made a mistake with some medium-difficulty arithmetic and stats.

But it would be crazy for anyone to just sit on it and say “nope, I don’t care what results you’re getting with the same data set, you must be making a mistake, and no, I’m not going to show you how I arrived at my conclusion.” Of course we’re just, in the scheme of affairs, peons doing work, not highly credentialed academics producing influential scholarship used as evidence in debating national economic policy. So I guess it was acceptable for them to just sit back for a while.


Chris Mealy 04.17.13 at 9:40 pm

Gareth Rees, for a minute I thought that was a hilarious parody site (I even laughed), but then I remembered that the world does run on Excel spreadsheets. For extra fun:


Gareth Rees 04.17.13 at 10:25 pm

The botched bidding process for the UK’s West Coast Main Line franchise (cost: at least £40 million but likely much more) is another one.


rf 04.17.13 at 11:30 pm

“then switch to a more formal, easily checked computer language like R for validation of the work.”

Slightly off topic, but once you’ve learned a computer language like R, can you then pick up another, like Stata, easily? I’ve been trying to get a decent answer to that question, but every new answer I find contradicts the last (i.e. I’m trying to work out which to start with… though I can’t afford Stata, so it probably won’t be that, although it’s more relevant outside of academia).


Jeffrey Davis 04.18.13 at 12:17 am

The beauty part is that the mistake doesn’t change their conclusions.


Bruce Wilder 04.18.13 at 12:36 am

Colin Danby @ 12

Mark Thoma’s views are smart, but, for my money, not nearly smart enough. What he’s saying is that, even if you have integrity, the methods economists embrace — particularly the time-series regression methods in which Thoma has made his professional investment — are too weak to reach any reliable conclusion in any finite period of time. In a thousand years, or after the discovery of a thousand similar worlds scattered across the galaxy, we might be able to settle this controversy or that. In the meantime, researchers are discarding all data before 1984!

One might begin to suspect not the integrity of the data, but the integrity of the choice of method. What is it about the way economists choose to think about the political economy that leads them to embrace homeopathic methods of discerning fact by omission, distilling out of the data all realistic context? (The unfounded assumption that economic systems are stateless stands out to me.)

The tale of the faulty spreadsheet is surely an effective way to slander the work of Rogoff and Reinhart — and frankly, I think they deserve the scorn — but you didn’t need to see the spreadsheet to realize that their paper was hackish.


Barry 04.18.13 at 12:43 am

rf, r is object-oriented; the rest are command line driven (with a GUI on top).

The big question is which one you’ll need more now, and in the next few years. If you’re in a field which is SAS-heavy, then you’ll need to pick up SAS first (same for Stata, etc.).

If you can start with R, I’d do so. It’s free, and more and more packages are being written for it, to do all sorts of things.


Barker 04.18.13 at 12:53 am

re: Quiggin @ 15
I’m wondering if there have been any other incidents in your memory of an event like this “tightening things up”. Or, perhaps, how many of these events seem necessary before enough people concede that there is a problem?


rf 04.18.13 at 1:07 am

Thanks Barry


The Raven 04.18.13 at 1:36 am

rf, knowledge of statistics is the most important thing. That will be valuable regardless of what language you do your calculations in. Beyond that, using R requires the basic computer science knowledge of how to use a command line and how to write simple scripts. That is transferable, but variations in syntax between systems, though superficial, can slow the beginning student.

You ask about Stata in particular. My quick look shows that it offers a graphical programming method, and that is not something that was available in R, last time I looked. So that’s something that would have to be learned. Still, it is a statistical analysis tool, built on the same base of mathematical knowledge as all statistical tools, and that at least will be the same.

Barry, R also has a command line, no?


derrida derider 04.18.13 at 6:57 am

Without denying the work was sloppy, I feel a bit uncomfortable about the way people are slagging off at R&R as incompetent crooks rigging results to appease great and powerful friends.

People, this was not their life’s work – it was shit pulled together for a quick paper for a conference, originally never intended to make it into a proper journal, let alone be the basis for the entire world’s strategy to fight a financial crisis. Can the academics here honestly say that all such work of their own has maintained an impeccable standard?

The real villains here are those who searched the literature to cherry-pick something they could use at a political level, and ignored the lack of replication or even proper peer review when they found something they liked.


Syd 04.18.13 at 7:48 am

No, Derider, they’re hacks. This wasn’t “shit pulled together for a quick paper for a conference”. It was a lobbyist’s polemic:

Forty United States Senators from both sides of aisle gathered for an early morning closed briefing on our greatest national security threat — our unsustainable debt….The senators sat in rows, as if students again. The presenters were renowned economists Carmen Reinhart of the University of Maryland and Kenneth Rogoff of Harvard…

Johnny Isakson, a Republican from Georgia and always a gentleman, stood up to ask his question: “Do we need to act this year? Is it better to act quickly?”

“Absolutely,” Rogoff said. “Not acting moves the risk closer,” he explained, because every year of not acting adds another year of debt accumulation. “You have very few levers at this point,” he warned us….

Reinhart echoed Conrad’s point and explained that countries rarely pass the 90 percent debt-to-GDP tipping point precisely because it is dangerous to let that much debt accumulate. She said, “If it was not risky to hit the 90 percent threshold, we would expect a higher incidence.”…


Bruce Wilder 04.18.13 at 5:15 pm

derrida derider: this was not their life’s work – it was shit pulled together for a quick paper

You do realize they wrote a book, This Time is Different, which was given the full PR treatment, in 2010, complete with medals and nominations and inclusion on book lists and hyperbole gathered in from all quarters? The book is not nearly as hackish as the paper, though it was decidedly mediocre. Rogoff has, indeed, made a career out of being a reliable validator of whatever it is the powers-that-be want validated. Rather famously he was the point man for the defense, when Stiglitz turned on the IMF. He was a natural for the role of apologist for austerity. Oh, and he’s available to advise Republican Presidential candidates, whenever Mankiw is unavailable or already taken.

In economics, the Right is lousy with conviction, but lack integrity; the Center lack conviction and integrity; what passes for the Left, have integrity, but no conviction. So, pick your poison. They’re all incompetent boobs. The only question is what brand of incompetent boob, and who is paying for the marketing hype.


Ronan(rf) 04.18.13 at 6:01 pm

Thanks Raven, that’s very helpful as well. I’m thinking of the most general, basic level at the moment, so if most of that is transferable, then R seems like the best option


polyorchnid octopunch 04.19.13 at 10:37 am

Raven, Barry, and rf: I don’t know R specifically, but I do know programming. There are three main paradigms in programming: imperative, object-oriented, and functional. They all have strengths and weaknesses. Generally, learning your first language in any particular paradigm is difficult; learning further languages gets easier as you accumulate them.

Now, this is somewhat contingent; while pedagogical languages are strictly limited to one of the paradigms, practical languages in the real world will generally be amenable to using all three techniques. Furthermore, one can (through some careful discipline in how you name and structure your variables, methods, and functions) use one language to program in another paradigm, and any well-educated programmer will have been made to do at least one assignment (in my case, during the main compiler course at my uni) programming in a different paradigm than the one the language was designed for.

For everyone in the social or hard sciences doing a lot of data analysis, getting a good understanding of how the three paradigms work is (imho) crucial, while acquiring some specific skills in all three of them is undoubtedly important. If nothing else, it will permit you to at least assess the validity of the approach used in a program (lies, damned lies, statistics, and policy advocacy).

rf, what you’re doing in your spreadsheet is coding. The fact that there’s a very human friendly front end on it and that each of the bits you’re putting together is simple doesn’t make it not coding. In fact, simple is good coding; at the bottom of good software dev is taking a complicated problem, breaking it into its simplest components, coding each of those components, and then putting them all back together again to solve the problem.
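A toy illustration of that decomposition (the function names and figures here are hypothetical): a spreadsheet-style invoice calculation split into simple named pieces, each checkable on its own, then recombined to solve the whole problem.

```python
def vat(amount, rate=0.25):
    """Tax owed on an amount at a flat rate (a hypothetical 25% rate)."""
    return amount * rate

def line_total(qty, unit_price):
    """Pre-tax total for one invoice line."""
    return qty * unit_price

def invoice_total(lines):
    """Sum of line totals, plus tax on the subtotal."""
    subtotal = sum(line_total(qty, price) for qty, price in lines)
    return subtotal + vat(subtotal)

# Each simple component can be checked on its own...
assert vat(100.0) == 25.0
assert line_total(3, 10.0) == 30.0
# ...before trusting the recombined whole: subtotal 50.0, tax 12.5.
assert invoice_total([(3, 10.0), (1, 20.0)]) == 62.5
```

Each piece is as simple as a single spreadsheet cell formula; the difference is that the pieces have names and can be tested in isolation.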

A quick google of R shows that it’s amenable to techniques from all three paradigms. If you’re looking to get a good understanding of how to use it, you should probably start with learning a pedagogical language for each of them. REXX is an old IBM imperative language with a very natural syntax and is a good introduction to those languages. Regina is a free and open source implementation of that language. My first language was Object REXX which is an OO extension to REXX and is quite easily comprehensible, but crosses paradigms quite routinely; perhaps Moodle would be a better choice for you. Finally I got functional programming from Haskell.

One last note: make sure you realise that there is a difference between a language and an implementation of a language! The question of whether there’s a graphical front end (such as Excel, or the SAS system mentioned above) is a question of implementation rather than one about what kind of language it is. C is a language (and is clearly defined, with its definition undergoing ongoing revision: Kernighan and Ritchie, ANSI C, C99, etc.); the compilers gcc, clang, icc, OpenWatcom, Microsoft Visual C, etc. are implementations of the language. Specific implementations don’t ever really make it all the way to perfect conformance to the language spec. But hey… they’re not bugs, they’re features! ;)
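To make the three paradigms concrete, here is one small task, summing the squares of some numbers, written each way; Python stands in here since it accommodates all three.

```python
from functools import reduce

nums = [1, 2, 3, 4]

# Imperative: step-by-step mutation of an accumulator.
total_imp = 0
for n in nums:
    total_imp += n * n

# Functional: no mutation; fold an expression over the list.
total_fun = reduce(lambda acc, n: acc + n * n, nums, 0)

# Object-oriented: state and behaviour bundled together in an object.
class SquareSummer:
    def __init__(self):
        self._total = 0

    def add(self, n):
        self._total += n * n

    def total(self):
        return self._total

summer = SquareSummer()
for n in nums:
    summer.add(n)

# All three arrive at 1 + 4 + 9 + 16 = 30.
assert total_imp == total_fun == summer.total() == 30
```

Same answer three ways; the paradigm is a way of organising a computation, not a property of the answer.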

….aaaand that’s probably enough pedagogy for now :)


Tim Wilkinson 04.19.13 at 4:03 pm

Derrider @28: Without denying the work was sloppy, I feel a bit uncomfortable about the way people are slagging off at R&R as incompetent crooks rigging results to appease great and powerful friends…The real villains here are those who searched the literature to cherry-pick something they could use at a political level, and ignored the lack of replication or even proper peer review when they found something they liked.

Is that what you mean by a ‘wishy-washy post-structuralist explanation’ then, or is it just the plain cock-up theory that it appears to be?

Either way, it seems to have been blown quite spectacularly out of the water by Syd and shot down in flames by Bruce. Looks like you may need to stop relying on an exaggerated and emotional aversion to ‘conspiracy theory’ in evaluating factual theses.

To be fair, it’s a common enough failing (a failing, that is, if you’re trying to be fact-sensitive rather than clubbable). I have to say, I think I detect a trace of something similar in Bruce’s final, obiter, paragraph:

In economics, the Right is lousy with conviction, but lack integrity; the Center lack conviction and integrity; what passes for the Left, have integrity, but no conviction. So, pick your poison. They’re all incompetent boobs. The only question is what brand of incompetent boob, and who is paying for the marketing hype.

– which looks a bit like an attempt to mitigate the vulgarly conspiracist nature of the facts.

A neat use of abstract and systematic categories mitigates vulgarity, while conspiracism is almost entirely repudiated, in despite of previous observations. First an imputed ‘conviction’ is married to the manifest ‘lack of integrity’, spawning such tenuous possibilities as noble cause corruption or even merely cognitive bias. Then, we go all the way to ‘incompetent boobs’ as a description of the guy who seems to be doing very nicely out of his role as validator, apologist and point man, thank you very much, the only downside being that people he doesn’t give a toss about write sarcastic comments about him on the web.

Still a trivial failing compared to poor old derr/ider, who even when he’s trying to construct a hyperbolic conspiracy theory to illustrate what he’s ‘uncomfortable’ about, still seems impelled to specify that the alleged ‘crooks’ are allegedly ‘incompetent’.

…and that’s probably enough didacticism for now.


Bruce Wilder 04.19.13 at 8:02 pm

I don’t know that I was trying to mitigate anything. I had an epigram, and when you have an epigram, you have to use it, no? Just like you have to eat good chocolate, when you have it, or what’s the point?

Casting a pox on all houses is a cliché, of course, and not, by itself, a particularly enlightening one, but I thought the alliterative counterpoint had something to recommend it, though I did not commit myself on where Reinhart & Rogoff fall in my tripartite division. (Corrupt Center, lacking conviction or integrity, if you are curious.)

The Mark Thoma comment, which Colin Danby pointed to @ 12 above, is the voice of the soi-disant center-left liberals — those who have integrity, but lack conviction — those who would be inclined to elevate the quality of the political discourse with the gleanings of academic drudgery, but find their methods fall short, alas and alack. JW Mason @ 14 wants to double down on this conviction-less integrity:

. . . this isn’t a criticism of R&R, it’s just the nature of the beast. There are only so many countries and so many years. But perhaps the sin here isn’t failing to meet the standards of science, but pretending that’s what one is doing in the first place.

I despair at this attitude, and the general weakness on the center-Left and Left that it exposes. This was my target. Not a pox on all houses, but the observation that goodwill has no effective friends.


Martin Bento 04.20.13 at 5:02 am

Are there no consequences in academia for this sort of thing? I know these guys will see their reps take a hit, but, considering how much the academy lives off its credibility, doesn’t it seem something more concrete would be called for?

It’s similar to when Malhotra and Margalit, of Stanford, claimed in a popular magazine to have research showing Democrats to be more antisemitic than Republicans – claims that were parroted in the national media by the likes of Bill Kristol. The article was examined right here on this site, and, among other things, it came out that this research had not been peer-reviewed, though the article did not make this clear. When pressed on this, Malhotra claimed that peer review was probably becoming outdated anyway – weren’t all us commenters on the Internet providing plenty of review? But he still wouldn’t let us see his data so as to perform this grassroots review on the grounds that his data (wait for it) had not yet been peer-reviewed. Then he quickly exited the thread. Later, Margalit would say that the statistical claims of the article “should not be taken literally”. Statistics are supposed to be math, not myth; as metaphors they are meaningless.

AFAIK, no peer-reviewed version of this research, supporting these claims, ever appeared (someone please let me know if I’m wrong). Malhotra is not individually famous, so it seems to me unlikely that this calumny would ever have been published by a reputable source like the Boston Review if not for the Stanford imprimatur.

Shouldn’t Stanford be concerned about this? Isn’t this abuse of their reputation? Is this a legitimate thing for a tenure committee to look at?

And the same applies to the Universities that employ these two guys. What, if anything, should be the institutional response when researchers appear to have promoted false or misleading claims about their own research to further a political agenda? I assume there is one in cases of clear fraud, but we seem to be here in a domain that probably is not legal fraud, but nonetheless seems to be deliberate misdirection. If they claim it was actually inadvertent, shouldn’t they have to explicitly argue that in a venue of consequence and have that argument evaluated? Do people think public shaming is sufficient? Perhaps it is, but you can’t shame the shameless.


Bruce Wilder 04.20.13 at 8:11 pm

Are there no consequences in academia for this sort of thing?


It is hard not to be cynical, but I’ll try. Stanford has survived the Hoover Institution all these years. George Mason University, with its Mercatus Center, has done very well funded as part of the Kochotopus. John Yoo is still at Boalt, for sins far more egregious than any ever committed by any mere economist, let alone these economists. The reputation of the institution (and, if they know how to work it, its fundraising potential) is enhanced when faculty members enter public service at a high level, and everything in academia, including but scarcely limited to the conventions of tenure, is structured to protect the faculty member in public service (the senior or senior super-adjunct, not the junior — very important!) from the political risks. The institution is very self-consciously providing a safe harbor (and selling to donors the ability to create such safe harbors) for high-flying policy entrepreneurs and refugees. Disciplining faculty is something administrators are unlikely to do, except at the behest of substantial donors, and even then they are unlikely to advertise such discipline as related to standards of scholarship, except when that suits the donors. And, if it does suit the donors, the administrators may well take some reputational risk (and violate the spirit of tenure) with the community of scholars, who have notoriously short memories and little real leverage in the long run. Notre Dame dumped its unconventional economists, and Yale its unconventional business-school faculty, when it suited right-wing political interests.

The potential social sanctions of professional associations and communities, formal and informal, where one might hope standards of scholarship might count for more, of course, are more diffuse, and not so much in the hands of school administrators. It is here, where the degenerative state of economics, particularly the subfields of macroeconomics and financial economics, comes into play.

Not to repeat my own epigram, but righteousness requires both integrity and conviction, and that is not on offer in mainstream policy economics. To see what Rogoff and Reinhart have done as reprehensibly false and deceptive, you’d have to believe that useful and contrary knowledge is both possible and extant, fully supportable by accepted methods and results of scholarship. You’d have to be confident that true testimony about what economics can and does tell us, supported by vetted scholarship, would contradict Rogoff and Reinhart.

The tide has gone out in macroeconomics and financial economics, and I think most economists would regard the kind of knowledge Rogoff and Reinhart aspired to provide, as a marker on a distant and dry horizon, beyond the navigable reach of economics mariners. The principal results of financial economics and macroeconomics over the last 40 or 50 years have been null results: conclusions about what economists cannot know, and why they cannot know it. The Modigliani–Miller theorem, the Efficient Markets hypothesis, the Lucas Critique — these are null results (and foundation stones for what has followed!), and they justify, on the Right, an arrogant contempt for claims of positive knowledge (especially in violation of laissez faire policy), and on the Left, deep doubt and self-castration. The closest you can get to self-confident power in addressing the profession on policy is the careful jiu jitsu of Larry Summers, who will attack the “ketchup economics” of financial economics for its emptiness, but turns around and endorses laissez faire, when it suits his paymasters in the financial sector.

Harvard will chuck Larry Summers out for corruption and incompetence about a century after Columbia shows the notoriously corrupt and hackish Glenn Hubbard the door, and Larry Summers has cost Harvard at least several tens of millions of dollars, while Hubbard has been the loyal servant of some of Columbia’s most generous fund-raisers and donors. But, I digress toward cynicism, I fear. Let me return to how economists are likely to see the sin of Rogoff and Reinhart as venial at worst.

The null results, which have assumed such prominence in shaping the research projects of macro and financial economics, are not wrong, imho, but they have, because suitable and survivable antitheses have not emerged in a timely way, contributed to making those research projects degenerate. Macroeconomics — the community of scholars working in mainstream macroeconomics, today — knows less about the political economy than it did 50 years ago. Krugman, from his perch on the New York Times op-ed pages, and with a Nobel earned in international trade theory (where, incidentally, he helped a field, which had suffered severe degeneration, “discover” what earlier generations of scholars had known about implications of increasing returns), operates from an antique macroeconomics, in no way more sophisticated than what I learned in Gardner Ackley’s Intermediate Macro at the University of Michigan in the early 1970s, but nevertheless superior to its 21st century contemporaries. On this sadly thin basis, he plays the one-eyed man in the kingdom of the blind.

The comment of Mark Thoma, linked to by Colin Danby @ 12 above, and echoed in JW Mason’s comment, is quite elegant and accurate. Thoma has been making and refining this view for several years, and, as it pertains directly to his own scholarly research, it is almost poignant. Tyler Cowen, at Marginal Revolution, produced a very telling reaction. (Keep in mind that Tyler Cowen is a medium-price whore working in a fancy brothel, not an idealist like Thoma, who would probably wear Adidas to a meeting with the local business community, if his chairman didn’t chide him.) Tyler framed his take as reflections on the hazards of being famous and a policy scholar, and elaborated further with this addendum:

There is a genuine tension between becoming (and staying) “famous” and expressing all the appropriate levels of agnosticism on issues, which fairly often ought deserve quite an extreme agnosticism (see Mark Thoma on this). It is hard to do both, and you can see this tension in the writings of most if not all well-known economists, at least in their more public pronouncements. In the “good old days” that tension could be elided. Academic discourse took place at relatively closed seminars, no quick responses were required, word traveled slowly, back and forth was much less rapid, and in general transparency was lower all around.

I’ve seen the Reinhart and Rogoff book in airports around the world, even though it is to most people unreadable or at best boring. Could they have still made a splash if they had changed the title to This Time is Different: Why Inference from Macroeconomic Data is Really, Really Hard? I don’t think so.

It is hard to imagine a more direct acknowledgement of how the null results I mentioned have dominated the “thinking” of economists, and promoted the kind of solipsistic skepticism Tyler praises as “agnosticism”. It is worth noticing how Tyler’s pity for Rogoff and Reinhart works to flatter the less-than-famous, neutralizing the jealousy and envy that might otherwise fuel a more hostile community response from scholars eager to promote more well-founded views (if there were any). Krugman has complained that Tyler is wrongly saying that public intellectuals have to pander (he’s saying this to Tyler, head of the Koch-funded Mercatus!). But, of course, Tyler is right: public intellectuals, to get the attention that makes them famous, do have to find and keep an audience, just as political leaders sometimes have to fight to stay out in front of a mob, to maintain the credibility of their claims to the role of leader. In this case, being the spokes-model economist for what Power is determined to do requires a kind of fashion-sense, offering the Emperor not invisible clothes as in the fable, but clothes the Emperor finds convenient and appropriate to dress up in, given what He is determined to do.

We might wish academic economists — at least some of the brave ones — would want to speak Truth to Power, in the immortal phrase, and would be able to use the safety of academic appointment to bolster their courage as well as their credibility, but the history of recent scholarship has not generated any suitable confidence that they have any Truth to speak. Rogoff and Reinhart wrote a work of institutional economic history, something of which very little has been done in the mainstream, at least since 1940 (!), to prepare the groundwork for their little paper, and the many op-ed précis they derived from it. As Thoma points out, the methods of macro encourage the widespread practice of ignoring everything that happened prior to 1984. Rogoff and Reinhart were the sum total of the scholarship, and Kindleberger certainly wasn’t going to rise from the grave.

Even by the standards of their own scholarship, the paper and op-eds were pretty hackish, but, as I said earlier, any fair reader could see that without inspecting their spreadsheets. Finding fault with Excel is a clever critique, rhetorically and politically, since it affirms the possibility of sound scholarship, and reverses the implied policy imperatives, which is what is being contested, politically. But, academic economists are going to see what Rogoff and Reinhart did as hubris, not fraud, because mainstream scholars are in a place where they simply do not believe that claims of the kind (never mind the substance) Rogoff and Reinhart made are possible, based on evidence, given the methods of analysis and interpretation to which the community is committed. You can see the difference, I think, in Krugman’s several comments: policy-wise, he’s glad to see the credibility of austerity take a hit, but as a scholar, he’s been reluctant to be too hard on his fellow MITer, Rogoff.

Krugman’s own position, the “cyclical, not structural” therefore stimulus refrain, is as indefensible and wrong-headed as Rogoff’s apology for austerity, and only slightly less over-simplified. Neither view is intellectually admirable, though Krugman’s may be marginally more sympathetic to mass suffering. And, the same clubbiness thing that distorts Krugman’s views of his old colleague, Bernanke, comes into play: he makes unfounded assumptions about sharing a consensus on some Olympian level with his fellow, elite economist, and even when those assumptions become untenable, when he descends down the mountain, he finds it hard to speak clearly to the hoi polloi about what the disagreements with his elite-economist critics and opponents are about. The prominent place of null results in the common foundations of mainstream (macro and financial) economics has, sometimes, actually made it difficult for economists to even know what it is they disagree about in the high aerie of abstract theory, let alone to recognize those disagreements, when they are manifest down in the policy mudflats left bare by the ebbing tide of their shared degenerative research project.


derrida derider 04.22.13 at 3:04 am

Tim, you’ll be pleased to know that having read around a bit (including your link) I am now more disposed to think that R&R are less than intellectually honest, as well as being careless. Which is dumb on their part – notoriously, liars need to be very careful about getting their story straight, while honest people can afford inconsistency and bewilderment.

But I’m unrepentant on your other stuff – you WERE positing a sinister conspiracy based on little or no evidence.


rf 04.22.13 at 3:36 pm

Nice one, polyorchnid octopunch, thanks very much! I’ve had a look at REXX and it looks like an ideal starting point.


David 04.22.13 at 11:34 pm

rf: if you do try R, try using it with the RStudio front end. It’s much easier than the default console interface. See here for a video:

Somewhat on topic, R also has genuine tools for reproducible research which will store your code and text in the same place, from which it can generate PDFs with graphs, tables, etc.
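
For the curious, here is a sketch of what such a file looks like, in Sweave/knitr syntax (the model, chunk name, and filename are purely illustrative):

```latex
% report.Rnw -- LaTeX prose with R code embedded in chunks;
% Sweave or knitr runs the chunks and weaves results into the PDF.
\documentclass{article}
\begin{document}

<<model, echo=TRUE>>=
fit <- lm(mpg ~ wt, data = mtcars)
summary(fit)$coefficients
@

The estimated slope is \Sexpr{round(coef(fit)[2], 2)}.

\end{document}
```

Because the numbers in the text are computed from the code at build time, there is no copy-and-paste step where a Reinhart–Rogoff-style transcription error can creep in.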


faustusnotes 04.23.13 at 3:32 pm

rf, if I may, can I add a vote against R? It’s hard to learn, its handling of vectors is idiosyncratic, the error messages are incomprehensible, and the online help (through the forums) is incredibly rude. Worst of all, it can’t be trusted: it has problems with linear prediction, Andrew Gelman has identified convergence problems in its logistic regression functions, in the past it had problems calculating p values for time series analysis, and in general you just don’t know what it’s doing. Gelman (I think) once reported on some package (nlme4, I think) that had been unstable for so long that its developers had to use scientific notation for its version numbers. It’s also very RAM-hungry, so if you’re using big files it’s a disaster, and it isn’t designed to keep up with computing hardware.
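
The idiosyncratic vector handling is easy to demonstrate – a minimal sketch, runnable in any stock R session, of the “recycling” rule, where R silently reuses the shorter operand in elementwise arithmetic:

```r
# R recycles the shorter vector in elementwise arithmetic.
# Silent when the shorter length divides the longer evenly:
c(1, 2, 3, 4) + c(10, 20)   # 11 22 13 24 -- no warning at all
# Only a warning (not an error) when the lengths don't divide:
c(1, 2, 3) + c(10, 20)      # 11 22 13, plus a recycling warning
```

A length mismatch that would be a hard error in most languages goes through quietly here, which is exactly the “you don’t know what it’s doing” problem.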

I know this next statement isn’t going to make me any friends here, but here goes: it is written by amateurs and it has no quality control. Statisticians who develop code for R are not programming experts.

This isn’t to say it isn’t useful – R can be an excellent package when used for the right tasks, and I use it often. But there are other packages that are more trustworthy. If you want trustworthy statistical analysis with decent graphics, use Stata; if you want decent matrix maths but aren’t fussed about tailor-made stats procedures, use matlab; if you think you won’t be able to go past menu-driven functions then use SPSS; if you need to mesh with ODBC systems and legacy stuff, use SAS. If you want automation and you know its nuts and bolts very, very deeply – so you can trust your ability to work around its problems – then use R.

I think statisticians should be able to use all these packages and do whatever they need to in any of them, but you need to know their flaws and advantages, and as others here have said the most important thing is knowing the underlying statistical theory. But basically you can’t trust R to implement that theory in a robust way, because it’s open source, and open source has no quality control. My personal preference is for Stata for stats, and matlab for matrix maths and graphing. Stata is particularly good on modern hardware because it is optimized for multiple processors, which as far as I know is still a pipe dream for R. So I wouldn’t waste your time on R unless automation is your shtick – and then you best be very, very careful about the results you get.

Unless of course money is an issue – then R is the go, and provided you’re careful you’ll be fine!


rf 04.23.13 at 5:46 pm

Thanks David

“Unless of course money is an issue – then R is the go, and provided you’re careful you’ll be fine!”

You got it there faustusnotes

I think I’d go for Stata if it weren’t for the price (though actually paying for the thing might work as an incentive to learn it properly), but I’m pretty broke (and entitled – why the hell should I pay for anything!)

Your points are well taken, though. That’s what I’ve been noticing when reading about the different languages – there’s always an ‘R is great, but…’. And then, generally, I don’t really understand what the qualification is, so I can convince myself I’ll be able to work around it – which is the problem (for people like me, with little knowledge of computer languages, stats, etc.) when listening to Gelman et al: they’re speaking to people who understand the problems and can make allowances for them.

And then there’s the whole panoply of free extras with R (books, courses, tutorials, etc.). It’s really very difficult to make a rational decision under such circumstances!


faustusnotes 04.24.13 at 1:22 am

rf, if you want automation, Stata is really fiddly to learn. Overall I think it offers the best combination of R’s power and SPSS’s convenience, but for automation it’s a tad tricksy. If you are working with large datasets it is worth it: Stata plus a good computer costs about the same as SAS plus a crap computer, but performs much, much better with large data.

The other problem with R – and I think it can’t be overstated – is that the language is terse and hard to learn. It’s a steep learning curve and some of it is counter-intuitive, or at least kind of brain-bending. In this regard Stata is more straightforward (and commensurately less flexible).

But if you’re aiming to publish, and want to avoid rookie errors, I would recommend the investment in a stats package with decent help and no insider tricks. For example, at some point you’ll learn how to handle contrasts, and probably the textbook you learn from will teach a different way of coding contrasts from R’s default. You probably won’t realize what R uses until you’ve made a silly error, because R’s documentation is woeful. That sort of error is fine if you’re just playing around, but it can have serious professional ramifications if you get it wrong in published work (unless you’re an economist!)
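
To make the contrasts trap concrete, a minimal sketch of R’s default treatment (dummy) coding versus the sum-to-zero coding many textbooks teach:

```r
# R's factory default: dummy coding for unordered factors,
# polynomial coding for ordered ones.
options("contrasts")   # typically contr.treatment / contr.poly
contr.treatment(3)     # dummy coding: baseline level's row is all zeros
contr.sum(3)           # sum-to-zero: last level's row is all -1s
# Changing the global option silently changes what every
# lm()/aov() coefficient means from then on:
options(contrasts = c("contr.sum", "contr.poly"))
```

Same model, same data, different coefficient interpretations – and nothing in the regression output flags which coding was in force.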

Comments on this entry are closed.