Selection bias

by Henry Farrell on July 13, 2003

The _Economist_ has a long article asking whether or not companies are too risk averse to take proper advantage of new opportunities and a changing marketplace. Explaining the roots of corporate advantage is tricky stuff; conventional economic theory isn’t very good at telling us when efforts to innovate will succeed, and when they won’t. Economic sociologists do a slightly better job, but they still have difficulty providing useful lessons for business people. Which opens the way for all sorts of cranks and quacks, who offer dubious nostrums for business success with all the fervid enthusiasm of a 19th century medicine show charlatan. I’m referring, of course, to management “theorists.”

Now, I don’t want to dismiss them all out of hand – some interesting and serious work does get done at business schools. But management theory has more than its fair share of fakers, especially towards the populist end of the market. Grand, sweeping claims are made on the basis of dubious research, and of case studies chosen for no better reason than that they can be shoe-horned into the analytic boxes prescribed by the prevailing wisdom.

Which brings us back to the _Economist_ article. It gives a guided tour of business school research on creative ways to deal with changing circumstances. The survey gives the impression that the field is a mess: different management theorists shouting bland (but somehow mutually contradictory) business prescriptions at each other. For example, one major five-year research program comes to the conclusion that successful companies are distinguished from their peers by four factors – flawless execution, a company culture based on aiming high, a structure that is flexible and responsive, and a clear and focused strategy. Bloody obvious stuff, in other words, which any halfway intelligent senior executive could jot down on the back of a beermat in five minutes.

Why is this stuff so bad? Two possible reasons I can think of. The first is that management theory is less interested in the patient accumulation of real knowledge than in providing training-fodder for future managers. Probably not much can be done about this – b-school professors get whopping salaries for training MBAs, and are likely disinclined to change their ways. But the second reason is a problem that they could address without too much difficulty – their research methodology is often just awful.

The _Economist_ article gives an apparent example of this in its discussion of two highly influential books – _Built to Last_ by Jim Collins and Jerry Porras, and _Good to Great_ by Jim Collins on his own. Now, I haven’t read these books, so I’m going on the _Economist_’s description of what they have to say – it could be that they’re innocent of all the charges laid below, and are serious pieces of research. They’re almost certainly a considerable cut above the _Who Ate My Cheese_ genre of populist business nonsense. But it still sounds as though these books commit a serious social-science sin – “selecting on the dependent variable.”

What does this mean? Social scientists tend to believe that if you want to find out if a causes b by studying different cases, you need to be quite careful in choosing the cases. For example, if you want to argue that risk taking leads to business success, you want to look at cases of firms that are risk takers, and firms that are risk averse, and you also want to have cases of firms that are successful, and cases of firms that are failures. If you only study successful risk-taking firms, you’re cooking the books. It could be that there are many more risk-taking firms that are failures out there than successes – but because you’ve only chosen to look at the successes, you have no way of knowing this. You can thus end up providing pretty bad advice.
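The trap can be made concrete with a toy simulation (the numbers below are entirely made up for illustration, and have nothing to do with any real dataset). Suppose risk-taking firms have exactly the same _average_ payoff as cautious firms, just a wider spread of outcomes. If you only ever look at the successes, risk-takers will dominate your sample – and you'll conclude, wrongly, that risk-taking causes success:

```python
import random

random.seed(42)

N = 10_000
THRESHOLD = 1.5  # a firm "succeeds" if its payoff exceeds this cutoff

firms = []
for _ in range(N):
    risk_taker = random.random() < 0.5
    # Both strategies have the same mean payoff (1.0);
    # risk-takers simply have a higher variance.
    sigma = 1.0 if risk_taker else 0.2
    payoff = random.gauss(1.0, sigma)
    firms.append((risk_taker, payoff))

# Selecting on the dependent variable: keep only the successes.
successes = [f for f in firms if f[1] > THRESHOLD]

n_risk = sum(r for r, _ in firms)
share_risk_among_successes = sum(r for r, _ in successes) / len(successes)
mean_risk = sum(p for r, p in firms if r) / n_risk
mean_safe = sum(p for r, p in firms if not r) / (N - n_risk)

print(f"risk-takers in the full population: {n_risk / N:.0%}")
print(f"risk-takers among successes only:   {share_risk_among_successes:.0%}")
print(f"mean payoff – risk-takers: {mean_risk:.2f}, cautious firms: {mean_safe:.2f}")
```

Nearly every firm in the "success" sample is a risk-taker, even though the two strategies are indistinguishable on average. Only by including the failures – where risk-takers are equally overrepresented – does the true picture emerge.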

As far as I can tell from the summary in the _Economist_, _Built to Last_ makes a mistake of just this sort, when it tries to figure out the source of business success by looking

bq. at a small sample of companies (18) that had been persistently great over time. It suggested that endurance and performance were linked.

Apparently, so does _Good to Great_ which looks at companies which spectacularly outperformed the stock-market over a three year period, and argues that the source of their success is quietly determined CEOs who believe in high standards.

The problem is that studies which concentrate on successful firms can’t tell us anything very useful about the differences _between_ successful and unsuccessful firms, because they haven’t looked at the latter at all. We don’t know whether or not other companies, which also endured a long time, had below average performance. And very likely, many did (at least that’s the conclusion of another piece of research that _The Economist_ also cites, which sounds a little more satisfactory). Equally, companies that tanked may also have had quietly determined CEOs who believed in high standards (which in any event sounds like a very hazy and subjective judgement on the researcher’s part – how can you _really_ tell the one kind of CEO from the other?).

Case-studies play a big part in business-school education; they’re a useful way to make students work through the consequences of real life decisions. And they can, indeed, contribute to our more general stock of knowledge, if scholars use certain methodologies (such as ‘process tracing’) carefully. But management theory not only relies heavily on case studies – it all too often uses vague and wuffly concepts that reflect the pre-conceived biases of the researcher rather than grounded theories. Thus its rather extraordinary noise-to-signal ratio – it’s less suited to providing new insights than new jargon, to be wielded by McKinsey consultants in order to terrify staid managers into submission.



lawrence krubner 07.14.03 at 2:16 am

Also, the money splashing around inside of business and law schools distorts the social reality in which the researchers operate. Any grad student who is getting a Ph.D. in philosophy must have a genuine passion for philosophy. They obviously aren’t doing it for the money. At business and law schools, the opposite presumption is reasonable.

Concerns about social status also limit some of the informality that poverty forces on researchers, especially grad students, in many other fields.

Excellent research can be done despite concerns with social status, a lack of informality, and by people whose only interest is money. However, these factors do explain why researchers in these fields operate in an environment so different from that of most other university research departments.


pathos 07.14.03 at 2:29 am

The problem with business school courses and their unfortunate offspring, management consultants, is hard to determine because of their wifty terminology.

I believe, though, that the problem can better be viewed as a weak version of what Malcolm Gladwell recently discussed in the New Yorker in the context of foreign affairs – “creeping determinism.”

With creeping determinism, everything becomes immediately obvious in retrospect.

In the 1990s, when some companies became successful by “breaking all the rules,” that became the mantra. OBVIOUSLY, the way to succeed is to not do what everyone else is doing. When the rule-breakers turned out to be law-breakers, too, it suddenly became obvious that breaking all the rules was not the way to go. The rules were there for a reason, after all.

So, this generation of analysis consists of stuff that you find “bloody obvious.” This is because we’re starting with the conclusion and working backwards. You can’t do a real scientific study, because what if the results show that success among firms is random? Or that successful firms disproportionately benefit from government contracts or regulations? Or that they have a business plan that reads too close to Enron’s?

B-schools can’t do real research because of the real risk that it will be determined that teaching “management” is no different from teaching astrology.

Or maybe not. The key to good health is “eat reasonable portions, eat sufficient fruits and vegetables, and exercise regularly.” Everyone knows that. No one can debate it. But billions of dollars go into new research to determine if high carb foods cause more heart attacks than high sugar foods or some such nonsense.


Kieran Healy 07.14.03 at 3:48 am

Repeat after me, everyone: do not sample on the dependent variable.


dsquared 07.14.03 at 6:54 am

Indeed; but there are worse offenders. What about those medical schools who spent years and years ignoring the vast numbers of people who smoked cigarettes and didn’t die of lung cancer? I don’t think that sampling on dependent variables is necessarily always and everywhere a mortal sin.


Jurjen 07.14.03 at 10:07 am

The problem with any non-natural science is that it’s a lot easier to determine what not to do than it is to determine what is the right thing to do; the old saw of “we cannot determine what is true, we can only identify what is demonstrably not true and eliminate it.”

In 1994, doing my national service, I took part in a “computer-aided exercise” (CAX); the command posts at brigade level and above were in the field, with the actual combat units only taking part in a computer simulation. My brigade, and most of the rest of the Dutch army for that matter, was (theoretically) annihilated in the exercise. I asked our operations officer why this was, and he replied that the purpose of any military exercise is to learn from your mistakes, and the best way to achieve that is by losing. To me, this made sense.

As a result, it strikes me that when one feels the urge to publish an article or book with a title like “What really works,” the best thing one can do is go and lie down until the urge passes. Better to establish (and eliminate) what doesn’t work, methinks.

Comments on this entry are closed.