The post below, which arose out of some discussion in my philosophy seminar last week, is a fair bit less topical than most posts on CT, but since it touches on some topics in philosophy of science and economics some people here might find it interesting. Plus I get to bash Milton Friedman a bit, but not for the reasons you might expect.
In my seminar class last week we were reading over Milton Friedman’s The Methodology of Positive Economics and I was surprised by a couple of things. First, I agreed with much more of Friedman’s view than I had remembered from the last time I’d looked at it. Second, I thought there was a rather large problem with one section of the paper, one that I didn’t remember from before and that I don’t think has received much attention in the subsequent literature.
Friedman was writing (in 1953) in response to the first stirrings of experimental economics, and to results that seemed to show people are not ideal maximisers. The actual experimental data involved wasn’t the most compelling, but I think with 50 years more data we can be fairly confident that there are systematic divergences between actual human behaviour and the behaviour of the idealised agents typical of economic models. The experimentalists urged that we should throw out the existing models and build models based on the actual behaviour of people.
Friedman’s position was that this was too hasty. He argued that it was OK for models to be built on false premises, provided that the actual predictions of the model, in the intended area of application, are verified by experience. Hence he thought the impact of these experimental results was less than the experimenters claimed. When I first heard this position I thought it was absurd. How could we have a science based on false assumptions? This now strikes me as entirely the wrong attitude. Friedman’s overall position is broadly correct, provided certain facts turn out the right way. But he’s wrong that this means we can largely ignore the experimental results, as I’ll argue.
Why do I think Friedman is basically correct? Because read aright, he can be seen as one more theorist arguing for the importance of idealisations in science. And I think those theorists are basically on the right track. On this point, and on several points in what follows, I’ve been heavily influenced by Michael Strevens, and some of the justifications for Friedman below will use Strevens’s terminology.
Often what we want a scientific theory to do is to predict roughly where a certain value will fall, or explain why it fell roughly there. In those cases, we don’t want the theory to include every possible influence on the value. Some of these, although they are relevant to the value taking the exact value it did, are irrelevant to it taking roughly that value. In those cases, we can build a better theory, or explanation, or model, by leaving out such factors.
Here’s a concrete illustration of this (that Strevens uses). The standard explanation for Boyle’s Law – that for a constant quantity of gas at constant temperature, pressure times volume is roughly constant – is a model in which, among other things, gas molecules never collide. Now this is clearly an inaccurate model, since gas molecules collide all the time. But for this purpose the model works, and that tells us that collisions are not especially relevant to the value of pressure times volume, and in particular to that value being roughly constant. Since this model is considered a good model, despite the false assumption that gas molecules do not collide, it seems that in general we should be allowed to use inaccurate models as long as they work. That’s one of Friedman’s theses, and it’s worth highlighting.
- Idealised models, models that are inaccurate in a certain respect, are acceptable as long as that respect is irrelevant to the value you are trying to predict or explain.
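The collision-free gas model can even be sketched numerically. The following is a minimal kinetic-theory sketch, not any canonical textbook code: molecules are treated as non-interacting point particles with Maxwell–Boltzmann velocities (the mass below is roughly that of a nitrogen molecule, chosen for illustration), and pressure is computed from wall impacts alone. Halving the volume roughly doubles the pressure, so P·V stays approximately constant even though the model contains no collisions at all.

```python
import random

def box_pressure(n, volume, temp=300.0, m=4.65e-26, kB=1.381e-23, seed=None):
    """Pressure (Pa) exerted by n non-colliding molecules in `volume` (m^3).

    Collision-free idealisation: each molecule bounces off the walls
    independently, giving P = n * m * <v^2> / (3 * V).
    """
    rng = random.Random(seed)
    sigma = (kB * temp / m) ** 0.5  # per-axis Maxwell-Boltzmann spread
    mean_sq = sum(
        sum(rng.gauss(0, sigma) ** 2 for _ in range(3)) for _ in range(n)
    ) / n
    return n * m * mean_sq / (3 * volume)

# P*V is (roughly) the same at two different volumes, with no collisions anywhere.
pv1 = box_pressure(20000, 1.0, seed=1) * 1.0
pv2 = box_pressure(20000, 0.5, seed=2) * 0.5
```

The point of the sketch is just that nothing in the computation depends on whether molecules collide; the idealisation is invisible to the quantity being predicted.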
Let’s note two more related things about the gas case. First, there’s no way to tell whether the size of the idealisation – removing all collisions from the model – is large or small just by looking at how many collisions there are. By any plausible measure there are lots of collisions, yet they make no difference to the pressure-volume product.
Second, whether an idealisation is large or small is relative to what you are trying to model. (I got this point from Michael Strevens as well.) If you’re trying to model the speed at which a gas will spread from an open container, you’d better include collisions in the model, because collisions make a big difference to how fast the gas spreads. Friedman makes the same point by noting that air pressure makes a big difference to how fast a feather falls, and a very small difference to how fast a baseball falls from low altitude. Let’s note this as an extra point.
- Whether an idealisation is large or small is relative to what you are trying to model.
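Friedman’s feather/baseball point can be made quantitative with a rough sketch. The masses, areas, and drag coefficients below are illustrative ballpark figures, not measured values, and the integration is a crude Euler scheme: for a 10 m drop, ignoring air makes almost no difference to the baseball’s fall time, but a large difference to the feather’s. The same idealisation – no air resistance – is small in one application and enormous in the other.

```python
import math

def fall_time(height, mass, area, drag_coeff, rho=1.225, g=9.81, dt=1e-4):
    """Euler-integrate a fall from `height` (m) with quadratic air drag."""
    k = 0.5 * rho * drag_coeff * area / mass  # drag accel per (m/s)^2
    v = y = t = 0.0
    while y < height:
        v += (g - k * v * v) * dt
        y += v * dt
        t += dt
    return t

vacuum = math.sqrt(2 * 10 / 9.81)  # idealised no-air fall time, ~1.43 s
# Illustrative parameters: a ~145 g baseball and a ~2 g feather.
baseball = fall_time(10, mass=0.145, area=0.0042, drag_coeff=0.47)
feather = fall_time(10, mass=0.002, area=0.004, drag_coeff=1.0)
```

On these numbers the baseball’s time differs from the vacuum idealisation by a couple of percent, while the feather takes well over twice as long – the “same” idealisation, wildly different sizes.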
All of that, I think, is basically right, though it’s best to bracket issues about whether the idealisations really are small in the intended cases. Let’s assume for now that there are lots of nice models that idealise away from non-maximising behaviour, and that these models ‘work’ – they deliver surprising but well-confirmed predictions about economic phenomena. If so, the idealisations should, I think, be acceptable. The idealised models are then very nice arguments that these departures from ‘perfect’ maximising behaviour are irrelevant to the phenomena being modelled.
It’s at this point that I think Friedman goes wrong. Friedman says that at this stage we have some prima facie evidence that other models using the same kinds of idealisations are also going to be correct. And this strikes me as entirely wrong. It’s wrong because it’s inconsistent with the view of the models as idealisations rather than as accurate descriptions of reality.
Note that the structure of argument Friedman is trying to use here is not always absurd. If evidence E supports hypothesis H, and the best model for hypothesis H includes assumption A as a positive claim about the world, then E is indirect evidence for A, and hence for other consequences of A. That’s what Friedman wants. He says that the success of hypotheses in other areas of economics provides indirect support for the hypothesis that there is less racial and religious discrimination when there is a more competitive labour market. I think the idea is that the other hypotheses show that people are, approximately, maximisers, so when trying to explain the distribution of discrimination we can assume they are approximately maximisers.
But it should now be clear that this doesn’t make sense. Remember that the very same idealisation can be a serious distortion in one context and an acceptable approximation in another. Without independent evidence, the fact that we can idealise away from non-maximising behaviour in one context is no reason at all to think we can do so when discussing, say, discrimination. If we take Friedman to be endorsing the claim that it’s OK to idealise away from irrelevant factors, then at this point he’s trying to defend the following argument.
- The fact that people aren’t perfect maximisers is irrelevant to (say) the probability that various options will be exercised.
- Therefore, the fact that people aren’t perfect maximisers is irrelevant to (say) how much discrimination there is in various job markets.
And this doesn’t even look like a good argument.
The real methodological consequence of Friedman’s instrumentalism is that idealised models can be good ways to generate predictions about the economy, but every single prediction must be tested anew, because these models have little or no evidential value on their own. This conclusion might well be true, but I don’t think it’s one Friedman would want to endorse. But I think it’s what follows inevitably from his methodological views, at least on their most charitable interpretation.
fn1. Life’s too short to read all the commentaries on Friedman’s paper, so this last claim is not especially well backed up.
fn2. Some of the views I’m relying on are not published, but most of the details can be gleaned from the closing pages of this paper of Michael’s.