Idealisations in Economics

by Brian on March 3, 2004

The post below, which arose out of some discussion in my philosophy seminar last week, is a fair bit less topical than most posts on CT, but since it touches on some topics in philosophy of science and economics some people here might find it interesting. Plus I get to bash Milton Friedman a bit, but not for the reasons you might expect.

In my seminar class last week we were reading over Milton Friedman’s __The Methodology of Positive Economics__ and I was surprised by a couple of things. First, I agreed with much more of Friedman’s view than I had remembered from last time I’d looked at it. Second, I thought there was a rather large problem with one section of the paper that I didn’t remember from before, and that I don’t think has received much attention in the subsequent literature.[1]

Friedman was writing (in 1953) in response to the first stirrings of experimental economics, and to results that seemed to show people are not ideal maximisers. The actual experimental data involved wasn’t the most compelling, but I think with 50 years more data we can be fairly confident that there are systematic divergences between actual human behaviour and the behaviour of the agents typical of economic models. The experimentalists urged that we should throw out the existing models and build models based on the actual behaviour of people.

Friedman’s position was that this was too hasty. He argued that it was OK for models to be built on false premises, provided that the actual predictions of the model, in the intended area of application, are verified by experience. Hence he thought the impact of these experimental results was less than the experimenters claimed. When I first heard this position I thought it was absurd. How could we have a science based on false assumptions? This now strikes me as entirely the wrong attitude. Friedman’s overall position is broadly correct, provided certain facts turn out the right way. But he’s wrong that this means we can largely ignore the experimental results, as I’ll argue.

Why do I think Friedman is basically correct? Because read aright, he can be seen as one more theorist arguing for the importance of idealisations in science. And I think those theorists are basically on the right track. On this point, and on several points in what follows, I’ve been heavily influenced by Michael Strevens, and some of the justifications for Friedman below will use Strevens’s terminology.[2]

Often what we want a scientific theory to do is to predict roughly where a certain value will fall, or to explain why it fell roughly there. In those cases, we don’t want the theory to include every possible influence on the value. Some of these influences, although they are relevant to the value taking the exact value it did, are irrelevant to its taking roughly that value. When that is so, we can build a better theory, or explanation, or model, by leaving out such factors.

Here’s a concrete illustration of this (that Strevens uses). The standard explanation for Boyle’s Law – that for a constant quantity of gas at constant temperature, pressure times volume is roughly constant – is a model in which, among other things, gas molecules never collide. Now this is clearly an inaccurate model, since gas molecules collide all the time, but for this purpose the model works. That tells us that collisions are not especially relevant to the value of pressure times volume, and in particular to that value being roughly constant. Since this model is considered a good model despite the false feature that gas molecules do not collide, it seems that in general we should be allowed to use inaccurate models as long as they work. That’s one of Friedman’s theses, and it’s worth highlighting.

  • Idealised models, models that are inaccurate in a certain respect, are acceptable as long as that respect is irrelevant to the value you are trying to predict or explain.
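(An aside on the gas example, for readers who want the formula behind it: what follows is a standard kinetic-theory sketch of why the idealised model delivers Boyle’s Law – my own gloss, not anything taken from Friedman or Strevens.)

```latex
% Standard kinetic-theory sketch. Treat the gas as N non-interacting point
% molecules of mass m that bounce elastically off the container walls.
% Averaging the momentum the molecules deliver to the walls gives
\[
  PV \;=\; \tfrac{1}{3}\, N m\, \overline{v^{2}} \;=\; N k_{B} T ,
\]
% so for a fixed quantity of gas (fixed N) at fixed temperature T, the
% product PV is constant -- Boyle's Law. Collisions between molecules
% never enter the calculation, which is why idealising them away does
% no harm for this particular quantity.
```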

Let’s note two more related things about the gas case. First, there’s no way to tell whether the size of the idealisation – removing all collisions from the model – is large or small just by looking at how many collisions there are. By any plausible measure there are __lots__ of collisions, but removing them makes no difference to the pressure-volume product.

Second, whether an idealisation is large or small is relative to what you are trying to model. (I got this point from Michael Strevens as well.) If you’re trying to model the speed at which a gas will spread from an open container, you better include collisions in the model, because collisions make a __big__ difference to how fast the gas spreads. Friedman makes the same point by noting that air pressure makes a big difference to how fast a feather falls, and a very small difference to how fast a baseball falls from low altitude. Let’s note this as an extra point.

  • Whether an idealisation is large or small is relative to what you are trying to model.

All of that, I think, is basically right, though it’s best to bracket issues about whether the idealisations really are small in the intended case. Let’s assume for now that there are lots of nice models that idealise away from non-maximising behaviour, and that these models ‘work’ – they deliver surprising but well-confirmed predictions about economic phenomena. If so, I think the idealisations should be acceptable. The idealised models are very nice arguments that the existence of these departures from ‘perfect’ maximising behaviour is irrelevant to the phenomena being modelled.

It’s at this point that I think Friedman goes wrong. Friedman says that at this stage we have some prima facie evidence that other models using the same kinds of idealisations are also going to be correct. And this strikes me as entirely wrong. It’s wrong because it’s inconsistent with the view of the models as idealisations rather than as accurate descriptions of reality.

Note that the structure of argument Friedman is trying to use here is not always absurd. If evidence E supports hypothesis H, and the best model for hypothesis H includes assumption A as a positive claim about the world, then E is indirect evidence for A, and hence for other consequences of A. That’s what Friedman wants. He says that the success of hypotheses in other areas of economics provides indirect support for the hypothesis that there is less racial and religious discrimination when there is a more competitive labour market. I think the idea is that the other hypotheses show that people are, approximately, maximisers, so when trying to explain the distribution of discrimination we can assume they are approximately maximisers.

But it should now be clear that this doesn’t make sense. Remember that the very same idealisation can be a serious distortion in one context and an acceptable approximation in another. Without independent evidence, the fact that we can idealise away from non-maximising behaviour in one context is no reason at all to think we can do so when discussing, say, discrimination. If we take Friedman to be endorsing the claim that it’s OK to idealise away from irrelevant factors, then at this point he’s trying to defend the following argument.

bq. The fact that people aren’t perfect maximisers is irrelevant to (say) the probability that various options will be exercised.
Therefore, the fact that people aren’t perfect maximisers is irrelevant to (say) how much discrimination there is in various job markets.

And this doesn’t even look like a good argument.

The real methodological consequence of Friedman’s instrumentalism is that idealised models can be good ways to generate predictions about the economy, but every single prediction must be tested anew, because these models have little or no evidential value on their own. This conclusion might well be __true__, but I don’t think it’s one Friedman would want to endorse. But I think it’s what follows inevitably from his methodological views, at least on their most charitable interpretation.

fn1. Life’s too short to read all the commentaries on Friedman’s paper, so this last claim is not especially well backed up.

fn2. Some of the views I’m relying on are not published, but most of the details can be gleaned from the closing pages of this paper of Michael’s.

{ 1 trackback }

catallaxy » Blog Archive » Try on some new stereotypes
08.01.05 at 8:44 am

{ 32 comments }

1

Carlos 03.03.04 at 2:25 am

The standard explanation for Boyle’s Law – that for a constant quantity of gas at constant temperature, pressure times volume is roughly constant – is a model in which, among other things, gas molecules never collide.

Um. The kinetic theory for an ideal gas most certainly has collisions: with the walls of an idealized container. This is how the theory explains pressure, as the summation of all the molecular collisions against it.

Checking Strevens’ original paper, he makes the same mistake:

It represents molecules as not colliding (since they are infinitely small), when in fact they do collide,

Basic kinetic theory makes the simplifying approximation that gas molecules do not collide with *each other*. Not that they never collide.

It’s probably just a whoopsie on Strevens’ part, and doesn’t change the thrust of his argument, as far as I can tell. But there was definitely a “yeesh” on my part when I read that sentence.

C.

2

Tom Slee 03.03.04 at 3:16 am

I have written something (unpublished) along the same lines — I even used the ideal gas as an illustration. You can actually pursue the analogy a little further.

Most accurate theories of gases use the ideal gas equation as a starting point, and add corrections essentially as an afterthought to account for its inaccuracies. This approach is commonly called a “perturbation theory”, and it takes the form of a converging series.

But just because such series sometimes converge doesn’t mean they always converge. For example, if you want to understand liquids, you can’t start with the ideal gas law and correct. It just doesn’t work, because the starting point is qualitatively wrong. In liquids, the interactions become too important; they become crucial to the matter at hand. You can’t add them on as an afterthought.
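(To make that concrete: one standard example of such a series – my example, not one from the post – is the virial expansion, which corrects the ideal gas law term by term in the density.)

```latex
% The virial expansion: the ideal gas law plus a converging series of
% corrections, one term per power of the density.
\[
  \frac{P V_{m}}{R T}
  \;=\;
  1 \;+\; \frac{B(T)}{V_{m}} \;+\; \frac{C(T)}{V_{m}^{2}} \;+\; \cdots
\]
% V_m is the molar volume; B(T), C(T), ... are virial coefficients that
% encode two-body, three-body, ... interactions. For a dilute gas the
% corrections are small and the series is useful; at liquid densities the
% "corrections" are as large as the leading term, so this starting point
% fails -- which is the point made above.
```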

There is one place where I do differ from you, in emphasis at least. There are two assumptions in the usual “competitive market” view of the world – first, that people are utility maximizers, and second, that these utilities are independent. Once you have externalities, this second assumption falls apart. So, for example, my utility for acquiring a status good is affected by whether you have also acquired the good.

My own feeling is that it is this second assumption, the independence of utilities, that is where most extrapolations of market ideas to other areas fall down, rather than the utility-maximizing assumption.

3

Brian Weatherson 03.03.04 at 3:23 am

Carlos, right I should have said don’t collide *with each other*. Of course collisions with the container walls are *very important* in the model. My mistake.

4

someone 03.03.04 at 4:05 am

To Carlos: The term “collide” virtually always means that both objects involved in the collision are moving. You don’t say “the car collided with that barrier”…you say “the car _crashed_ into that barrier.” So your suggested correction is really overly pedantic and redundant.

5

Carlos 03.03.04 at 4:52 am

To Carlos: The term “collide” virtually always means that both objects involved in the collision are moving. You don’t say “the car collided with that barrier”…you say “the car crashed into that barrier.” So your suggested correction is really overly pedantic and redundant.

“How much force do we have to apply to balance the banging of the molecules? The piston receives from each collision a certain amount of momentum. […W]ith each collision we get a little more speed, and the speed thus accelerates. […] So we see that the force, which we already have said is the pressure times the area, is equal to the momentum per second delivered to the piston by the colliding molecules.” — Richard Feynman, the Red Book, vol. 1, 39-3.

So your suggested correction is really overly pedantic and redundant.

I love the smell of psychological projection in the evening.

C.

6

John Quiggin 03.03.04 at 5:14 am

In my view, the big problem with Friedman’s defence of the optimisation postulate is that it’s a totally inaccurate description of the actual status of this postulate in neoclassical (particularly Chicago) economics.

Given Friedman’s instrumentalist view, we should dump optimisation whenever another assumption gives us better predictions (Friedman doesn’t explicitly discuss parsimony, but he might want to say “another equally simple assumption”).

But this isn’t what economists do at all. Given a phenomenon that could easily be modelled by assuming suboptimal behavior, they will strain mightily to find an optimising explanation – the more recondite the explanation, the greater the professional status for finding it.

The actual status of the optimisation hypothesis in mainstream economics is much better explained by Lakatos’ core-periphery model of Scientific Research Programs than by Friedman’s instrumentalism.

This has changed a bit over the past two decades, with the rise of behavioural economics, non-expected utility and so on, but it still characterizes a lot of the profession.

7

Bob McGrew 03.03.04 at 8:23 am

Surely there is one way that we can apply the success of an idealization in one area to strengthen its application in the other without additional evidence: if we have an explanation for why the idealization works in the first area and that explanation still applies in the second area.

(In the examples that follow: I haven’t done statistical mechanics since undergrad, and I haven’t read the Friedman paper, so I may be butchering these examples. Nevertheless…)

For instance, we can justify the no-collisions assumption in our ideal gas model by noting that collisions don’t change the energy or momentum much, and, since they scatter the particles randomly, they don’t much affect the average number of particles which hit the sides during a unit time. Thus, we can use the no-collisions idealization whenever our results depend on averages of hits against a boundary, without additional evidence. (And, we notice without evidence that our gas diffusion is ruled out.)

In Friedman’s case, we can argue that (say) markets for goods will clear at an equilibrium even in the presence of suboptimal behavior because prices are set by the marginal trader, rather than by every trader. And we then argue that the marginal trader is rational because rational traders will do better and drive the others out of business.

Thus we could argue that the same justification for the idealization holds in markets-for-employees case as in the markets-for-goods case: that the marginal employer is rational because rational employers will do better and drive the others out of business for the same reason as above.

Now, I’m not sure that’s a good idealization that explains what is going on in the economic setting. But, if that idealization hypothesis has good predictive power, I think that would be a prima facie argument for using that hypothesis in other circumstances where its premises were true.

9

Barry 03.03.04 at 1:14 pm

John, a comment on the assumption of ‘optimization’ in economics:

I think that is what distinguishes economics from the other social sciences. I knew a woman in college whose first comment on many things was “that’s stupid”. We’d explain that many such things weren’t stupid, but had good reasons (although frequently evil reasons).

I’ve felt that that is what economics is defined as doing – never accepting the “that’s stupid” hypothesis, but always looking for some rational explanation.

That’s the power, but also a limitation.

10

nnyhav 03.03.04 at 1:52 pm

One can’t toss out all metonymy in modeling — descriptive is explanatory, and domain shifting in this case is just a variety of Peircean abduction. But I must say that I rather like the notion of turning the concept of marginal utility upon itself.

11

Brian Weatherson 03.03.04 at 2:15 pm

Bob, I agree the argument

bq. X is irrelevant to phenomena Y,
therefore, X is irrelevant to phenomena Z

is better when Y and Z are closely related. For instance, when Y and Z are both about collisions between gas particles and walls of a container. Or, perhaps, when Y and Z are both about particular features of one market.

But I don’t think Friedman is particularly careful about keeping to that limitation. It’s rather something like: X is irrelevant to lots of things, so it will probably be irrelevant to the next thing I come across. And that strikes me as very bad, unless we have independent reason to think X is non-existent. Since in this case X is ‘departures from perfect maximising behaviour’, that option is ruled out.

John, I agree Friedman’s discussion isn’t a very good justification for actual Chicago school practices. I had originally read it as an apologia for that kind of practice, and hence been tempted to dismiss it. What surprised me on rereading it was that it seemed to be (a) a somewhat plausible methodology and (b) one that didn’t function at all well as such an apologia.

12

JH Bogart 03.03.04 at 3:07 pm

Nancy Cartwright has extended and useful discussions of these issues in her work on physics, much better than Friedman.

13

Anno-nymous 03.03.04 at 3:42 pm

Brian said, “Carlos, right I should have said don’t collide with each other. Of course collisions with the container walls are very important in the model. My mistake.”

If I remember correctly from my PSAT booklet, you should have said “don’t collide with one another.”

It’s probably just a whoopsie on carlos’ part, and doesn’t change the thrust of his argument, as far as I can tell. But there was definitely a “yeesh” on my part when I read that sentence.

14

Jonathan wilde 03.03.04 at 4:40 pm

Although I agree with much of your criticism of economic models, I have a couple of disagreements on the analogy to Boyle’s Law:

– Boyle’s Law does not assume that no collisions occur between the gas particles. Rather, it assumes that

  • there are a very large number of particles in constant rapid motion that are very far apart relative to their size
  • that there are no forces of attraction or repulsion between the particles
  • that when collisions between particles occur, they are elastic collisions

– Although Boyle’s Law is pretty much useless for calculating actual numbers, i.e., “What is the volume of a gas when its pressure drops from X to Y?”, its strength is its demonstration that as the pressure of the gas increases, its volume must decrease, and vice versa.

– Thus, your statement of Boyle’s Law as “for a constant quantity of gas at constant temperature, pressure times volume is roughly constant” supports your point about the weakness of models, but restating it as “for a constant quantity of gas at constant temperature, volume and pressure are inversely related” provides much better support for the usefulness of Boyle’s Law as a model of general demonstration rather than accurate prediction.

15

magpie mackerel 03.03.04 at 5:20 pm

Thanks for the illuminating post, Mr. Weatherson.

There are some terrific teachers on this site.

very gratefully,

Maggie

17

Carlos 03.03.04 at 5:24 pm

Hey Brian (or Henry Farrell),

Could you send me the Message-ID and other header arcana for this ‘anonymous’ fellow’s comments? I’m wondering if it’s an old stalker or a new one.

TIA,
Carlos

18

Doug Turnbull 03.03.04 at 5:56 pm

The basic idea that models can’t include every factor, but this doesn’t mean they’re wrong, seems to me to be pretty well known and understood. You’d be hard pressed to find a physics class that didn’t use this fact at some point. It’s also why Taylor series expansions are so important.

From a totally different area, I spent last week taking a class discussing computer modeling, and the teacher enjoyed repeating a famous quote, “All models are wrong. Some are useful.” Which is a pithy way of saying the same thing.

19

pw 03.03.04 at 6:04 pm

And here I’d always thought that the problem with assuming that people optimize utility was that no one knows (except by convoluted inference) what a utility function looks like, so that optimization just becomes a tautology. That Friedman uses discrimination in employment as an example is, of course, particularly telling because both employers and customers are known a priori to have messy utility functions in that area.

The ideal-gas analogy is similarly telling, because the fundamental assumption that makes Boyle’s law work (that collisions between gas particles can be disregarded because they are elastic and isotropic) is one whose equivalent many economists would desperately like to be true in their field. The tendency to assume frictionless transactions (as opposed, say, to the experimental evidence that market institutions can make a significant difference in clearing prices) shows not merely that there is a preference for theory over experiment — which may be OK — but that there’s a preference for a particular, tractable kind of theory over experiment.

20

Antoni Jaume 03.03.04 at 7:22 pm

Carlos, allow me to disagree with your understanding of collision. You see, you and I collide in our interpretation. So when Brian reports “[…]gas molecules never collide. Now this is clearly an inaccurate model, since gas molecules collide all the time,[…]”, he is correct. Yes, you can also say that “molecules collide with other molecules” but it is redundant. Remember that collide comes from “com + laedere”, “strike together”, so it can be used just like converse, convene and some others that imply a plural subject.

DSW

21

anno-nymous 03.03.04 at 7:53 pm

Hey Brian (or Henry Farrell),
Could you send me the Message-ID and other header arcana for this ‘anonymous’
[sic] fellow’s comments? I’m wondering if it’s an old stalker or a new one.
TIA,
Carlos

Hey Brian (or Henry Farrell) — no need. I’m pretty sure I’m a “new one”.

22

Kenny Easwaran 03.04.04 at 12:26 am

Bob – I had a similar thought to the one you start off with, that departures from idealization can be considered irrelevant when we have an antecedent explanation for what circumstances will render them irrelevant. I was thinking that this pointed up a major difference between the kinetic theory of gases and the rational optimization theory of economics – in the first case we have fairly well-supported theories explaining the composition of matter out of molecules, and explaining the interactive forces between molecules. Thus, it is in principle possible to perform the calculations and show that certain terms representing interaction and intermolecular collision approximately drop out. In the economic case however, I’m not aware of any more detailed theory that explains why assumptions that economic agents are rational optimizers are good enough in the certain cases in which they are. Of course, this just means that we’re at a similar point in explaining economic behavior as Boyle was in explaining gaseous behavior, except that Boyle fortuitously had a model that worked better to predict the phenomena.

Also, one other point I didn’t understand in your comment – it seems to me that the diffusion of gases depends on the frequency of hits of gas against the boundary without additional evidence. That is, on the naive view I have right now, it seems that diffusion of gases depends merely on the frequency (and force) with which molecules pass through the boundary of their original location, just as pressure depends merely on the frequency (and force) with which they hit the boundary. I suppose in the diffusion case the boundary is constantly changing, but that doesn’t seem obviously relevant.

23

Jack 03.04.04 at 3:33 am

An echo of the poster who said “All models are wrong. Some are useful.”

The idea that people pursue maximal transactions is usually a good guess, but it isn’t always correct because their information is never perfect. As others pointed out with the gas example, it works when there are a lot of people and you can average what they do over a long time.

I’d compare it to stereotypes: they are useful right up until they aren’t. If you see a thuggish guy on a dark street at night you’ll cross the street, but if the same guy applies for a job at your company you change your model (here’s a thuggish guy looking for honest work, +1).

As a personal example, I consider anyone walking down the street with a newspaper harmless (not the leaflet-sized freebies). If I’d seen a police blotter with the description “6’1, reading the Times” I might change my tune, but until then …

24

Ric Locke 03.04.04 at 3:51 am

Kenny:

“… it seems that diffusion of gases depends merely on the frequency (and force) with which molecules pass through the boundary of their original location…”

No, you’re describing expansion into a vacuum. In diffusion, there are already molecules (of something else) there, and the whole thing gets a lot messier.

Regards,
Ric Locke

25

Sid 03.04.04 at 5:25 am

What a brilliant discussion you’ve got going on over here – never have I encountered people so intelligent (and interesting, to boot!).

All praise aside, just something I’d like to add to Mr. Jonathan Wilde’s post: he mentions that

“- Although Boyle’s Law is pretty much useless for calculating actual numbers, i.e., “What is the volume of a gas when its pressure drops from X to Y?”, its strength is its demonstration that as the pressure of the gas increases, its volume must decrease, and vice versa.”

From what I know, Boyle’s Law was one of the (three?) gas laws that led to the formulation of the ideal gas equation, PV=NRT, which gives a substantial amount of accuracy when it comes to quantifying changes in P or V (although it does make some assumptions about gases being ‘ideal’.)

26

Jonathan Wilde 03.04.04 at 2:56 pm

sid,

From what I know, Boyle’s Law was one of the (three?) gas laws that led to the formulation of the ideal gas equation, PV=NRT, which gives a substantial amount of accuracy when it comes to quantifying changes in P or V (although it does make some assumptions about gases being ‘ideal’.)

Yes, Boyle’s Law, Charles’s Law, and Gay-Lussac’s Law combine to form the Ideal Gas Equation. However, the numbers given by plugging into the equation are wildly inaccurate at conditions outside a small range. At high pressures, a calculation can be off by orders of magnitude. People have tried to come up with different, more accurate models using various “equations of state” and graphical methods, with varying accuracy.

The Ideal Gas Equation is not something I would rely on to make any calculations in industrial processes. At best, it provides a simple illustration of relationships between variables, i.e., that volume is inversely related to pressure, that pressure is directly related to the number of moles present, etc.

27

Roger Sweeny 03.04.04 at 7:43 pm

Or as economist Deirdre McCloskey said, “All models are metaphors, and all metaphors are lies.”

28

bill carone 03.04.04 at 8:27 pm

Brian,

“The fact that people aren’t perfect maximisers is irrelevant to (say) the probability that various options will be exercised.

Therefore, the fact that people aren’t perfect maximisers is irrelevant to (say) how much discrimination there is in various job markets.

And this doesn’t even look like a good argument.”

If we call the first sentence A and the second sentence B, you (sensibly, I think) say that A does not imply B.

But don’t you agree that

p(B | A) > p(B)?

In other words, if the maximizing assumption works in one economic situation, doesn’t that increase the probability that it will work in another economic situation?

If so, then since we don’t have just one example but many examples of the “non-maximizing fact” being irrelevant, it is highly probable that it will be irrelevant next time.

So I disagree with your statement:

“every single prediction must be tested anew, because these models have little or no evidential value on their own.”

since simplified models can have quite a bit of “evidential value on their own” if they have worked well (or well enough) many times in the past, right?

Also,

“It’s rather something like X is irrelevant to lots of things, so it will probably be irrelevant to the next thing I come across. And that strikes me as very bad…”

The fact that it was irrelevant in past situations _does_ provide evidence that it will be irrelevant in the future. It isn’t perfect; you can make logical or scientific arguments why it will be very relevant in the next case, but you do need to make those arguments. Otherwise it seems quite sensible to assume irrelevance.

29

Brian Weatherson 03.04.04 at 9:36 pm

Bill, I don’t think in this case P(B|A) > P(B). I think what’s happening is the following. Let C be the hypothesis that for *all* intents and purposes, people are ideally rational. I think A ‘boosts’ the probability of C, and that in turn might boost the probability of B. The problem is that we’ve got independent evidence that the probability of C is close enough to 0 that it’s not worth bothering about, and that screens off the evidentiary value of A to B.
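(To spell out the screening-off point slightly more formally – this is my own gloss, on the assumption that A bears on B only via C:)

```latex
% Assume A is relevant to B only via C, i.e. P(B|A,C) = P(B|C) and
% P(B|A,not-C) = P(B|not-C). Expanding P(B|A) and P(B) over C and not-C:
\[
  P(B \mid A) - P(B)
  \;=\;
  \bigl[\, P(B \mid C) - P(B \mid \neg C) \,\bigr]
  \bigl[\, P(C \mid A) - P(C) \,\bigr].
\]
% If independent evidence keeps both P(C) and P(C|A) close to 0, the second
% factor is negligible, so A gives B essentially no boost.
```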

By the way, I don’t say models don’t have predictive value. A good model (like say Akerlof’s model for used cars) might be well-confirmed and quite reliable. I just don’t think it alone provides any reason to trust *other* models.

30

bill carone 03.04.04 at 10:32 pm

Brian,

“Bill, I don’t think in this case P(B|A) > P(B). I think what’s happening is the following. Let C be the hypothesis that for all intents and purposes, people are ideally rational…”

I was thinking along these lines, but using not C but C’: “Even though people are actually not ideally rational, this fact is usually irrelevant.”

I would want to know your evidence that p(C’) = 0; it is different from the evidence against C, which is easy to demolish (one good counterexample would do it, right?).

“I don’t say models don’t have predictive value. A good model (like say Akerlof’s model for used cars) might be well-confirmed and quite reliable. I just don’t think it alone provides any reason to trust other models.”

Right, and I am arguing that confirmation of one model does provide some reason to trust other models.

It boosts the probability of C’, which boosts the probability that the “non-maximizing” fact is irrelevant in the next situation.

It doesn’t provide the same level of support as a well-confirmed scientific study, but it does provide some support, and lots of examples provide lots of support.

For example, after many economic situations have been studied, I might have quite a bit of evidence for something like the following: “For an economic situation, before we do a scientific study, you should assign an 85% chance that the rationality assumption will give a good-enough answer.” (This is a more precise version of C’ above.) Can you imagine having lots of evidence for this proposition? Or perhaps I am wrong here?

Then, when I make my next decision, I should assign an 85% chance to the rationality assumption’s working, and a 15% chance to its failing. Then I can decide, based on the consequences of being right and wrong, whether to simply use the rationality assumption or to spend time and money doing a real study that gets the real answer.

31

Brian Weatherson 03.05.04 at 5:59 pm

Bill, there are a few reasons why I don’t think C’ will be helpful.

First, we’d need a lot more examples (I mean a *lot* more examples) of successful models before we’d say that 85% of cases can be modelled without accounting for departures from full maximisation.

Second, if we know 85% of cases can be so modelled, and we know which 85% they are, that doesn’t tell us a lot about the other 15%.

Third, even if we know 85% of cases can be so modelled, if we know something about the other 15%, that extra knowledge will probably be more helpful. Again, look at the example Friedman provides. Is it really plausible to infer that because irrational behaviour washes out in financial markets, it also washes out when it comes to racial or religious discrimination?

Finally, the cases being covered, especially when we get into Chicago school style extensions of economics, just don’t look like natural kinds. And induction can only be applied to natural kinds unless we want to fall into paradox.

32

bill carone 03.05.04 at 9:33 pm

Brian,

Forgive me, but I don’t see how your first three points are relevant. I am arguing that if the rationality assumption works well enough in one model, that provides evidence that such an assumption will work in the next model. You are arguing the reverse. Correct?

“First, we’d need a lot more examples (I mean a lot more examples) of successful models before we’d say that 85% of cases can be modelled without accounting for departures from full maximisation.”

True (although I think you and I have different views on how many “success” stories there are), but irrelevant; a new success would still move us towards the 100% point, while a new failure would move us towards the 0% point. Therefore, a new success increases our probability for the next success, and a new failure decreases it.

Imagine a set of propositions, C0 to C100, where

Cx = “Before we study an economic situation, we should assign x% chance that the rationality assumption will lead to good-enough predictions about the situation.”

We can then assign probabilities on each Cx. Each time a model based on rationality is confirmed, the probabilities of low Cx decrease and the probabilities of high Cx increase. Therefore, the probability that rationality will work in the next situation increases. (The inverse is also true; failures lead to lower probabilities).
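(A minimal sketch of the updating I have in mind, with a uniform prior over C0 to C100 and past models treated as independent success/failure observations – both simplifying assumptions on my part:)

```python
# Discrete Bayesian updating over the hypotheses C0..C100, where Cx says the
# rationality assumption gives a good-enough answer in x% of cases.
# Assumptions (mine): uniform prior, independent observations.

def update(prior, success):
    """Bayes-update P(Cx) after one success or failure of a rationality-based model."""
    likelihood = [x / 100 if success else 1 - x / 100 for x in range(101)]
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def chance_next_case_works(beliefs):
    """Predictive probability that the rationality assumption works next time."""
    return sum(p * x / 100 for x, p in enumerate(beliefs))

beliefs = [1 / 101] * 101                        # uniform prior over C0..C100
for worked in [True, True, True, True, False]:   # a purely hypothetical track record
    beliefs = update(beliefs, worked)

print(round(chance_next_case_works(beliefs), 2))
# Each confirmed model shifts weight toward high Cx and raises the predictive
# chance; each failure shifts it back down.
```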

“Second, if we know 85% of cases can be so modelled, and we know which 85% they are, that doesn’t tell us a lot about the other 15%.”

True but irrelevant. All I am saying is that we increase our probability of rationality working for the next case, nothing about what we should do if we find that rationality doesn’t work.

“Third, even if we know 85% of cases can be so modelled, if we know something about the other 15%, that extra knowledge will probably be more helpful.”

Again, true but irrelevant. The idea is that rationality is a _good-enough_ assumption, not that other assumptions can’t work marginally better.

What assumptions _should_ we use in our models? It depends on what we are using the models for, the costs of using different models, and the consequences of being right or wrong. Better modelling can lead to more precise results, but might not be worth it.

“Is it really plausible to infer that because irrational behaviour washes out in financial markets, it also washes out when it comes to racial or religious discrimination? ”

I have not been arguing about “washing out.” I have been arguing about “doesn’t make enough of a difference to worry about.” The first implies the second, but the second doesn’t imply the first.

As to the question, if we are discussing a model that predicts what people will do, then yes, it makes sense to me that if we show that irrationality can be ignored in one case, it provides evidence (not certainty) that it can be ignored in other cases.

You might argue that the reasons that irrationality wasn’t important in one case don’t apply in another particular case; however, you need to make those arguments. Without them, the success of one case increases the chance of success in the next case.

“Finally, the cases being covered, especially when we get into Chicago school style extensions of economics, just don’t look like natural kinds.”

Economics predicts human action. One assumption that helps model human action is rationality. By observing how closely reality matches the models, we can see what predictive power the models have.

I don’t see this as needing the philosophical heavy lifting of “natural kinds”. However, I am not a philosopher.

“And induction can only be applied to natural kinds unless we want to fall into paradox.”

How would you apply this to our current discussion? Can you produce a paradox?
