Bookblogging: Implications of micro-based macro

by John Q on October 20, 2009

Another section from my book-in-progress. The book-so-far can be viewed here.

Refuted doctrines


The implications of the micro-foundations approach to macroeconomics can be assessed in the light of the introduction to Paul Krugman’s essay ‘How Did Economists Get it So Wrong’.

It’s hard to believe now, but not long ago economists were congratulating themselves over the success of their field. Those successes — or so they believed — were both theoretical and practical, leading to a golden era for the profession. On the theoretical side, they thought that they had resolved their internal disputes. Thus, in a 2008 paper titled “The State of Macro” (that is, macroeconomics, the study of big-picture issues like recessions), Olivier Blanchard of M.I.T., now the chief economist at the International Monetary Fund, declared that “the state of macro is good.” The battles of yesteryear, he said, were over, and there had been a “broad convergence of vision.” And in the real world, economists believed they had things under control: the “central problem of depression-prevention has been solved,” declared Robert Lucas of the University of Chicago in his 2003 presidential address to the American Economic Association.

These conclusions did not emerge as specific implications of any particular model. Rather, the micro-foundations approach, at least in its current form, can only work well under specific assumptions and conditions. The crucial assumption is that of the standard microeconomic model, in which market outcomes are driven by the optimizing decisions of rational individuals (in typical macroeconomic models, those of a single representative rational individual).


Rationality everywhere

The incorporation of rational expectations into micro-based macroeconomic models went hand in hand with the acceptance of increasingly strong forms of the efficient markets hypothesis, and both fitted naturally with the rise of market liberalism. In competitive markets where participants are perfectly rational and display high levels of foresight, it is very hard to see any beneficial role for governments. Even if governments happen to be better informed than market participants, they should not, in a world of perfect rationality, act on that information. Rather, they should release the information to the public, allowing market participants to combine this public information with their own private information, and secure better outcomes than would be possible from government action.

Of course, many macroeconomists, and particularly those of the New Keynesian school, explicitly rejected the ultra-rational assumptions that produced such implausible conclusions as Barro’s Ricardian equivalence. One of the standard moves in the construction of Blanchard’s haikus was to allow the ‘representative individual’ to deviate in some small way from perfect rationality.

A common example is the assumption of ‘hyperbolic’ discounting. The idea is that in assessing a choice between getting some benefit immediately, or at some point in the relatively near future, say, in a month’s time, people display a lot of impatience. They are willing to offer a big discount to get the benefit now rather than wait to get something better. But, if they are asked about two points in the future that are a month apart, they will offer only a small discount. Such preferences, if maintained over time, are not consistent with standard rationality. The choices people make now regarding options in the medium-term future are not the same as the choices they would make if they waited until the opportunity for immediate consumption was actually available. A paper by Liam Graham and Dennis Snower showed that the combination of staggered nominal contracts with hyperbolic discounting leads to inflation having significant long-run effects on real variables, that is, to the existence of a Phillips curve relationship that might persist into the long term.
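The preference reversal described above is easy to see numerically. The sketch below uses the beta-delta (‘quasi-hyperbolic’) approximation common in this literature; the particular values beta = 0.7 and delta = 0.99 per month are illustrative assumptions, not taken from the text.

```python
# Quasi-hyperbolic (beta-delta) discounting: an immediate payoff is taken
# at face value, while every future payoff is scaled by an extra factor
# beta on top of the usual per-period discount delta.
# beta and delta values are illustrative, not from the text.

def present_value(amount, months_away, beta=0.7, delta=0.99):
    """Discounted value of `amount` received `months_away` months from now."""
    if months_away == 0:
        return amount
    return beta * (delta ** months_away) * amount

# Choice 1: $100 now vs $110 in a month. The immediate option wins,
# because the future payoff carries the extra beta penalty.
now_vs_soon = (present_value(100, 0), present_value(110, 1))

# Choice 2: the same pair of options shifted a year into the future:
# $100 in 12 months vs $110 in 13 months. Now the larger, later option
# wins -- the preference reversal that standard discounting rules out.
later_pair = (present_value(100, 12), present_value(110, 13))

print(now_vs_soon)   # first (immediate) value is larger
print(later_pair)    # second (later, larger) value is larger
```

If the chooser waited until month 12 actually arrived, choice 2 would collapse back into choice 1, and the earlier plan to wait for the $110 would be abandoned; that is the time-inconsistency at issue.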

Papers in this tradition showed that small deviations from rationality can sometimes have big effects on economic outcomes. But they rarely have big implications for public policy. Rather, they point in the direction of the idea set out by Cass Sunstein and Richard Thaler in their recent book Nudge. Sunstein and Thaler argue that governments can sometimes exploit deviations from rationality by framing choices in ways that will ‘nudge’ people’s decisions in a socially desirable direction. George Lakoff, in Don’t Think of an Elephant, makes the same argument in a political context, suggesting that the Republican Party has had more success than would be expected based on underlying support for its policies because it has done a better job of ‘framing’ political issues. Rather than seeking a more rational debate, Lakoff argues, Democrats should respond in kind.


Fiscal and monetary policy

The theoretical complacency with which the DSGE school viewed the state of macroeconomic theory was matched by a similar complacency regarding macroeconomic policy. From the early 1990s to the panic of 2008, macroeconomic policy was, for all practical purposes, monetary policy or, more precisely, interest rate policy. The standard approach involved what is called a Taylor rule, after the economist John Taylor, later Under Secretary of the US Treasury for International Affairs in the George W. Bush Administration, who proposed it in 1993. Taylor presented his rule as a way of describing the actual behavior of central banks, but it soon came to be used as a normative guide to policy.

The idea of the Taylor rule was to set interest rates in such a way as to keep two variables, the inflation rate and the rate of growth of Gross Domestic Product, as close as possible to their target values. Typical targets might be an inflation rate of 2 to 3 per cent, and a real GDP growth rate in line with long-term growth in the labour force and labour productivity, say 3 per cent for a developed country like the US.
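The rule itself fits in a few lines. One hedge: the 0.5 coefficients are Taylor's published 1993 values, but his original formulation responds to the output gap rather than the GDP growth rate, and the 2 per cent neutral real rate and inflation target below are conventional illustrative choices rather than values specified in the text.

```python
def taylor_rate(inflation, output_gap, neutral_real_rate=2.0,
                inflation_target=2.0, a_pi=0.5, a_y=0.5):
    """Nominal policy rate implied by a Taylor rule (all values in per cent).

    Coefficients of 0.5 on each gap are Taylor's original 1993 values;
    the 2 per cent neutral real rate and inflation target are the
    conventional illustrative choices, not prescribed by the text.
    """
    return (neutral_real_rate + inflation
            + a_pi * (inflation - inflation_target)
            + a_y * output_gap)

# Inflation on target, output at potential: rate = neutral + target = 4%.
print(taylor_rate(inflation=2.0, output_gap=0.0))   # 4.0

# Inflation a point above target with a 1% positive output gap:
# the rule calls for tightening.
print(taylor_rate(inflation=3.0, output_gap=1.0))   # 6.0
```

Note that the nominal rate rises more than one-for-one with inflation (the coefficient on inflation is 1 + a_pi), so the rule raises the real interest rate when inflation rises; this ‘Taylor principle’ is what makes the rule stabilising.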

Within this framework, the essential functions of macroeconomic theory are relatively simple. Complex macroeconomic models can be reduced to simple relationships between one policy instrument (interest rates) and two targets (inflation and growth). Since there is only one instrument and two target variables, it’s impossible to hit both targets exactly, so the models give rise to a trade-off. Using the single representative agent who typically inhabits a DSGE model, it’s possible to calculate the optimal trade-off, which can be expressed as the range of acceptable variation in inflation rates.

During the Great Moderation, all this seemed to work very well, to the extent that commentators spoke of a ‘Goldilocks economy’, neither too hot, nor too cold, but just right. Even with a tight target range for inflation, between 2 and 3 per cent per year, it seemed possible to stabilise growth and avoid all but the mildest recessions. In these circumstances, the comment of Robert Lucas that the “central problem of depression-prevention has been solved” seemed only reasonable.



Ceri B. 10.21.09 at 3:56 am

I like the additional details in examples – I agree with PGD in an earlier bookblogging thread about that. What’s a Phillips curve relationship?


Alex 10.21.09 at 2:24 pm

I think John covered it in the instalment about the postwar Keynesian consensus…but here goes.

A.W. Phillips, a New Zealand statistician who had built a hydraulic computer to simulate the economy, observed in 1958 that there seemed to be a stable empirical relationship between unemployment and inflation, graphed the data for the late 19th century, estimated the equation, and then found that it forecast, or more accurately hindcast, the actual values for 1913-1950 very well.

Further, the curve he derived predicted that a moderate inflation rate would be consistent with full employment. This is the Phillips curve – it suggests that there is a useful tradeoff between unemployment and inflation, and it became a standard tool of economic policy in a similar way to the Taylor rule much later. If inflation starts to rise, this implies you’re at the far end of the curve, where it steepens fast as the economy reaches the long-run supply constraint; you need to deflate. If unemployment is rising, there is underutilised capacity; you need to reflate.
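Phillips’ fitted relationship is compact enough to reproduce. The coefficients below are the commonly quoted values from his 1958 paper (w + 0.900 = 9.638 u^-1.394, with money-wage inflation w and unemployment u both in per cent); treat them as illustrative of the curve’s shape rather than authoritative.

```python
def phillips_wage_inflation(u):
    """Phillips' 1958 fitted curve: annual % change in money wages
    as a function of the unemployment rate u (in per cent).
    Coefficients are the commonly quoted values from the original paper."""
    return -0.900 + 9.638 * u ** -1.394

# The curve steepens sharply as unemployment falls toward the
# economy's supply constraint, and flattens out at high unemployment:
for u in (1.0, 2.0, 5.0, 10.0):
    print(u, round(phillips_wage_inflation(u), 2))
```

Running this shows wage inflation crossing roughly zero at an unemployment rate in the region of 5 to 6 per cent, with steeply rising inflation below about 2 per cent, which is the ‘useful tradeoff’ policymakers read into the curve.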

In the 1970s, the relationship seemed to break down, or alternatively, the curve shifted dramatically towards the top right hand corner of the chart. This ushered in the rational expectations theorists, monetarism, new Keynesians, etc etc.

Oddly enough, the Taylor rule bears more than a passing similarity to a sort of Phillips curve for central bankers rather than finance ministries.



Chris 10.21.09 at 3:57 pm

They are willing to offer a big discount to get the benefit now rather than wait to get something better. But, if they are asked about two points in the future that are a month apart, they will offer only a small benefit.

Doesn’t this reflect the fact that many risks of future noncollectability are unrelated to the time span, or related to it in ways more complex than a simple exponential?

You might find out later that the person you’re talking to today is a scammer, not really associated with the organization he claims. Or the organization itself might renege on the promise (and you probably have no enforceable contract, let alone one that is economically practical to enforce with the small prize amounts offered in these studies; do they even offer to give the subject written proof of entitlement to the future amount?) Or you might be unable to locate them.

The risk that the organization will be bankrupt, disbanded etc. *is* related to the time lapse, but not necessarily as a simple exponential. An organization that has lasted a year is highly likely to last another week, but you can’t have nearly the same confidence about an organization that is only a week old. (Many businesses advertise their age for precisely this reason. I wonder if associating with an organization with a high reputation for stability, and pointing this out to the subjects, would alter the results of a future-discounting experiment?)

Even aside from those risks, any prospect of future collection imposes a cost of collection that you must pay in the future to receive the promised payment, which is not present if the experimenter just hands you the cash now. So how much is a promise to be paid $100 in a year *really* worth? I think the answer to that question doesn’t just depend on expected inflation.

These factors seem to me more than adequate to rationally explain the specially favored status of cash now, because unlike the experimenter, the subject does not simplify his model by assuming they don’t exist. Behavior that depends on the assumption that you don’t live in an idealized frictionless economy with no fraud or transaction costs isn’t irrational if you really *don’t* live in an idealized economy.


Patrick E. 10.22.09 at 6:45 am

One of the things that I’ve been wondering about in your discussion of micro-based macro discussions is the role the Sonnenschein-Mantel-Debreu Theorem plays in all of this. From what I understand of it, the whole micro-based macro project should have been dead in the water from day one, but I was wondering if I’m just misunderstanding its implications.


JoB 10.22.09 at 7:54 am

Hey, I was expecting Tversky & Kahneman and I get haikus & Lakoff??? &, to my utter surprise, the wikipedia entry for T&K does not refer to ‘bounded rationality’ and the wikipedia entry for ‘bounded rationality’ does not refer to T&K!

Is the more serious quantitative (micro?-)stuff on bounded rationality still to come, or are you going to stick with the semi-poetic (macro?-)stuff?

There should be some economics stuff based on T&K, isn’t there?


Kenny Easwaran 10.22.09 at 11:55 am

I also didn’t think that the Lakoff fit in so nicely. I can see the relevance of the idea, but the way it’s presented here it feels a bit more like it’s being shoe-horned in.

I guess the ideas of Sunstein and Lakoff do have the following relevant similarity. Traditionally, we assume that people are fully rational. Under this assumption, giving them more choices and more information will generally lead them to make the decisions that are best (at least with respect to their own interests, even if not with respect to societal interests). However, following the work of Tversky and Kahneman (and many other psychologists, and I suppose some cognitive linguists and behavioral economists) we know that people aren’t rational in the relevant way. By understanding the ways in which they fail to be rational, we can present choices or information to them in ways that maximize the likelihood that people will end up actually acting in ways that are best (with respect to their own, or societal, or our, interests).

To present the ideas in the way that makes them seem properly parallel, like that, it seems to me that you really end up focusing on the ways that they manipulate people’s irrationality. It doesn’t quite seem that this is the way the Keynesian wants to think about it though. Rather, the Keynesian seems to want to figure out what the structure of equilibria is under the assumption that people behave in the more descriptively accurate irrational way, and then figure out how interventions interact with the structural features. With Sunstein and Lakoff, it seems to be more about intervening on individuals, still treated as individuals, which ends up feeling more manipulative or paternalistic. With the Keynesian, it sounds like the idea is almost to ignore the individuals and intervene on the macro structure directly. We may still be manipulating the individuals, but we aren’t doing so in a way that intends to manipulate their individual decisions directly – instead we only care about the individual decisions because of the role they play in creating the larger economic structures.


John Quiggin 10.22.09 at 12:37 pm

Ceri B, this is a plausible explanation of some kinds of hyperbolic discounting, though there are others that are more difficult to reconcile with rationality, such as carrying credit card balances.

JoB, I certainly plan more on bounded rationality, and I certainly should cover T&K. There will be some of this in the “What next” section of this chapter (another hard bit) and I should go back to the EMH chapter and put a bit more in there.

Kenny, you got the point, even if it did seem a bit shoehorned. Any suggestions to fit it more neatly, or should I cut the reference to Lakoff – it was only an aside.


JoB 10.22.09 at 8:02 pm

Hey John, sorry to offer my unsolicited opinion again: I’d kick Lakoff & you can keep the quick reference example to ‘framing’. Enough studies in the more quantitative stuff on how people are influenced by how ‘risk’ is framed (probably some that are political and in fact: framing the upside of risk, American Dream and so forth, is pretty close to the original).

Kenny, I don’t see how you can educate human individual agents out of their bounded rationality. The preferences are real even if not rational; the only education is to get a critical attitude, and, on the macro-level to ensure that in society there is room for all things to be critically looked upon. Other than that, people will continue to gamble, in the full knowledge that they will, on average, lose.


John Quiggin 10.22.09 at 11:16 pm

You can’t educate people ‘out of’ bounded rationality because even the most sophisticated of us is still finite and therefore boundedly rational. Any coherent explanation of the recent financial meltdown must incorporate a large element of bounded rationality, both individual and in the collective behavior of financial institutions that are designed with the aim of being more rational and sophisticated than any individual.

That said, it is possible to learn (or be taught) to avoid particular errors in reasoning, especially if others are trying to exploit those errors. And, as Kenny says, it is possible to present information in ways that make it easier or harder for people to reach the ‘right’ decision; roughly the one they would choose if they were unboundedly rational and at least moderately altruistic/socially concerned.


Kenny Easwaran 10.23.09 at 4:24 am

I would say that unless you want to go into a more detailed discussion, it’s probably best to drop the Lakoff. I can definitely see the advantage of keeping the reference, in that lots of your target audience will have heard vague praise of his ideas from the lefty blogosphere over the last couple years. But it’s probably not worth including just as a name-dropping device unless you clarify these ideas, because I suspect most of the target audience will have heard the word “framing” but not actually have any idea of the extent to which Lakoff is critiquing the very notion of rational debate.


JoB 10.23.09 at 2:59 pm

John, yes, but I don’t think we should see ‘bounded rationality’ as inferior rationality. I am sure people need to be critically conscious of it but not to the extent of shunning it in principle. As said, there’s nothing in principle wrong with liking to gamble although we would not do it following pure rationality (I mean the everyday gambling). The issue I would have with keeping Lakoff is that he has things backward in a foggy kind of way – whilst the originals had it forward in the clearest possible way.

But let’s hop on to your next snippet, shall we ;-)


JoB 10.23.09 at 3:01 pm

Oops, moderation! Unless it’s catching bad English and mis-spelling now, I wouldn’t be able to tell why it (12) is being caught.


John Emerson 10.23.09 at 3:44 pm

JoB: Soci*lism? Crooked Timber is death on soci*lism.


JoB 10.23.09 at 4:14 pm

Nope, not that ‘un. Gambling maybe?


JoB 10.23.09 at 4:15 pm

Yup, g*mbling is out! Gamble on the other hand is good to go.


John Cisternino 10.23.09 at 5:54 pm

Papers in this tradition showed that small deviations from rationality can sometimes have big effects on economic outcomes. But they rarely have big implications for public policy. Rather, they point in the direction of the idea set out by Cass Sunstein and Richard Thaler in their recent book Nudge. Sunstein and Thaler argue that governments can sometimes exploit deviations from rationality by framing choices that will ‘nudge’ people’s decisions in a socially desirable direction.

Professor Quiggin, I’m curious if you can expand on the half paragraph about behavioral economics. I’m largely in agreement with your evaluation that recognition of bounded rationality has had only limited impact on public policy decisions, but it might be quite clarifying if you said a bit more about the limits of the “Nudge” approach.

FWIW, it’s not clear to me that behaviorally informed public policy must always be small and aimed at consumers. Once the idea is taken seriously, it can have quite serious regulatory implications – if in a certain sector bounded rationality makes it systematically more profitable for firms to exploit consumers than to satisfy their long term (or subjectively endorsed) preferences, then there’s a prima facie case for significant regulatory reshaping of those sectors of the market. Some of the proposals for the new Consumer Financial Protection Agency in the United States seem headed in this direction.

For a recent discussion of these broader implications of bounded rationality, see Michael S. Barr, Sendhil Mullainathan, and Eldar Shafir, “The Case For Behaviorally Informed Regulation,” at (full disclosure: I co-edited the volume that this paper appeared in, “New Perspectives on Regulation”).


John Quiggin 10.23.09 at 7:52 pm

@12 I absolutely agree. Bounded rationality is the only kind we have, so we’d better not shun it. And a fair bit of my own work has been to show that g*mbling, at least the lottery kind, can be rational (or maybe reasonable is a better word in this context).

@17 I agree with you regarding regulatory implications, what I meant to say was that it seemed unlikely to me that you would derive support for large-scale macro intervention, like the recent stimulus, from a standard GE model with a little bit of bounded rationality bolted on.


JoB 10.23.09 at 8:17 pm


My preferred term would be: human ;-)

Comments on this entry are closed.