More bookblogging! It’s all economics here at CT these days, but normal programming will doubtless resume soon.
Most of what I’ve written in the book so far has been pretty easy. I’ve never believed the Efficient Markets Hypothesis or New Classical Macro and it’s easy enough to point out how the occurrence of a massive financial crisis leading to a prolonged macroeconomic crisis discredits them both.
I’m coming now to one of the most challenging sections of my book, where I look at the New Keynesian program (with which I have a lot of sympathy) and ask why New Keynesians (most obviously Ben Bernanke) didn’t, for the most part, see the crisis coming or offer much in response that would have been new to Keynes himself. Within the broad Keynesian camp, the people who foresaw some sort of crisis were the old-fashioned types, most notably Nouriel Roubini (and much less notably, me), who were concerned about trade imbalances, inadequate savings, and hypertrophic growth of the financial sector. Even this group didn’t foresee the way the crisis would actually develop, but that, I think, is asking too much – every crisis is different.
My answer, broadly speaking, is that the New Keynesians had plenty of useful insights, but that the conventions of micro-based macroeconomics prevented those insights from forming the basis of a progressive research program.
Comments will be appreciated even more than usual. I really want to get this right, or as close as possible.
New Keynesian macroeconomics
In the wake of their intellectual and political defeats in the 1970s, mainstream Keynesian economists conceded both the long-run validity of Friedman’s critique of the Phillips curve, and the need, as argued by Lucas, for rigorous microeconomic foundations. “New Keynesian economics” was their response to the demand, from monetarist and new classical critics, for the provision of a microeconomic foundation for Keynesian macroeconomics.
The research task was seen as one of identifying minimal deviations from the standard microeconomic assumptions which yield Keynesian macroeconomic conclusions, such as the possibility of significant welfare benefits from macroeconomic stabilization. A classic example was the ‘menu costs’ argument produced by George Akerlof, another Nobel Prize winner. Akerlof sought to motivate the wage and price “stickiness” that characterised new Keynesian models by arguing that, under conditions of imperfect competition, firms might gain relatively little from adjusting their prices even though the economy as a whole would benefit substantially.
The approach was applied, with some success, to a range of problems that had previously not been modelled formally, including many of the phenomena observed in the lead-up to the global financial crisis, such as asset price bubbles and financial instability generated by speculative ‘noise trading’.
A particularly important contribution was the idea of the financial accelerator, a rigorous version of ideas first put forward by Fisher and by Keynesians such as Harrod and Hicks. Fisher had shown how declining prices could increase the real value of debt, making previously profitable enterprises insolvent, and thereby exacerbating initial shocks. The Keynesians showed how a shock to demand would result in declining utilisation, meaning that firms could meet their production requirements without any additional investment. Thus the initial shock to demand would have an amplified effect on the demand for investment goods.
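Fisher’s debt-deflation mechanism can be illustrated with a minimal numerical sketch. The figures here ($100 of debt, $110 of revenue, a 15% deflation) are purely hypothetical, chosen only to show how a fall in the price level can bankrupt a firm whose real business is unchanged:

```python
# Hypothetical sketch of Fisher's debt-deflation mechanism.
# A firm owes a fixed nominal debt; when the price level falls,
# its nominal revenues fall with it, but the debt does not.

def real_debt_burden(nominal_debt, price_level):
    """Real value of a fixed nominal debt at a given price level."""
    return nominal_debt / price_level

# At price level 1.0 the firm is solvent: $110 of revenue
# comfortably covers $100 of debt.
debt, revenue = 100.0, 110.0

# A 15% deflation leaves the debt fixed at $100 but cuts nominal
# revenue to $93.50 -- a previously profitable firm is now
# insolvent, even though nothing "real" about its business changed.
price_level = 0.85
deflated_revenue = revenue * price_level
```

The amplification comes from the asymmetry: debts are contracted in nominal terms, so deflation silently redistributes from debtors to creditors and turns solvent balance sheets insolvent.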
In a 1989 paper, Ben Bernanke and Mark Gertler integrated these ideas with developments in the theory of asymmetric information to produce a rigorous model of the financial accelerator.
It would seem, then, that New Keynesian economists should have been well equipped to challenge the triumphalism that prevailed during the Great Moderation. With the explosion in financial sector activity, the development of massive international and domestic imbalances and the near-miss of the dotcom boom and slump as evidence, New Keynesian analysis should surely have suggested that the global and US economies were in a perilous state.
Yet with few exceptions, New Keynesians went along with the prevailing mood of optimism. Most strikingly, the leading New Keynesian, Ben Bernanke, became the anointed heir of the libertarian Alan Greenspan as Chairman of the US Federal Reserve. And as we have already seen, it was Bernanke who did more than anyone else to popularise the idea of the Great Moderation.
Olivier Blanchard summarises the standard New Keynesian approach (which converged, over time, with the RBC approach) using the following, literally poetic, metaphor:
A macroeconomic article today often follows strict, haiku-like, rules: It starts from a general equilibrium structure, in which individuals maximize the expected present value of utility, firms maximize their value, and markets clear. Then, it introduces a twist, be it an imperfection or the closing of a particular set of markets, and works out the general equilibrium implications. It then performs a numerical simulation, based on calibration, showing that the model performs well. It ends with a welfare assessment.
Blanchard’s description brings out the central role of microeconomic foundations in the New Keynesian framework, and illustrates both the strengths and the weaknesses of the approach. On the one hand, as we have seen, New Keynesians were able to model a wide range of economic phenomena, such as bubbles and …, while remaining within the classical general equilibrium framework. On the other hand, precisely because the analysis remained within the general equilibrium framework, it did not allow for the possibility of a breakdown of classical equilibrium, which was the very possibility Keynes had sought to capture in his general theory.
The requirement to stay within a step or two of the standard general equilibrium solution yielded obvious benefits in terms of tractability. Since the properties of general equilibrium solutions have been analysed in detail for decades, modeling “general equilibrium with a twist” is a problem of exactly the right degree of difficulty for academic economists – hard enough to require, and exhibit, the skills valued by the profession, but not so hard as to make the problem insoluble, or soluble only with the abandonment of the underlying framework of individual maximization.
A critical implication of Blanchard’s haiku metaphor is that the New Keynesian program was not truly progressive. A study of some new problem, such as the incentive effects of executive pay, would typically, as Blanchard indicates, begin with the standard general equilibrium model, disregarding the modifications made to that model in previous work examining other ways in which the real economy deviated from the modelled ideal. A genuinely cumulative approach, by contrast, would imply a model that moved steadily further and further away from the standard GE framework, and therefore became less and less amenable to the standard techniques of analysis associated with that model.
This, I think, is what Paul Krugman had in mind when he suggested in his essay ‘How Did Economists Get It So Wrong?’ that economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth. The work described by Blanchard was beautiful (at least to economists) and illuminated some aspects of the truth, but beauty came first. An approach based on putting truth first would have incorporated multiple deviations from the standard general equilibrium model and then attempted to work out how they fitted together. In many cases, the only way of doing this would probably be to incorporate ad hoc descriptions of aggregate relationships that fitted observed outcomes, even if those descriptions could not be related directly to individual optimization.
New Keynesian macroeconomics, of the kind described by Blanchard, was ideally suited to the theoretical, ideological and policy needs of the Great Moderation. On the one hand, unlike New Classical theory, it justified a significant role for monetary policy, a conclusion in line with the actual policy practice of the period. On the other hand, by remaining within the general equilibrium framework, the New Keynesian school implicitly supported the central empirical inference drawn from the observed decline in volatility, namely that major macroeconomic fluctuations were a thing of the past.
Eventually, the New Keynesian and RBC streams of micro-based macroeconomics began to merge. The repeated empirical failures of standard RBC models led many users of the empirical techniques pioneered by Prescott and Lucas to incorporate non-classical features like monopoly and information asymmetries. These “RBC-lite” economists sought, like the purists, to produce calibrated dynamic models that matched the “stylised facts” of observed business cycles, but quietly abandoned the goal of explaining recessions and depressions as optimal adjustments to (largely hypothetical) technological shocks.
This stream of RBC literature converged with New Keynesianism, which also used non-classical tweaks to standard general equilibrium assumptions with the aim of fitting the macro data.
The resulting merger produced a common approach with the unwieldy title of Dynamic Stochastic General Equilibrium (DSGE) Modelling. Although there are a variety of DSGE models, they share some family features. As the “General Equilibrium” part of the name indicates, they take as their starting point the general equilibrium models developed in the 1950s, by Kenneth Arrow and Gerard Debreu, which showed how an equilibrium set of prices could be derived from the interaction of households, rationally optimising their work, leisure and consumption choices, and firms, maximizing their profits in competitive markets. Commonly, though not invariably, it was assumed that everyone in the economy had the same preferences, and the same relative endowments of capital, labour skills and so on, with the implication that it was sufficient to model the decisions of a single ‘representative agent’.
The classic general equilibrium analysis of Arrow and Debreu dealt with the (admittedly unrealistic) case where there existed complete, perfectly competitive markets for every possible asset and commodity, including ‘state-contingent’ financial assets which allow agents to insure against, or bet on, every possible state of the aggregate economy. In such a model, as in the early RBC models, recessions are effectively impossible – any variation in aggregate output and employment is simply an optimal response to changes in technology, preferences or external world markets. DSGE models modified these assumptions by allowing for the possibility that wages and prices might be slow to adjust, by allowing for imbalances between supply and demand, and so on, thereby enabling them to reproduce obvious features of the real world, such as recessions.
But, given the requirements for rigorous microeconomic foundations, this process could only be taken a limited distance. It was intellectually challenging, but appropriate within the rules of the game, to model individuals who were not perfectly rational, and markets that were incomplete or imperfectly competitive. The equilibrium conditions derived from these modifications could be compared to those derived from the benchmark case of perfectly competitive general equilibrium.
But such approaches don’t allow us to consider a world where people display multiple and substantial violations of the rationality assumptions of microeconomic theory and where markets depend not only on prices, preferences and profits but on complicated and poorly understood phenomena like trust and perceived fairness. As Akerlof and Shiller observe
It was still possible to discern the intellectual origins of alternative DSGE models in the New Keynesian or RBC schools. Modellers with their roots in the RBC school typically incorporated just enough deviations from competitive optimality to match the characteristics of the macroeconomic data series they were modelling, and preferred to focus on deviations due to government intervention rather than to monopoly power or other forms of market imperfection. New Keynesian modellers focused more attention on imperfect competition, and were keen to stress the potential for the macro-economy to deviate from the optimal level of employment in the short term, and the possibility that an active monetary policy could produce improved outcomes.
Because New Keynesians were (and still are) concentrated in economics departments on the East and West Coasts of the United States (Harvard, …), while their intellectual opponents are most prominent in the lakeside environments of Chicago and Minnesota, the terms ‘saltwater’ and ‘freshwater’ schools have been coined (apparently by Robert Hall) to describe the two positions. But this terminology suggests a deeper divide between competing schools of thought than actually prevailed during the false calm of the Great Moderation. The differences between the two groups were less prominent, in public at least, than their points of agreement. The freshwater school had backed away from extreme New Classical views after the failures of the early 1980s, while the distance from traditional Keynesian views to the New Keynesian position was summed up by Lawrence Summers’ observation that ‘We are now all Friedmanites’. And even these limited differences were tending to blur over time, with many macroeconomists, particularly those involved in formulating and implementing policy, shifting to an in-between position that might best be described as ‘brackish’.
However, the similarities outweighed the differences. Whether New Keynesian or RBC in their origins, DSGE models incorporated the assumption, derived from Friedman, that there is no long-run trade-off between unemployment and inflation, that is, that the long-run Phillips curve is vertical. And nearly all allowed for some trade-off in the short run, and therefore for some potential role for macroeconomic policy.
The differences between saltwater and freshwater DSGE models may be discussed in terms of the venerable Keynesian idea of the multiplier, that is, the ratio of the final change in output arising from a fiscal stimulus to the size of the initial stimulus. Old Keynesians had argued that the multiplier (as the name suggests) was greater than one, since the beneficiaries of government expenditure would increase their consumption of goods and services, leading to more workers being hired who in turn would increase their own consumption, and so on. The ‘policy ineffectiveness’ proposition of the New Classical school implied that the multiplier should be zero or even negative, because of the incentive-sapping effects of government spending and the taxes required to finance it. The DSGE modellers tended to split the difference.
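The old Keynesian rounds-of-spending story can be made concrete as a geometric series. This is a minimal sketch; the marginal propensity to consume of 0.5 is an illustrative assumption, not a figure from the text:

```python
# Sketch of the old Keynesian multiplier as a geometric series.
# Each dollar of stimulus is re-spent at rate `mpc`, the marginal
# propensity to consume (an illustrative parameter).

def keynesian_multiplier(mpc, rounds=200):
    """Sum successive rounds of induced spending per $1 of stimulus."""
    total, spend = 0.0, 1.0
    for _ in range(rounds):
        total += spend
        spend *= mpc
    return total

# With mpc = 0.5 the series 1 + 0.5 + 0.25 + ... converges to
# 1 / (1 - 0.5) = 2: a multiplier greater than one, as the
# old Keynesians argued.
```

The closed form 1/(1 − mpc) shows why the old Keynesian multiplier exceeds one whenever any fraction of the stimulus is re-spent, while the New Classical view amounts to assuming the induced rounds are offset entirely.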
Although the issue was rarely discussed explicitly, the DSGE models favored by the New Keynesian school typically implied values for the multiplier that were close to 1, while those derived from RBC approaches suggested values that were positive, but closer to zero. Given the mild volatility of the Great Moderation, such models yielded no justification for active use of fiscal policy, and good reasons for governments to maintain budget balance as far as possible. New Keynesians also typically rejected active use of fiscal policy, relying exclusively on monetary policy to manage the economy. But, compared to their freshwater colleagues, they had a more positive view of the ‘automatic stabilisers’. Since tax revenues tend to fall, and welfare expenditures to rise, during recessions, a government that maintains a balanced budget on average will tend to run deficits during recessions and surpluses during booms. On a Keynesian analysis, the fact that government spending net of taxes is countercyclical (that is, it moves in the opposite direction to fluctuations in the rate of economic growth) tends to stabilise the economy. Vast numbers of journal pages were devoted to refining these different viewpoints, and to defending one or the other. But in practical policy terms, the differences were marginal.
Reflecting their origins in the 1990s, most analysis using DSGE models assumed that macroeconomic management was the province of central banks using interest rate policy (typically the setting of the rate at which the central bank would lend to commercial banks) as their sole management instrument. The central bank was modelled as following either an inflation target (the announced policy of most central banks) or a “Taylor rule”, in which the aim is to stabilise both GDP growth and inflation.
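A Taylor rule of the kind DSGE central banks are modelled as following can be sketched in a few lines. The coefficients of 0.5 and the 2% targets are Taylor’s original illustrative values, assumed here for the example rather than taken from the text:

```python
# Sketch of a Taylor rule: the central bank sets its policy rate
# in response to inflation and the output gap. All parameter
# values are Taylor's illustrative defaults, not from the text.

def taylor_rule(inflation, output_gap,
                r_star=2.0, inflation_target=2.0,
                a_pi=0.5, a_y=0.5):
    """Nominal policy rate (%) responding to inflation and output gap."""
    return (r_star + inflation
            + a_pi * (inflation - inflation_target)
            + a_y * output_gap)

# At target inflation (2%) with a closed output gap, the rule
# prescribes the neutral nominal rate of 4% (2% real + 2% inflation).
```

Because the coefficient on inflation exceeds zero, the rule raises the *real* rate when inflation rises (the ‘Taylor principle’), which is what stabilises both inflation and output in these models.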
On the whole, while central banks showed some interest in DSGE models, and invoked their findings to provide a theoretical basis for their operations, they made little use of them in the actual operations of economic management. For practical purposes, most central banks continued to rely on older-style macroeconomic models, with less appealing theoretical characteristics, but better predictive performance. However, neither DSGE models nor their older counterparts proved to be of much use in predicting the crisis that overwhelmed the global economy in 2008, or in guiding the debate about how to respond.