Refuted economic doctrines #9: Real Business Cycle Theory

by John Q on June 25, 2009

Yet another in my series of articles on economic theories, empirical hypotheses and policy programs that have been refuted, or undermined, by the Global Financial Crisis. This one, on Real Business Cycle Theory, is a bit econowonkish, but I’m putting it up here because
(a) I hope some econowonks among the readers might find errors and correct me*
(b) Judging by some other recent commentary, RBC still attracts some interest.

* As indeed, they have. My suggestion of a link between calibration and GMM has been roundly refuted both here and at my blog. I can only say, it seemed like a good idea at the time. Thanks for the very useful comments on this point, and on RBC more generally.
Also, Lee Ohanian has pointed out that I misattribute to him and Cole the treatment of WPA workers as unemployed.

Real Business Cycle theory emerged in the early 1980s as a variant of New Classical Economics (of which more soon, I hope). The big papers were by Long & Plosser and Kydland & Prescott. The RBC literature introduced two big innovations, one theoretical and one technical.

In theoretical terms, relative to the standard New Classical story that the economy naturally moves rapidly back towards full employment equilibrium in response to any shock, RBC advocates recognised the existence of fluctuations in aggregate output and employment, but argued that such fluctuations represent a socially optimal equilibrium response to exogenous shocks such as changes in productivity, the terms of trade, or workers’ preference for leisure.

In technical terms, RBC models were typically estimated using a calibration procedure in which the parameters of the model were adjusted to give the best possible approximation to the observed mean and variance of relevant economic variables and the correlations between them (sometimes referred to, in the jargon, as ‘stylised facts’). This procedure, closely associated with a set of statistical techniques referred to as the Generalized Method of Moments, differs from the standard approach pioneered by the Cowles Commission, in which the parameters of a model are estimated on the basis of a criterion such as minimisation of the sum of squared errors (differences between predicted and observed values in a given data set).
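To make the contrast concrete, here is a minimal sketch in Python, using a toy AR(1) output process in place of a full RBC model (the process, parameter values and target moments are purely illustrative, not taken from any paper): the calibration-style procedure picks parameters so that simulated moments match the observed ‘stylised facts’, while the Cowles-style procedure picks them to minimise squared prediction errors on the data.

```python
# Toy illustration only: output deviations follow an AR(1),
#   y_t = rho * y_{t-1} + sigma * eps_t,   eps_t ~ N(0, 1),
# and we recover (rho, sigma) in two different ways.
import numpy as np
from scipy.optimize import minimize

def simulate(rho, sigma, n=2000, burn=200, seed=1):
    """Simulate the toy 'model economy' with a fixed draw of shocks."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=n + burn)
    y = np.zeros(n + burn)
    for t in range(1, n + burn):
        y[t] = rho * y[t - 1] + sigma * eps[t]
    return y[burn:]

# Pretend this series is the observed data.
data = simulate(rho=0.9, sigma=0.02, seed=0)

# 'Stylised facts' to be matched: the variance and first autocorrelation.
def moments(y):
    return np.array([np.var(y), np.corrcoef(y[1:], y[:-1])[0, 1]])

target = moments(data)

# Calibration-style: choose (rho, sigma) so simulated moments match the targets.
def moment_gap(theta):
    rho, sigma = theta
    rho = np.clip(rho, -0.99, 0.99)   # keep the toy process stationary
    return np.sum(((moments(simulate(rho, sigma)) - target) / target) ** 2)

calibrated = minimize(moment_gap, x0=[0.5, 0.05], method="Nelder-Mead").x

# Cowles-style: choose rho to minimise the sum of squared one-step-ahead
# errors (ordinary least squares), then back out sigma from the residuals.
rho_ols = np.sum(data[1:] * data[:-1]) / np.sum(data[:-1] ** 2)
sigma_ols = np.std(data[1:] - rho_ols * data[:-1])

print("moment-matching (rho, sigma):", np.round(calibrated, 3))
print("least-squares   (rho, sigma):", round(rho_ols, 3), round(sigma_ols, 3))
```

In this toy case the two approaches give roughly the same answer; the interesting question, taken up in the comments below, is what each procedure lets you say when the model does not fit.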

There’s no necessary link between these two innovations, and there gradually emerged two streams within the RBC literature. In one stream were those concerned to preserve the theoretical claim that the observed business cycle is an optimal outcome, even in the face of data that consistently suggested the opposite. In the other stream were those who adopted the modelling approach, but were willing to introduce non-classical tweaks to the model (imperfect information/competition and so on) to get a better fit to the stylised facts.

In a sense, the latter stream of RBC literature converged with New Keynesianism, which also uses non-classical tweaks to standard general equilibrium assumptions with the aim of fitting the macro data.

On the whole, though, the two groups tend to reflect their intellectual roots in one critical respect. Those in the RBC tradition tend to seek tweaks that are as modest as possible, and assume that any departures from the optimal business cycles described by the founders of the school are correspondingly modest. By contrast, the central point of New Keynesianism is that modest tweaks to the classical assumptions can produce aggregate outcomes that are far from optimal.

But as far as the global financial crisis is concerned, this difference is not all that material. As with the New Keynesians, RBC economists haven’t had much useful to offer, and this is, if anything, more true of those who’ve tweaked the classical assumptions than of the smaller group who’ve stayed true to the original program.

Coming back to the original program, the big exception that was conceded by most RBC theorists at the outset was the Great Depression. The implied RBC analysis, that the state of scientific knowledge had suddenly gone backwards by 30 per cent, or that workers throughout the world had suddenly succumbed to an epidemic of laziness, was the subject of some well-deserved derision from Keynesians. A couple of quotes I’ve pinched from a survey by Luca Pensieroso:

“the Great Depression [. . . ] remains a formidable barrier to a completely unbending application of the view that business cycles are all alike.” (Lucas (1980), pg. 273)

“If the Depression continues, in some respects, to defy explanation by existing economic analysis (as I believe it does), perhaps it is gradually succumbing under the Law of Large Numbers.” (Lucas (1980), pg. 284)

But towards the end of the 1990s, at a time when RBC theory had in any case lost the battle for general acceptance, some of the more hardline RBC advocates tried to tackle the Depression, albeit at the cost of ignoring its most salient features. First, they ignored the fact that the Depression was a global event, adopting a single-country focus on the US. Then, they downplayed the huge downturn in output between 1929 and 1933, focusing instead on the slowness of the subsequent recovery, which they blamed (surprise, surprise) on FDR and the New Deal. The key paper here is by Cole and Ohanian (following a line of argument suggested by Prescott). Cole and Ohanian seem to be the main source for Amity Shlaes’ Forgotten Man (I haven’t read it, will try to get to this). As readers may recall, Cole and Ohanian (and Shlaes) reclassify WPA workers as unemployed to make the post-1933 period look worse. They put particular emphasis on the National Industrial Recovery Act.

There are plenty of difficulties with the critique of the New Deal, and these have been argued at length by Eric Rauchway among others. But the real problem is that RBC can’t possibly explain the Depression as most economists understand it, that is, the crisis and collapse of the global economic system in the years after 1929. Instead, Cole and Ohanian want to change the subject. The whole exercise is rather like an account of the causes of WWII that starts at Yalta.

The failure of RBC is brought into sharp relief by the current global crisis. Not even the most ardent RBC supporter has been game to suggest that the crisis was caused by technological shocks or changes in tastes, and the suggestion that it was all the fault of a minor piece of anti-redlining law (the Community Reinvestment Act) has been abandoned as the speculative excesses and outright corruption of the central institutions of Wall Street have come to light.

Unlike New Keynesian macro, where some useful insights will be relevant to policy in future periods of relative stability, it’s hard to see anything being salvaged from the theoretical program of RBC. On the other hand, it has given us some potentially useful statistical techniques.

More importantly, it was a concern with the magnitudes of variances and correlations that led Prescott (along with Mehra) to observe the equity premium puzzle which, I think, will play a crucial role in the development of a more satisfactory macro theory.

{ 25 comments }

1

dsquared 06.25.09 at 11:44 am

Just two comments:

1) Is calibration really all that closely related to GMM? GMM in most econometric applications is basically used as an estimating technique which will give you the same answer as the maximum likelihood estimate in a lot of cases. Calibration’s a fundamentally different approach – by doing a calibration rather than an estimate, you’re basically saying that you don’t really care about significance because you’ve got the One True Model and any departures from it are the fault of reality. I agree that a lot of the same people used them and there is a certain Wittgensteinian family resemblance, but the splitting of the two streams actually IMO represents a quite fundamental philosophical difference.

2) IIRC, with respect to the Great Depression, Prescott at least did actually bite the bullet and write papers in which he demonstrated to his own satisfaction that it was a decade-long shirk.

2

John Quiggin 06.25.09 at 11:50 am

I’m not sure about the calibration/GMM point. They seem to me to go together in the sense that you pick your parameters by the requirement that the model should produce (or get as close as possible to) some particular outputs, most of which are moments/comoments. But this idea only occurred to me recently, and I’ve only tried it out on a couple of people, so more comments would be great.

On (2), a reference would be marvellous. You don’t want to force me to resort to Google do you?

3

dsquared 06.25.09 at 11:52 am

The key paper here is by Cole and Ohanian (following a line of argument suggested by Prescott). Cole and Ohanian seem to be the main source for Amity Shlaes’ Forgotten Man (I haven’t read it, will try to get to this).

Could I implore you not to? Myself and Jacob Levy are trying to push back against this destructive social convention of an implied duty to read worthless books.

4

dsquared 06.25.09 at 12:09 pm

My point on 1) was only that GMM can reject a model but calibration won’t. You do calibration when your model doesn’t fit the data but you don’t care. (Sargent basically admits as much in this interview).

This is my ref on 2; I think it’s a fair summary. Also note that Prescott believes that on the eve of the 1929 crash, stocks were undervalued.

5

John Quiggin 06.25.09 at 12:22 pm

I wasn’t actually going to read it, just check the index in Google books :-)

6

Barry 06.25.09 at 1:49 pm

dsquared, after a quick skim, I have to disagree with your example from Prescott supporting his belief “that it was a decade-long shirk”. That’s mainly because I can’t find much of substance in that paper (again, on a quick skim). This paper reads like an introduction to an actual article, or a quickie pounded out to at least get a C in a class assignment. He *sorta* compares the USA in the 1930’s to Japan in the 1990’s and France in the 1930’s, but doesn’t do anything with it (note – France went through something called ‘World War I’; that really messes with a USA/France 1930’s comparison).

The title ‘Some Observations on the Great Depression’ is literally true.

Of course, given Shlaes’ work (read Chait’s review here), vacuousness is the norm for Great Depression revisionism. Probably like it is for intelligent design, and for the exact same reasons.

7

Walt 06.25.09 at 2:03 pm

dsquared is right. GMM is an ordinary statistical technique, and is closer to the Cowles Commission approach than it is to calibration. It was a new technique, one that’s good at testing RBC-type models, so in that sense it was a break from the older approaches, but it still fits comfortably in the classical statistical paradigm.

8

Donald A. Coffin 06.25.09 at 3:50 pm

Here’s more from Prescott (and Kehoe):

http://www.greatdepressionsbook.com/index.cfm (The book itself seems not to be on-line, but you can link to everyone’s datasets if you’re masochistic)

Peter Temin wrote a critique in the Journal of Economic Literature: “Real Business Cycle Views of the Great Depression and Recent Events: A Review of Timothy J. Kehoe and Edward C. Prescott’s Great Depressions of the Twentieth Century,” Journal of Economic Literature, 46(3), 669–84.

And their response to Temin
http://www.minneapolisfed.org/research/sr/SR418.pdf

Temin’s rejoinder:
http://www.econ.umn.edu/~tkehoe/papers/TeminRejoinder.pdf

Another critique:
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WWT-4GC1RPF-1&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=939886540&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=7bcd17f8e21d7a236c0e4457b0b7d89f

More, I’m sure than you wanted to know.

And, having read the piece that dsquared links to above, I have to say that his “argument” more-or-less begins by assuming the conclusion. For example, he writes: “Given this change in normal market hours, growth theory predicts the behavior of investment and employment that occurred in the 1930s.” Well, if you assume that the change in hours worked is a change in equilibrium hours worked, you’ve just assumed away the depression as a macroeconomic event, haven’t you?

9

dsquared 06.25.09 at 4:21 pm

Well, if you assume that the change in hours worked is a change in equilibrium hours worked, you’ve just assumed away the depression as a macroeconomic event, haven’t you?

Well, either that, or you’ve assumed that the shift in hours worked was a shift in the equilibrium, which is equivalent to assuming it was a shirk.

10

brown 06.25.09 at 5:52 pm

The key difference between GMM (or minimum distance estimation in general) and pure calibration is that GMM/MDE produce standard errors for your parameter estimates. With calibration, you do not produce standard errors. You pick parameter values in a more or less ad hoc fashion, by appeal to various authorities, and then simulate data from the model using these preset values. Then you compare the means (or other functionals of the simulated distributions) to observed quantities in the real data. Standard errors don’t enter into it, so you don’t bound confidence intervals around your simulated means etc.

CMD (which has been around for ages) in this set-up would provide you with a criterion function for evaluating the difference between your model-simulated distribution and the empirical distribution. In fact, as Mark Watson showed, when you try to fit other moments of the empirical distribution (which were not used in the matching exercise), RBC gives you horrible results. The whole field of indirect inference in IO uses exactly this CMD approach to estimate parameters.

The calibrationists in fact _consciously_ resisted adopting a CMD approach for their model-fitting exercises. Their approach was: let’s pick reasonable values for the discount rate, the parameter governing risk aversion, depreciation of the capital stock and the variance matrix of the shocks (which were usually normally distributed); then let’s simulate the model using these fixed values, look at the mean or variance of output or its time path, and see how it compares to the corresponding statistics in the real data.

Clearly, there is scope to go one step further here, and the natural step is the CMD route (specify a metric for the difference between your simulated distribution and the real distribution, and then choose parameters that minimize this distance). But for the RBC types, this is _not_ the point (and I have never understood why, although I think there is work that does so).

In any case, you can see how there are no standard errors or confidence intervals in the calibration approach.
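To make that concrete, here is a bare-bones sketch in code (the moment function, parameter values and weighting matrix below are placeholders standing in for a simulated model, not taken from any actual paper):

```python
# Placeholder set-up: 'model_moments(theta)' stands in for simulating the model
# at parameters theta and computing, say, the variance and first autocorrelation
# of output (written here in closed form for an AR(1) stand-in).
import numpy as np
from scipy.optimize import minimize

def model_moments(theta):
    beta, sigma = theta                          # persistence, shock std dev
    return np.array([sigma ** 2 / (1 - beta ** 2), beta])

data_moments = np.array([0.0005, 0.90])          # made-up sample moments
W = np.eye(2)                                    # weighting matrix (identity)

# Calibration: preset 'reasonable' values and just compare model moments to data.
# No criterion is minimised, so no standard errors or confidence intervals.
calibrated = np.array([0.95, 0.007])
print("calibrated moments:", model_moments(calibrated))
print("data moments:      ", data_moments)

# CMD/GMM-style: choose theta to minimise the weighted distance between model
# and data moments. Because this is a formal estimator, asymptotic standard
# errors then follow from the usual sandwich formula (not computed here).
def criterion(theta):
    g = model_moments(theta) - data_moments
    return g @ W @ g

theta_hat = minimize(criterion, x0=calibrated, method="Nelder-Mead").x
print("CMD estimate (beta, sigma):", np.round(theta_hat, 4))
```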

11

jrc 06.25.09 at 6:22 pm

“On the other hand, it has given us some potentially useful statistical techniques.”

Meaning… If you want to make whatever model you decide to support look like the real world, you just need to throw a random variable in front of some actually endogenous variable and then fit its distribution with something that is estimated from economic history. These models only mimic the real world (and mimic it only in terms of similar means and variances – not actual behavior) because they are rigged to do just that, with all of the dynamics being imported exogenously. In general, these models seem to require continuous shocks to maintain dynamic movement…otherwise, an initial impulse fades away quite rapidly and things return to steady state very quickly. Essentially, to get any of the DSGE (dynamic stochastic gen-eq) models to give you those neat, RBC looking squiggly graphs, they need to be hit by randomly generated shocks every period. Clearly, every period, everyone just decides that they will aggregately be more productive, or like leisure more (complicated here, especially since these leisure choices really look more like unemployment…how many of us can actually say I’m going to work only 6.5 hours/day this quarter), or whatever each and every quarter.
The DSGE project started, from what I gather, as a response to a very smart epistemological argument. It argued that traditional econometric predictions could never be useful when something fundamentally changed…they could only predict based on the past and assume the present was the same. The critique is right, but the solution has been to put all the emphasis on the exogenous and, being economists, we don’t set about understanding them. (Note from quick edit: the emphasis in thinking about the models is on the endogenous, but the work is all being done by the exogenous RVs). We just call everything we don’t understand random and then backwardly engineer distributions so that, if they were actually completely random, our models could generate data similar to what we see. But this poses a whole different epistemological problem, namely, our models have nothing really interesting to say about why these fluctuations occur as they do…the response to impulses is understood from the micro groundings, but the impulses themselves (the really interesting economic question for macro) are completely disregarded. It’s not like anyone would actually believe that productivity is stochastically determined in any given period…what could that possibly mean? A machine rolls a die to decide how many ball bearings it will produce today (and even here, shouldn’t some LLN and CLT hold, smoothing things on aggregate)? It’s just that people seem to think….let’s develop these models, and then if someone actually ever figures out why productivity changes from period to period, maybe we can incorporate that, but that’s not really our problem. Our problem is finding a model that can make squiggly lines that look like business cycles.
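To illustrate the point about the shocks doing all the work, here’s a toy sketch (the AR(1) law of motion and the numbers are just stand-ins for the kind of linearised solution these models deliver, not any particular paper’s model): hit the system once and the deviation dies away geometrically; to get persistent business-cycle-looking wiggles you have to keep feeding it new shocks every period.

```python
# Toy illustration, not any particular DSGE model: deviations of output from
# steady state follow y_t = rho * y_{t-1} + e_t.
import numpy as np

rho, T = 0.9, 60
rng = np.random.default_rng(42)

# One-off impulse in period 0, no further shocks: monotone decay back to zero.
one_shot = np.zeros(T)
one_shot[0] = 1.0
for t in range(1, T):
    one_shot[t] = rho * one_shot[t - 1]

# A fresh shock every period: the familiar squiggly, cycle-looking series.
continually_shocked = np.zeros(T)
for t in range(1, T):
    continually_shocked[t] = rho * continually_shocked[t - 1] + rng.normal(scale=0.3)

print("one-off impulse after 20 periods:", round(one_shot[20], 3))   # ~0.122, fading
print("one-off impulse after 50 periods:", round(one_shot[50], 4))   # essentially gone
print("std dev of continually shocked series:", round(float(continually_shocked.std()), 3))
```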

12

tcspears 06.26.09 at 7:11 am

Your point about its inadequate account of the Great Depression aside, it seems to me that it’s not so much that RBC theory has been refuted by the current downturn as that it just doesn’t have anything useful to say about its causes. Through the lens of RBC theory, the downturn was caused by a productivity shock that originated in the financial sector (I don’t think there’s any requirement for shocks in RBC models to be “technological” — total factor productivity is determined by things other than technology). But this shock is completely exogenous to the RBC model itself — it largely treats the financial sector as something exogenous.

13

Alex 06.26.09 at 8:06 am

but the impulses themselves (the really interesting economic question for macro) are completely disregarded. It’s not like anyone would actually believe that productivity is stochastically determined in any given period…what could that possibly mean? A machine rolls a die to decide how many ball bearings it will produce today (and even here, shouldn’t some LLN and CLT hold, smoothing things on aggregate)?

Hence, as I like to call it, the need for realistic economics. Look, there’s a whole looming alp of memetic high ground here, and someone better get up here and claim it before someone else does…

14

marcel 06.26.09 at 7:15 pm

John’s account has inspired me to take a longer historical view of macroeconomics.

A Potted History of Macroeconomics:

Before, during and shortly after WW2:
1) Keynes’s model of the macro-economy, as interpreted by Hicks, Hansen and Modigliani

2) The Cowles Commission’s work on statistical problems related to understanding the business cycle, esp. estimation of systems of simultaneous equations

3) Cowles Commission looks for models to estimate, including input-output models (Marschak and Andrews, Econometrica 1944), the consumption function (Haavelmo, JASA 1947; and Girshick and Haavelmo, Econometrica 1947) and early (traditional, i.e., Keynesian) macroeconomic models (Klein, “Economic Fluctuations in the US”, 1950).

1950s-1960s
4) Following this period, we get much theoretical and econometric work; the focus of the theoretical work is to improve understanding of the various components of the NIPA that had become central to macro models. The purpose of the work in econometrics was to improve the models’ fit to the data and out-of-sample forecasting, and to develop better statistical estimators.

4a) The econometric work culminates in the macro models identified with Ray Fair, DRI and the Fed-MIT-Penn model (1950s to the present). The most important analysis that did not itself present a new extension to a model or a new estimator was Adelman and Adelman (Econometrica, 1959), entitled “Dynamic Properties of the Klein-Goldberger Model”. I believe this is the first use of a macroeconometric model for simulations.

4b) The pertinent theoretical work is, I think, Modigliani’s (1954) and Friedman’s (1957) models and studies of consumption. I think (not sure) that these were the first attempts to analyze aggregates as the result of a single agent’s solution to an optimization problem, i.e., the Representative Agent (RA).

5) Lucas and Rapping (1969) extend the RA to labor supply.

1980s-2000s
6) Kydland & Prescott (1982) combine the RA with the Adelmans’ work in a simple GE model of the macro-economy, the first RBC model. They did not estimate this model because, in Sargent’s words,

“Calibration is less optimistic [than MLE] about what your theory can accomplish because you’d only use it if you didn’t fully trust your entire model, meaning that you think your model is partly misspecified or incompletely specified, or if you trusted someone else’s model and data set more than your own. My recollection is that Bob Lucas and Ed Prescott were initially very enthusiastic about rational expectations econometrics. After all, it simply involved imposing on ourselves the same high standards we had criticized the Keynesians for failing to live up to. But after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models. The idea of calibration is to ignore some of the probabilistic implications of your model, but to retain others. Somehow, calibration was intended as a balanced response to professing that your model, though not correct, is still worthy as a vehicle for quantitative policy analysis.”
(Macroeconomic Dynamics, 2005).

7) Eventually, 3 things happen to RBC models:

7a) As they are elaborated (1980s-1990s), New Keynesians begin to play with these models, giving them features and dynamic behavior that they believe appropriate;

7b) The theoretical models are used to generate econometric models, complete with equations that can be estimated;

7c) People stop referring to RBC models and start referring to Dynamic Stochastic GE models (DSGE).

Conclusion
What macroeconomics is left with, after roughly 40 years of the rational expectations hypothesis, is:

a) a new method for generating models for policy experiments, and perhaps for forecasting
b) new econometric techniques for estimating these models, and new algorithms for solving them.
c) much stronger reliance on the RA than ever before, as a way to enforce a strong link between theory – optimizing behavior – and analysis.

The parallels between the Keynesian revolution and the RE revolution are striking. Both start with a basic model of the economy. Eventually, technology comes along to estimate the model. Either before or after this point, people use the model not just for understanding how different pieces of the economy fit together, but to analyze the dynamic structure of the macroeconomy, especially in support of their preferred policies. Both have critics all along the way, and (at least for American economists) whether you are a critic or a supporter maps pretty well to your political views (less true once New Keynesians make DSGE models their own). After about 30-40 years, both come a cropper: the Keynesian due to the shifting Phillips Curve in the late 1960s and 1970s, and the associated stagflation; the RE due to the widespread belief, eventually and obviously falsified, that markets are self-regulating and self-correcting, and that a financial crisis on the order of 1929-1933 is no longer possible.

15

Robert Waldmann 06.26.09 at 10:07 pm

As argued by D-Squared, GMM and RBC happen to have both been developed near the Great Lakes but don’t have anything in particular in common. Calibration requires parameter estimates which come from some other data set, but all estimates and methods of estimation are relevant to calibrators (who of course are suspected of choosing whichever parameters they can find in the literature which make the calibrated model fit the data — in any case that’s what I do).

On economic history, Cole and Ohanian used the once-standard calculations of Stanley Lebergott (sp?). They were the official numbers in the Statistical Abstract of the United States until economic historians convinced the editors that they were silly.
The exclusion of WPA workers (but *not* other public sector employees) from the employed was certainly based on ideological hostility to the New Deal. Lebergott’s methodological note bases the choice on the assertion that the WPA did not differ in any relevant regard from Buchenwald and the Gulag (I am *not* exaggerating). I’d guess you know this, as I got it from Rauchway, so your description of who lost the millions of workers leaves out Lebergott for brevity.

However, his numbers were standard. I’ve used them myself. It’s not totally fair to Cole and Ohanian to give them all of the blame for the definitional choice.

The blaming WWII on Yalta analogy is, however, perfect.

16

John Quiggin 06.26.09 at 11:22 pm

Robert, thanks for this info. I had missed the details on Lebergott, so that will be helpful. Although I can’t recall who was involved (maybe an article in JPE?), I’m sure that disputes over the classification of WPA workers were going on decades ago, with much the same ideological overtones.

17

Mark A. Sadowski 06.28.09 at 4:40 pm

John Quiggin,
I know you intended this as a serious post but I could not read it and all of the excellent comments without laughing hysterically. I’ve always had a very low regard for RBC. Unfortunately my graduate program insisted on stuffing my head with useless DSGE models. In any case, one day when I was feeling particularly frustrated with all the nonsense I was compelled to study, I took a Wikipedia entry on RBC and changed it to write the following. I suspect this is a good place to share it:

“Hallucinogenic Business Cycle Theory (or HBC Theory) is a class of psychedelic macroeconomic models in which business cycle fluctuations to a large extent can be accounted for by imaginary (in contrast to reality based) shocks. (The four primary economic fluctuations are the gold rush, the bubble (deviation from trend), the counterintuitive movement, and mass hysteria (also known as “the panic” in classic terminology).) Unlike other leading theories of the business cycle, it sees recessions and periods of economic growth as an artificial response to illusory changes in the hallucinatory economic environment. That is, the level of national output necessarily minimizes the irrationally expected utility, and government should therefore concentrate on pretending to make short-run policy changes and intervene through random statements of make-believe fiscal or monetary policy designed to actively and whimsically whip the general public into a false sense of security.

According to HBC theory, business cycles are therefore “hallucinogenic” in that they are based on complete fantasy, and are the most inefficient possible operation of the economy, even given its seemingly perpetually unviable nature. It differs in this way from other theories of the business cycle, like Keynesian economics and Monetarism, which see asset bubbles as being untenable, and recessions as having tangible causes, which lead to what are known as “real-world repercussions.”

An important principle underlying HBC Theory is the principle of irrational expectations. Irrational expectations theory defines this kind of expectations as being identical to a wild guess about the future (a preposterous forecast) that systematically ignores, or thoroughly misinterprets, all of the available information. However, even with an unlimited number of additional assumptions, this theory of expectations indetermination still makes the prediction that human behavior will still be completely capricious and herd like. Thus, it is assumed that outcomes that are being forecast differ arbitrarily or unpredictably from the market disequilibrium results. As a result, irrational expectations differ substantially from disequilibrium results. That is, it assumes that people systematically make errors when predicting the future, and deviations from common sense happen consistently. In an economic model, this is typically modeled by assuming that the expected value of a variable is equal to a spontaneous error term representing the role of ignorance and mistakes plus the value of some completely irrelevant piece of information (such as the price of tea in China).”

P.S. Is it really safe to say that the theoretical program of RBC is dead? People are still quoting Robert Lucas as though he ever really knew what he was talking about. (Like perfectly vertical LM curves. Hahahahahahaha!)

18

Tim Wilkinson 06.28.09 at 10:38 pm

Alex @13 Hence, as I like to call it, the need for realistic economics.

What do we want? Realistic economics! When do we want it? In the short-to-medium term, subject to prevailing conditions!

19

Lee Ohanian 06.29.09 at 9:22 pm

There is an important factual mistake in this thread. Cole and I include all working hours in our analysis. Rauchway’s critique is about whether work-relief workers were counted, as they are in Darby’s paper, or whether they are not, as in Lebergott. This does lead to a debate between him and Shlaes, but not Cole-Ohanian.

Almost all of the increase in GDP between 1933-39 is from productivity, not recovery in hours worked, and GDP per person remains 27 percent below the normal 2 percent trend at the end of the decade. Cole and I attribute the failure of hours to recover to policies that promoted monopoly and high wages in some sectors. This is very far from the type of RBC models that the threat is about. In fact, the Cole-Ohanian model is in some ways closer to models with involuntary unemployment than the original Kydland-Prescott model.

For those interested in more detailed discussion of this, see:

“http://www.econ.ucla.edu/files/senate_testimony_april_04_2009_ohanian.pdf”:http://www.princeton.edu/~rbenabou/groupthink%20iom%204l%20new2.pdf

20

John Quiggin 06.29.09 at 10:34 pm

Thanks for this correction, and apologies for the error in the post. Rechecking, Rauchway’s critique concerns the suggestion that the decline in working hours per full-time employee should be treated as a macro shock, rather than as a benefit of increased productivity, a trend that continued through the postwar boom. Any response on this point?

21

Mark A. Sadowski 06.30.09 at 12:28 am

John Quiggin,
I’m afraid I may have drawn Lee Ohanian’s attention to your post. You see I was busy criticizing one of his articles at Forbes and mentioned your post here:

http://www.forbes.com/2009/06/16/stimulus-arra-government-spending-krugman-prescott-opinions-contributors-ohanian.html

That being said I found his comment illuminating (in a Freudian sort of way). Evidently he wants to distance himself from “the type of RBC models that the threat is about.” (Yes, in fact, “t” is nowhere near “d” on the ASCII keyboard.)

22

John Quiggin 06.30.09 at 12:48 am

No apology needed. I want to detect mistakes like the one Lee Ohanian pointed out before I present them in some medium less error-tolerant than a blog.

And, as an outsider, I’m obviously interested in distinctions within the New Classical/RBC school.

23

Lee Ohanian 06.30.09 at 8:09 pm

Thanks for the correction, I appreciate that. Regarding hours per worker, the Senate testimony I posted discusses this in some detail. Here is a brief summary: hours per worker are low during the New Deal because of explicit work-sharing. Hours per worker rise in the postwar period once the economy is back on trend. It is a challenge to argue that hours per worker are low during the New Deal because of rising wealth, which is the argument made by DeLong. This implies that leisure is a luxury good (that is, a good with an income elasticity of demand greater than one), and luxury goods tend to decline substantially during economic declines.

Regarding comparing the work I did with Cole vs. Kydland and Prescott, as noted by Mr. Sadowski: the point I wish to make is that there are many models routinely used today that have as a foundation Kydland and Prescott, but have very different policy implications. It is hard to call these other models RBC models, which typically refers to a one-sector model with a representative agent (perfect risk-sharing) with random changes to productivity, and in which allocations are optimal. There really isn’t a useful comparison between Kydland-Prescott and what in the 1970s was called new classical macroeconomics, which had to do with whether anticipated monetary policy was neutral. If you add money to Kydland and Prescott, which many have done, anticipated monetary policy is indeed non-neutral.

Regarding comparisons between Kydland and Prescott and more recent work, my model with Cole has what some would call “structural unemployment” and is an optimizing insider-outsider model. About 20 years ago, Aiyagari developed a quantitative model with uninsurable risk (a non-representative-agent model), but that was clearly related to Kydland-Prescott. At about the same time, Danthine and Donaldson were working out models with imperfectly flexible prices and wages within the Kydland-Prescott model. Kocherlakota, Golosov, and Tsyvinski and others have developed quantitative models with repeated principal-agent issues. All of these at some level have as antecedents Kydland-Prescott, but clearly differ along a number of dimensions.

24

Kevin Donoghue 06.30.09 at 9:35 pm

I think Lee Ohanian meant his link to go to this PDF file. Other than that all I have to say about RBC models is (1) I’m glad we didn’t have to do that stuff in my day and (2) it’s great to see guys like Lee Ohanian show up in blog threads like this to defend their work. It makes me feel there’s hope for economics yet, and for the blogosphere.

25

John Quiggin 07.01.09 at 12:08 am

Sticking to the hours worked question, we can partition the decline in hours worked as follows, I think:
(i) Declining E/P ratio due to Depression
(ii) Trend decline in hours per employed person
(iii) Additional decline in hours per employed person due to increased union bargaining power (since workers with monopoly power will prefer higher wages and lower hours than would arise in a competitive market or one with monopsonistic elements)
(iv) Additional decline in hours per employed person associated with the Depression (below the level that would be the preferred bargain for workers)

Of these, it seems as if (i) and (iv) are unambiguously bad, (ii) is unambiguously good, and (iii) is good for workers but bad for employers (with a second-order net efficiency loss). As noted the size of (i) is the subject of the Rauchway vs Darby/Shlaes debate.

Is there any good evidence on the relative magnitudes of (ii), (iii) and (iv)?

Comments on this entry are closed.