Marty Weitzman on the equity premium

by John Q on August 15, 2004

Brad de Long points to a piece on the equity premium by Marty Weitzman and says,

Marty Weitzman is smarter than I am …This is brilliant. I should have seen this. I should have seen this sixteen years ago. I *almost* saw this sixteen years ago.

Weitzman’s idea[1] is to replace the sample distributions of returns on equity and debt with reasonable Bayesian subjective distributions. These have much fatter tails, allowing for a higher risk premium, a lower risk-free rate and higher volatility, in the context of a socially optimal market outcome. Here are some of the reasons why I think this is important.

My immediate reaction is the same as Brad’s. Something like this has occurred to me too, but I’ve never thought hard enough or cleverly enough about it how to work it out properly. This is a very impressive achievement, and Marty Weitzman is very, very smart (which we already knew).
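
To see why the tails matter so much, here’s a rough sketch of what I take the core mechanism to be, reconstructed with toy numbers of my own since I haven’t been able to read the maths. In the simplest version, normally distributed log growth with unknown mean and variance, the Bayesian predictive distribution is Student-t rather than normal, and the two differ most exactly where the disasters live.

```python
# A minimal sketch of the fat-tails point, with made-up numbers (these are
# not Weitzman's): compare the classical "plug-in" normal distribution for
# log growth with the Bayesian predictive distribution you get when the
# mean and variance are themselves uncertain (standard noninformative
# prior), which is Student-t.

import numpy as np
from scipy import stats

n = 100             # years of data -- an assumed sample size
m, s = 0.02, 0.035  # sample mean and std dev of log growth (toy values)

# Classical plug-in: treat the sample moments as if they were the truth.
plug_in = stats.norm(loc=m, scale=s)

# Bayesian predictive under a Jeffreys prior on (mean, variance):
# Student-t with n-1 degrees of freedom and scale inflated by sqrt(1 + 1/n).
predictive = stats.t(df=n - 1, loc=m, scale=s * np.sqrt(1 + 1 / n))

for crash in (-0.10, -0.20, -0.30):   # probability of a growth disaster
    print(f"P(g < {crash:+.2f}):  plug-in {plug_in.cdf(crash):.2e}"
          f"   predictive {predictive.cdf(crash):.2e}")
```

The two distributions are almost indistinguishable near the middle; it’s only in the far left tail, the part that matters for the risk-free rate and the premium, that the predictive distribution carries vastly more weight.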

My second reaction is a little more sceptical. Some previous attempts at resolving the equity premium puzzle have focused on the tails of the distribution, and the possibility of catastrophic loss. The problem was that it was difficult to describe an outcome where the return on equity was large and negative, but bonds were still a safe asset. The various catastrophic examples cited, such as hyperinflation, revolution and nuclear war, all failed in this respect.

Applying the same reasoning to Weitzman’s argument, we need to consider whether there is a reasonable model of a stable capitalist economy, with functional financial markets, that produces a negative long-run rate of growth in output per person. The only one I can imagine is based on resource exhaustion, and I can’t really see belief (positive probability weight) in such a model being widespread enough to generate the observed equity premium. With less confidence, I’d assert that there are pretty good technological reasons to rule out a sustained rate of productivity growth (embodied and disembodied) of more than 5 per cent for countries that are already at the frontier. The maximum sustainable rate of growth of output per person cannot be much above this.

I haven’t been able to check the math, but I doubt that a complete Bayesian explanation of the equity premium puzzle can be obtained if the prior distribution on the long-run rate of growth is bounded in this way.
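
For anyone who wants to experiment, here’s a toy version of the kind of check I have in mind (my own construction, with made-up parameters, not anything from the paper; it’s one-period and ignores the distinction between annual and long-run growth). It lets you see how the crucial pricing term varies with the lower bound you impose on growth and with how fat you assume the tails to be; notably, with genuine Student-t tails and no bound at all, the term is infinite, so some bound, explicit or implicit, is doing the work.

```python
# A toy version of the check: one period, CRRA utility, and a fat-tailed
# (Student-t) predictive distribution for log consumption growth g.  The
# degrees of freedom, risk aversion and growth moments below are all my own
# toy assumptions, not numbers from the paper.  The term computed is
# E[exp(-gamma*g)], which pins down the risk-free rate (and, together with
# the analogous E[exp((1-gamma)*g)] term, the equity premium) in the
# standard consumption-based model.

import numpy as np
from scipy import stats, integrate

gamma = 3                                     # relative risk aversion (toy value)
pred = stats.t(df=4, loc=0.02, scale=0.035)   # fat-tailed predictive for g
                                              # (df=4 is an assumption of mine)

def kernel_term(lo, hi):
    """E[exp(-gamma*g)] with the predictive truncated to [lo, hi], renormalised."""
    val, _ = integrate.quad(lambda g: np.exp(-gamma * g) * pred.pdf(g),
                            lo, hi, points=[pred.mean()], limit=200)
    return val / (pred.cdf(hi) - pred.cdf(lo))

# Growth bounded to a "reasonable" long-run range, as argued in the post:
print("bounded to [-1%, +5%]:", kernel_term(-0.01, 0.05))

# Relaxing the lower bound.  Because the t distribution has no moment
# generating function, the unbounded expectation is literally infinite, so
# whatever answer you get is a statement about where you cut off the
# disaster tail (and about how fat you assumed it to be).
for lo in (-0.1, -0.5, -1.0, -5.0, -10.0):
    print(f"lower bound {lo:6}: {kernel_term(lo, 0.05):.4f}")
```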

My third reaction is eclectic. My general view is that there is no one explanation of the equity premium, but a set of problems with the standard consumption-based model of asset pricing (CCAPM) that interact to produce results radically different from those of the model. Making expectations Bayesian rather than classical will amplify the effects of any other deviation in the model, and therefore fit neatly into this story.

fn1. The only version of the paper I’ve seen so far is a PDF file in which the maths has not come through. But I think I’ve got the basic idea.

{ 23 comments }

1

Kimmitt 08.15.04 at 10:28 am

whether there is a reasonable model of a stable capitalist economy, with functional financial markets, that produces a negative long-run rate of growth in output per person.

I think that the “representative agent” model, intrinsic to this reasoning, doesn’t make a ton of sense.

2

dsquared 08.15.04 at 11:31 am

I’m most interested in the effect of this piece on the sociology of the econometrics profession. To my mind, it’s been known for ages that Bayesian approaches are vastly more suited to a whole raft of economics problems than classical statistics, but somehow, nobody can be bothered to use or teach them as standard. A high-profile article showing (to more or less devastating effect) how misuse of the classical version of the linear regression model led to untold wasted effort and needless puzzlement would be a most salutary paradigm (in the Kuhnian sense).

3

John Quiggin 08.15.04 at 11:36 am

Kimmitt, I’m no fan of representative agent models, and ex post heterogeneity is an important part of my preferred approach to the puzzle, developed by Greg Mankiw, who is now Bush’s economic advisor.

DD, I agree entirely.

4

Nicholas Gruen 08.15.04 at 2:56 pm

John, DD, standard operating procedure in economics I’m afraid. Massive amounts of effort without the slightest regard for the prospectivity of the vein one is working in. Thus we have whole fields that take decades of hard work to fizzle out into the dismal non-result that was predicted by those who thought sensibly about their prospectivity from the start – like strategic trade theory, for instance. That’s not to say that all areas are like this, they’re not, but it’s infuriating to see how ubiquitous the mountaineer’s approach is: using the techniques “because they were there”, not because it seemed plausible that they’d yield a worthwhile result. Oh well.

5

Nicholas Gruen 08.15.04 at 2:59 pm

John, tracking back to your earlier comments on the implications of the equity premium puzzle, you seem to limit the idea of govt participation in the economy to ownership of enterprises. There’s also share investment, which the govt can hold at arm’s length in a portfolio. Like you, I got started thinking in this way not from the theory but from observing the RBA doing something that Tobin spoke of – participating in markets so as to make them more rational. I am thinking of the RBA’s contrarian behaviour on the foreign exchange markets during the 1980s in particular. This was very benign. It smoothed peaks and troughs in the foreign exchange markets (improving their efficiency) and it made a pot of money for the Government. It stopped making so much money, I think, at around the time it came up with a very different additional strategy of ‘smoothing and testing’ (which may be sensible but seems pretty bereft of any well thought through theoretical foundation to me).

I have always thought there was a case for governments doing this more broadly in other markets. I’ve argued elsewhere that governments could do something similar by holding more debt and equity and also varying their exposure in a contrarian and countercyclical direction.

6

Maynard Handley 08.15.04 at 6:00 pm

John, you ask for a situation, one that is not absurd, in which stocks could land up doing rather worse than bonds.

You may not believe it is realistic, but an example of this would be capture of firms by management so complete that the bulk of profits are siphoned off as salaries, bonuses, etc., leaving minuscule dividends or retained profits, while all laws continue to be followed; in particular, bonds are repaid fully and on schedule.
As I said, it’s arguable whether this will happen, and what roadblocks might occur along the way, but I don’t think it’s outright ludicrous.

7

John Quiggin 08.15.04 at 11:09 pm

Simon Grant and I looked at share ownership in the context of social security

Grant, S. and Quiggin, J. (2002), ‘The risk premium for equity: implications for Clinton’s proposed diversification of the social security fund’, American Economic Review 92(5), 1104-15.

On Maynard’s idea I agree that there are various possibilities of redistribution. These have very different implications from those of the case when negative returns on equity result from negative output growth.

8

dsquared 08.15.04 at 11:19 pm

To be honest, I have a lot of sympathy for Maynard’s point. I’ve never believed that there is any such thing as a risk-free asset (it’s always seemed to me to be a phrase rather like jumbo shrimp or military intelligence). I seem to remember that one of the earliest posts on D^2D had a competition for anyone who could name something that was a genuinely risk-free asset. I also seem to remember that Michael Froomkin won it, but I never sent him his prize, thus teaching the world a lesson about risk or something.

9

Dick Thompson 08.16.04 at 1:35 am

John, could you provide a link to that PDF? ‘twould be much appreciated.

10

Brett Bellmore 08.16.04 at 10:59 am

“With less confidence, I’d assert that there are pretty good technological reasons to rule out a sustained rate of productivity growth (embodied and disembodied) of more than 5 per cent, for countries that are already at the frontier. The maximum sustainable rate of growth of output per person cannot be much above this.”

Interesting; I’d say quite the opposite. Considering that much productivity today is the productivity of machines, not people, and that automation engineers are becoming more and more successful at “closing the loop” (look at the phenomenon of “lights out” factories), it’s entirely plausible that in the relatively near future productivity in a great deal of the industrial economy won’t be tied to human labor. A rate of growth based solely on raw materials and energy constraints could, for at least a short while, exceed 100% per year.

But “sustained”? No, I suppose not, unless we got off planet. “Sustained” high rates of growth would run up against the limits of energy and the Earth’s heat budget fairly quickly. A typical “S” curve of growth as seen all the time in biology is more plausible.

11

John Quiggin 08.16.04 at 11:12 am

Brett, the problem here is that the activities where automation is feasible are, in most cases, shrinking relative to national income. Meanwhile, activities like health care are growing.

This occurs because, mostly, own-price elasticities are less than one.
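
To spell that out with a toy constant-elasticity example of my own: if demand for the automatable sector’s output has own-price elasticity below one, then productivity growth, by driving down the sector’s relative price p, shrinks its share of spending, which varies roughly as p^(1-ε).

```python
# Toy numbers, not a calibration: with constant-elasticity demand q = p**(-eps),
# the expenditure share is s = p*q = p**(1-eps).  Automation pushes the
# sector's relative price p down; with eps < 1 its share of spending falls
# even though its output rises.
eps = 0.5                        # own-price elasticity below one (assumed)
for p in (1.0, 0.5, 0.25):       # relative price falling with automation
    q = p ** (-eps)
    print(f"p={p:4}  output={q:.2f}  spending share ~ {p * q:.2f}")
```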

12

John Quiggin 08.16.04 at 11:16 am

This is a near-final version of the Grant-Quiggin paper. As I mentioned, I don’t have access to a proper version of the Weitzman paper yet.

13

Mats 08.16.04 at 12:28 pm

“My general view is that there is no one explanation of the equity premium, but a set of problems with the standard consumption-based model of asset pricing (CCAPM)” – This standpoint seems safe enough. Consider the alternative: because of shortcomings in the basic description of uncertainty, economists have for decades failed to describe such basic entities in the economy as the approximate levels of asset returns (and what else…). An impressive display of smartness?

14

Andrew Edwards 08.16.04 at 2:31 pm

John:

Will you post a link when the Weitzman paper comes online?

I’m sure we’d all love to read it.

Otherwise I’ll trot down to the U of T campus and bug Henry for a copy. :-)

15

Dirk Jenter 08.16.04 at 3:07 pm

John,
I agree with your assessment that the equity premium puzzle is most likely caused by several factors. What bothers me about the Weitzman paper, and the blogosphere reaction to it, is that he is far from the first one to introduce model and/or parameter uncertainty into the consumption CAPM in a Bayesian framework. I am a corporate finance person, not an asset pricer, but I vividly remember sitting in seminars by Lubos Pastor and Rob Stambaugh at least six or seven years ago in which they presented models along these lines. They even incorporated Bayesian learning into their models, at least if my memory doesn’t fail me. Also, Nick Barberis’s job market paper had a simple model of optimal portfolio choice under parameter uncertainty. I haven’t read the Weitzman paper yet, but my guess right now is that this is a case of somebody blissfully unaware of the literature in a field (that is not his) doing some work that has been done long before, and people who are equally unfamiliar with the field getting all excited because it is the first time they hear about the result.

16

John Quiggin 08.16.04 at 9:05 pm

Dirk, both Brad DeLong and I have published articles on the equity premium in major journals. I’ve even written a (so far unpublished) survey article. Describing us as “equally unfamiliar with the field” doesn’t seem entirely accurate.

The literature on the equity premium is gigantic, and no-one nowadays can be aware of every paper. In addition, as I point out in my post, it’s well known that the basic idea has been tried before, most obviously by Rietz – Weitzman’s innovation is in having a fully Bayesian approach. I will chase the papers you mention, however.

17

Dirk Jenter 08.16.04 at 9:44 pm

John,
I apologize if I sounded disrespectful; I actually read your survey article with Simon Grant (if that’s the one you mean) a few months ago and learned a lot from it. There was a lot of talk about both Bayesian learning and robust control approaches to tackling the equity premium puzzle in the Harvard Econ department when I did my PhD there (1997-2002), with John Campbell and Gary Chamberlain encouraging students to work along these lines. Nicholas Barberis had graduated just before I arrived and gone to Chicago, but came back as a visitor and taught asset pricing to us in my second year. His job market paper on optimal portfolio choice under parameter uncertainty was published in the Journal of Finance in 2000. His modelling approach was really simple, but other people have by now added a lot of technical bells and whistles. A little later than Barberis, Lubos Pastor was doing his PhD at Wharton under Rob Stambaugh and worked on both model and parameter uncertainty. Lubos Pastor went to Chicago and you should be able to find his papers on his webpage there, including a very relevant paper written with Stambaugh.

19

John Quiggin 08.17.04 at 2:00 am

Thanks Dirk and sorry if I was a little touchy.

I downloaded the Pastor-Stambaugh paper on structural breaks from NBER, which looks interesting and relevant, but doesn’t seem to discuss the implied degree of risk aversion. Is that the one you meant?

Barberis also sounds interesting, though, as you say, slightly tangential. I’ll chase it as soon as I get some free time.

20

Dirk Jenter 08.17.04 at 5:26 am

John,
I mostly meant the earlier paper by Pastor and Stambaugh, I think from 1999, and Lubos Pastor’s job market paper, published in 2000. But instead of relying on the bad memory of someone who has not really been thinking about these papers for years, I walked across the hall and talked to some people who actually have a clue. Here is the recommended reading list I received for Bayesian models related to the equity premium puzzle: Anything by Kandel and Stambaugh, especially their 1996 JF paper. Anything by Brennan and Xia, especially their 2001 JME paper. Pietro Veronesi (1999 RFS). Lewellen and Shanken (2002 JF). Bansal and Yaron (2004, forthcoming JF). I do not know how complete this list is, but I think it confirms my impression that Weitzman is probably not the first one to apply Bayesian methods to optimal portfolio selection and/or the related equilibrium models.

21

Jason 08.17.04 at 2:18 pm

Dan/John,

I have a question (since I don’t know all that much about the equity premium literature).

Almost all of the papers or discussions I see focus on the fact that the “average rate of return” for equities is too high. Isn’t average rate of return the wrong thing to be measuring because we are dealing with products not sums?

As an extreme case, suppose one year I have a 200% return and the next year I have a -100% return; then my average rate of return is 50%, despite my having no money at all. The measurement should be the expected value of the log of one plus the return, E[log(1+r)], rather than E[r]. Of course, this doesn’t take time preference into account as easily, but it does demonstrate that the average rate of return will overestimate the actual return (Jensen’s inequality gives this in general).
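
To make the toy example concrete (my own two-line check):

```python
# The arithmetic average of the two yearly returns says +50% a year, while
# the money is in fact all gone; the compound outcome, or equivalently the
# mean of log(1+r), is what tracks wealth.
import numpy as np

r = np.array([2.0, -1.0])                # +200% then -100%
print("arithmetic mean:", r.mean())      # 0.5, i.e. "+50% on average"
print("terminal wealth per $1:", np.prod(1 + r))   # 3 * 0 = 0
print("mean log return:", np.mean(np.log1p(r)))    # -inf (numpy warns about
                                                   # log(0); that's the point)
```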

22

woodturtle 08.17.04 at 8:23 pm

I thought the intent of the post was to introduce some new vocabulary words, and they are some doozies. I’m about one-third the way through.

All I can say about that guy is that he needs to get out more.

23

dsquared 08.17.04 at 9:39 pm

Jason: yes, absolutely, but it turns out that even if you calculate it correctly, the problem is still there.

Comments on this entry are closed.