Brad DeLong carries on the discussion about discounting and the Stern Review, responding to a critique by Partha Dasgupta that has already been the subject of heated discussion. As Brad says, all Dasgupta’s assumptions are reasonable, and his formal analysis is correct.
But … The problem I see lies in a perfect storm of interactions.
This brings me to one of my favorite subjects: the equity premium puzzle and its implications, in this case for the Stern Review. I’ll try to explain in some detail over the page, but for those who prefer it, I’ll self-apply the DD condenser and report:
Shorter JQ: It’s OK to use the real bond rate for discounting while maintaining high sensitivity to risk and inequality.
Stern, and nearly everyone else in the debate so far, uses a model based on expected utility theory. There are very strong reasons to go this way. First, expected utility has the property of dynamic consistency, which means that, if you make a plan, anticipating all possible contingencies, you’ll want to continue with that plan over time, whichever contingency arises. No other choice model has this property except under special conditions.
Second, expected utility theory allows a single utility function that simultaneously determines attitudes to intertemporal wealth transfers, interpersonal redistribution and risk reduction (transfers of income between states of nature). With the plausible technical assumption of constant relative risk aversion, (almost) everything is determined by a single parameter (called eta in the Stern report), which measures the elasticity of the marginal utility of income.
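For concreteness, here is a minimal sketch, in Python with illustrative numbers, of the CRRA family and the way the single parameter eta does triple duty:

```python
import numpy as np

def crra_utility(c, eta):
    """Constant relative risk aversion utility of consumption c.

    eta = 1 gives the logarithmic form; higher eta penalises low
    consumption (or income, or wealth) more heavily."""
    if np.isclose(eta, 1.0):
        return np.log(c)
    return (c ** (1.0 - eta) - 1.0) / (1.0 - eta)

# The same eta does triple duty:
#  - risk: how much you dislike a spread in consumption across states
#  - time: how much extra future consumption compensates for less today
#  - distribution: how much a dollar to the poor outweighs a dollar to the rich
for eta in (1.0, 2.0, 4.0):
    sure   = crra_utility(100.0, eta)
    gamble = 0.5 * crra_utility(50.0, eta) + 0.5 * crra_utility(150.0, eta)
    print(f"eta={eta}: u(100)={sure:.4f} vs Eu(50/150 gamble)={gamble:.4f}")
```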
The big problem is that observed market outcomes aren’t consistent with EU theory. This is partly because people don’t act in accordance with EU (as shown in experimental studies) and partly because markets don’t work in the smooth and frictionless way assumed in standard finance-theory models.
The most important problem in this respect is the ‘equity premium puzzle’ and the closely related ‘risk-free rate puzzle’. The equity premium puzzle is that, for plausible choices of eta, the real bond rate should be somewhat higher than it has been on average (it’s close to the ‘correct’ rate at present), and the rate of return to equity much lower.
Historically, real returns to investors from holding U.S. government bonds have been estimated at about one per cent per year, while real returns on stock (“equity”) in U.S. companies have been estimated at about seven per cent per year, a difference of six percentage points. By contrast, for reasonable choices of eta, the difference should be no more than half a percentage point. The equity premium puzzle can be resolved by assuming very high values of eta, since risk aversion increases the premium. But high values of eta imply a high discount rate, so the risk-free rate puzzle is made worse.
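The back-of-the-envelope arithmetic behind the puzzle looks roughly like this; the volatility and correlation figures are stylised US values of the kind used in the Mehra–Prescott literature, not precise estimates:

```python
# Standard consumption-CAPM approximation: the predicted equity premium is
# roughly eta * corr * sd(consumption growth) * sd(equity returns).
sigma_c = 0.036   # std dev of annual US consumption growth (stylised)
sigma_e = 0.167   # std dev of annual real US equity returns (stylised)
corr    = 0.4     # correlation between the two (stylised)

for eta in (1, 2, 4, 10):
    predicted_premium = eta * corr * sigma_c * sigma_e
    print(f"eta={eta:>2}: predicted equity premium ~ {predicted_premium:.2%}")

# Observed premium: roughly 7% - 1% = 6 percentage points.
# Even eta = 10 only predicts about 2.4%, and an eta that high pushes the
# implied riskless rate far above the 1% or so seen in the data.
```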
There’s no generally agreed way of resolving the equity premium puzzle, but, as I’ve suggested above, the explanation should reflect some combination of individual preferences and market failure. If you accept that, you get a few policy conclusions.
(i) When discounting riskless cash flows, the real bond rate is appropriate for governments and private individuals.
(ii) When valuing risky cash flows received by individuals, the (large) market premium for risk should be applied.
(iii) (Less general agreement on this one) When valuing risky cash flows received by governments, the (small) premium derived from expected utility theory should be used (see the toy comparison after this list).
(iv) Any attempt to apply EU reasoning consistently across domains of time, risk and income distribution will lead, as Brad says, to a perfect storm of contradictions.
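To see why the choice among (i)–(iii) matters at climate-policy horizons, here is a toy present-value comparison; the 1, 7 and 1.5 per cent rates are illustrative stand-ins for the riskless bond rate, the market rate and a small EU-derived premium:

```python
# Discount the same $100, received 50 years from now, at three rates.
cash_flow = 100.0
horizon   = 50

def present_value(rate):
    return cash_flow / (1.0 + rate) ** horizon

print(f"(i)   riskless cash flow, bond rate 1%:     {present_value(0.010):6.1f}")
print(f"(ii)  risky, private, market rate 7%:       {present_value(0.070):6.1f}")
print(f"(iii) risky, public, small premium (1.5%):  {present_value(0.015):6.1f}")
# Roughly $61, $3 and $48 respectively -- which is why the treatment of
# risk and discounting dominates the Stern debate.
```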
With this in mind, we can look again at some contributions to the debate.
Stern uses a low discount rate but wants to use a high risk premium when considering uncertainty and income distribution. In my view, this is reasonable.
Nordhaus wants a high discount rate on riskless income to match market data, but this data concerns risky cash flows.
Dasgupta shows that applying Stern’s eta in a world with an unlimited supply of investment opportunities yielding 4 per cent produces implausible outcomes. The 4 per cent rate sounds reasonable because expected returns to capital are generally higher than this. But the riskless market bond rate is only 1 or 2 per cent. If an unlimited supply of riskless 4 per cent investments actually existed, an unlimited arbitrage would be possible. This is another way of looking at the perfect storm problem.
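A rough sketch of the arithmetic behind Dasgupta’s objection, using the textbook AK shortcut as a simplification: with a constant return rho and CRRA utility, optimal consumption grows at (rho - delta)/eta, and the implied saving rate is that growth rate divided by rho.

```python
# AK-style illustration: constant return rho on every unit invested,
# CRRA utility with parameter eta, pure time preference delta.
# Optimal consumption growth g = (rho - delta) / eta; saving rate s = g / rho.
rho   = 0.040   # the assumed 4 per cent return
delta = 0.001   # Stern's near-zero pure rate of time preference

for eta in (1.0, 2.0, 3.0, 4.0):
    g = (rho - delta) / eta
    s = g / rho
    print(f"eta={eta}: implied saving rate ~ {s:.1%} of output")

# eta = 1 (roughly Stern's value) implies saving about 97.5% of output --
# the kind of implausible outcome Dasgupta points to. But if the relevant
# riskless rate is really 1-2%, the 4% figure is doing a lot of the work.
```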
DeLong wants sensitivity analysis for higher values of eta. This is a sensible move if you’re committed to pushing applied utilitarianism to the limit, but I suspect that a different approach will be needed in the end.
{ 14 comments }
talboito 12.09.06 at 1:22 am
It seems Mandelbrot’s argument about all this equity premium stuff says equities are actually much much riskier than standard models would claim.
thetruth 12.09.06 at 3:12 am
“if you make a plan, anticipating all possible contingencies, you’ll want to continue with that plan over time, whichever contingency arises”
That’s called a martingale, isn’t it?
Michael Greinecker 12.09.06 at 9:19 am
No, martingales are certain stochastic processes.
The idea of time consistency is that if you prefer two apples tomorrow to three apples the day after tomorrow, you shouldn’t change your mind tomorrow and trade in the two apples for three apples the day afterwards.
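A minimal sketch, using the standard beta-delta (quasi-hyperbolic) set-up with illustrative parameters; note the reversal here runs in the usual direction, from a patient plan to an impatient choice:

```python
def value(reward, periods_ahead, beta, delta):
    """Quasi-hyperbolic discounting; beta = 1 recovers ordinary exponential."""
    if periods_ahead == 0:
        return reward
    return beta * (delta ** periods_ahead) * reward

for beta in (1.0, 0.5):           # 1.0 = exponential, 0.5 = present-biased
    delta = 0.9
    # Viewed from today: 2 apples tomorrow vs 3 apples the day after.
    plan   = "3 later" if value(3, 2, beta, delta) > value(2, 1, beta, delta) else "2 sooner"
    # Viewed from tomorrow: 2 apples now vs 3 apples the next day.
    choice = "3 later" if value(3, 1, beta, delta) > value(2, 0, beta, delta) else "2 sooner"
    print(f"beta={beta}: plan made today -> {plan}; choice made tomorrow -> {choice}")
```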
marcel 12.09.06 at 10:05 am
Why is time-consistency important here? It was first raised as an issue in a context where reputation influenced outcomes.
Consider this example, which I think is relevant to climate change.
1) We have very incomplete information about the future: considerable ignorance not only about the probabilities of potential outcomes, but also about the contents of the event space – the set of possible outcomes.
2) Nevertheless, it appears that some action is a wise step.
2a) Based on current knowledge, including recognition of our ignorance, we choose an action that appears most prudent, i.e., optimal in light of what we know and know we don’t know (shades of Rumsfeld).
3) At a later date, with more information, we see that a different choice would have been better. With this greater information, we can see that even with the choice previously made, it is now possible to improve the (likely) outcome by modifying our prior decision; i.e., choosing a new action.
Of course, assumption (1) makes EU theory inappropriate, since that requires a complete listing of both possible outcomes and their associated probabilities. (Step (3) may as well, but that’s not relevant here.) Nevertheless, I don’t see the importance of reputation in determining the outcome, and more to the point, I don’t see the importance of time-consistency.
JQ, enlighten me.
aaron 12.09.06 at 2:35 pm
I think trying to model a discount rate is silly. Just use the straight-up expected real GDP growth rate.
aaron 12.09.06 at 3:54 pm
(for a no-AGW scenario).
John Quiggin 12.09.06 at 5:16 pm
Marcel, you’re a step ahead here. The dynamic consistency properties of EU depend crucially on the (false) assumption that you’ve anticipated all possible contingencies. The problem of what to do when you haven’t anticipated all possible contingencies is *very* hard and is the focus of my current research.
But dynamic consistency isn’t just a matter of reputation, which, as you say, doesn’t matter in this context. It’s a desirable property for individual decision-making. If it doesn’t hold, it means that you will predictably reach situations where you would, if you could, change your past decisions. Also, it creates a dependence of current decisions on things that might have happened but didn’t – this gets very complicated.
Maynard Handley 12.09.06 at 6:27 pm
I have heard a claim, which sounds quite reasonable to me, that the equity risk premium is easily understood as a liquidity premium.
The idea is that by holding bonds, when trouble comes, you can sell those bonds for something close to principal, and have the cash you need. (Of course long bonds are subject to interest rate risk, but that is a known issue.)
When it comes to stocks, when trouble comes and you need to raise cash, chances are everyone else is also selling, and you’re liable to take a substantial loss. The difference versus the bonds case is the extent to which your need to raise cash (or at least desire to flee stocks) is correlated with everyone else’s simultaneous desire to flee stocks.
The implication is that this premium is covering a real risk, one that is not diversified away by owning the S&P 500 rather than a single company, and one that is a real problem *for certain classes of actors* but not for others. If you are an entity that requires a certain level of cash flow over short terms, or at least is vulnerable to this sort of mass panic, you have to weigh this risk against the premium and decide how the two compare; if you are not such an entity, you can ride the risk premium to greater wealth.
(Note, as always, this discusses a limited issue in finance. Relevance to a specific situation, e.g. should I buy stocks today, requires paying attention to other issues, for example the risk that US dollars will be worth a whole lot less ten years from now than they are worth today.)
This strikes me as an eminently reasonable explanation of the risk premium. An ideal solution would involve real numbers (fluctuations in the values of stocks over quarters, the fraction of the market that is sensitive to these fluctuations, etc.), but this seems a worthwhile project.
Barry 12.09.06 at 6:47 pm
Also, Maynard, from what little I know, there’s a time-horizon issue. If one can assuredly hold stocks for the long run (i.e., past a recession and its sluggish aftermath), then one is in a different position from those who can’t be sure of that. And the market will, of course, be a mix of both types, plus those who simply don’t see the long run, and make their decisions on the short run.
Barry 12.09.06 at 6:50 pm
John Quiggin: “Marcel, you’re a step ahead here. The dynamic consistency properties of EU depend crucially on the (false) assumption that you’ve anticipated all possible contingencies. The problem of what to do when you haven’t anticipated all possible contingencies is very hard and is the focus of my current research.”
This still doesn’t make it understandable to me. For a simple example, let’s say that there are three possible outcomes – A, B and C – with associated probabilities assigned at t0. Then, at time t1, one of the outcomes (e.g., A) is either no longer possible or has actually happened.
How can expected utility hold then?
John Quiggin 12.09.06 at 8:54 pm
Expected utility is fine in this case, Barry.
More importantly, suppose A happens and that there are three possibilities (themselves involving uncertainty), conditional on A, for time t2, call them Aa, Ab and Ac, such that you can choose, at A, which you want. With dynamic consistency, if you had made a plan at t0 which involved choosing Aa if A occurred at t1, you would always want to follow through. Without dynamic consistency, you might end up preferring Ab at t1, and this might cause you to regret the choices that led you to A.
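Here’s a toy numerical sketch, with made-up lotteries and a square-root utility function, showing why the ranking planned at t0 survives re-evaluation at t1 under expected utility:

```python
# Options available at t1 if A has occurred, each a lottery of (prob, payoff) pairs.
options = {
    "Aa": [(0.5, 10.0), (0.5, 4.0)],
    "Ab": [(0.9, 6.0), (0.1, 2.0)],
    "Ac": [(1.0, 5.0)],
}

def expected_utility(lottery, u=lambda x: x ** 0.5):
    return sum(p * u(x) for p, x in lottery)

prob_A = 0.3  # probability, as of t0, that A occurs at t1 (arbitrary)

# Contribution of each conditional option to the ex-ante plan at t0 ...
plan_scores   = {k: prob_A * expected_utility(v) for k, v in options.items()}
# ... and the ranking recomputed at t1, once A is known to have happened.
choice_scores = {k: expected_utility(v) for k, v in options.items()}

print("t0 plan:  ", max(plan_scores, key=plan_scores.get))
print("t1 choice:", max(choice_scores, key=choice_scores.get))
# The two coincide because prob_A scales every option equally: that's the
# separability expected utility buys, and what many non-EU models give up.
```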
radek 12.09.06 at 10:29 pm
Barry, that’s the ‘expected’ in expected utility theory.
radek 12.09.06 at 10:48 pm
Also, I was under the (perhaps mistaken or inaccurate) impression that, to the extent we can estimate eta independently, it comes out somewhere around 2, but generally higher than the 1 which Stern uses (Brad says sensitivity analysis should use values of 1 to 5).
(eta = 2 gives the simple utility function u(w) = -(1/w), where w is wealth)
I’m also not convinced that setting the pure rate of time preference to 0 is any more “ethical” than setting it equal to 1. Not that I know what the correct delta is in this context. Alan Rogers has an argument that for evolutionary reasons delta is 2.5 (AER 94).
Barry 12.09.06 at 11:48 pm
OK, I think that I see now. Radek, thanks for the reminder – I was thinking about optimal outcomes, not optimal expected values.