I have always been of the view that there’s no real point in getting too outraged about the Nobel Prize for Economics. For one thing – economics is an important subject which is bound to have an important prize, and it’s a good thing that this prize isn’t wholly in the control of the American Economic Association, because if it was it would be a whole lot worse. For another, on an objective look at the quality of the company which the Economics Nobel is keeping, I don’t think anyone can really claim it’s bringing the average down. The Peace Prize is a notorious joke, of course, but the Literature prize is also wildly eccentric, and even the Physics and Chemistry prizes are occasionally awarded to people who believe in ESP. So let’s stipulate that the Balzan Prize and the Fields Medal are both really really good prizes, and that winning one of them is probably even better than having dinner with the King of Sweden.
So, the Fama/Shiller/Hansen prize, or as the vast majority of comment has it, the prize for “Fama, Shiller and that other guy”. What does it say about the state of economics? I think it encapsulates everything good and bad about the subject. First, the good.
People have really misinterpreted what this prize is about – in particular, anyone who thinks there’s anything at all paradoxical about it being shared between Fama and Shiller is really getting the wrong end of the stick. It’s a prize that’s been awarded for decades of empirical work on the statistical properties of securities prices. Allow me a potted history …
Securities markets are big and important economic things, so it would be a good idea to understand as much as we can about them, even if there wasn’t the tantalizing possibility of making a load of money out of predicting their movements, which there is. They also have the attractive property that, definitionally, all securities transactions have to be recorded and are denominated in cash, so there aren’t the same measurement issues as there are with a lot of other economic phenomena. We also had (via Paul Samuelson, among others) a very intriguing piece of theory, which suggests, in the words of the title of his 1965 paper, that “Properly Anticipated Prices Fluctuate Randomly”. There is a hell of a lot of intellectual debate (and because this is economics, an awful lot of ideological bullshit) related to what conclusions might follow from the fact that market prices properly discount all available information, which I will get onto later. But on the face of it, and bearing in mind that “Properly Anticipated Prices Fluctuate Randomly” does not mean the same as “Randomly Fluctuating Prices Are Properly Anticipated”, the stochastic properties of securities prices, and specifically the question of whether they fluctuate randomly or not, would seem to be quite an important thing to know about.
This is where Eugene Fama came in, round one. Unfortunately, at this point, a piece of misnomenclature occurred, and the “Random Walk Hypothesis” (i.e., the hypothesis that securities prices follow a random walk process) got renamed the “Efficient Markets Hypothesis”. At least at this point, it was still called a “Hypothesis”, which gave at least a clue that it was an empirical claim about the statistical properties of securities returns rather than a necessary truth about underlying reality – the “Efficient Markets Theory” (and, god help us, “Theorem”) was yet to come.
Although he was one of the parties to the mis-re-naming, Eugene Fama did a lot of really fundamental work in sharpening up the concept of what it might mean for securities prices to fluctuate “randomly”. In particular, his weak, semi-strong and strong forms of the Efficient Markets Hypothesis started the ball rolling with respect to thinking about what sort of information one should be conditioning on when carrying out the statistical tests for a random walk. On the basis of his own research – which was very good at the time, albeit that as time and science moved on, it became apparent that the standard statistical tests of the day were pretty low in power and tended not to be very good at rejecting the hypothesis of a random walk – Fama concluded that the basic answer to the question was that, as far as anyone could tell, and certainly to the extent of being able to profit from them, securities prices were random and anyone charging money for the service of being able to predict them was probably lying. This was described, rather embarrassingly, as “the best established proposition in social sciences” in 1978 by Michael Jensen, who was one of a large crew of mainly American academics who kind of picked up this ball and ran with it, as we will get into discussing later.
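(For the statistically curious, here is a minimal sketch, in Python with made-up numbers, of what a weak-form test of this sort looks like – checking whether returns are correlated with their own past. The data, the seed and the helper function are all mine, not anything from Fama’s papers.)

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Synthetic daily returns under the null hypothesis: i.i.d. noise, so the log
# price series is a random walk with drift. Swap in real returns to test them.
returns = rng.normal(loc=0.0003, scale=0.01, size=2500)

def ljung_box(x, lags=10):
    """Ljung-Box Q statistic for the null of no autocorrelation up to `lags`."""
    x = x - x.mean()
    n = len(x)
    acf = np.array([np.sum(x[k:] * x[:-k]) for k in range(1, lags + 1)]) / np.sum(x * x)
    q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, lags + 1)))
    return q, 1 - chi2.cdf(q, df=lags)

q, pval = ljung_box(returns)
print(f"Q = {q:.2f}, p-value = {pval:.3f}")
# A small p-value would reject the weak-form "returns are unpredictable from
# their own past" null; with i.i.d. noise we should usually fail to reject.
```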
At the start of the 1980s, though, Shiller kind of put a bomb under the EMH, by attacking it from the other side. Although, as I noted above, “Properly Anticipated Prices Fluctuate Randomly” does not imply “Randomly Fluctuating Prices Are Properly Anticipated”, the reverse implication is valid – if prices could be demonstrated to not properly anticipate the changes in the underlying cash flows they represent claims on, then it could be demonstrated that they weren’t wholly random, as well as blowing up the larger intellectual project of “market efficiency” that had been built on the foundations of the Random Walk Hypothesis.
Shiller’s 1981 paper is pretty conceptually simple. Given that securities prices (particularly share prices) are meant to be based on the present value of a stream of future dividends, and given that dividend streams are not really all that volatile, why is it that share prices go up and down so much? Shiller showed that in order to believe that prices properly anticipated the risk-adjusted discounted value of future cash flows, you would have to believe something pretty implausible about the unobserved parameters (the rate of risk aversion and/or time preference) and that it was considerably more appealing to believe that the market simply and constantly got it wrong.
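(Here are the mechanics of that comparison as a minimal sketch, assuming a constant discount rate and with invented dividend and price series built to mimic the pattern; it is not Shiller’s actual dataset or code.)

```python
import numpy as np

rng = np.random.default_rng(1)
T, r = 200, 0.05        # years of data and a constant discount rate (an assumption)

# Invented dividend stream: smooth growth plus a little noise (illustrative only).
dividends = 1.0 * 1.02 ** np.arange(T) * np.exp(rng.normal(0, 0.02, T))

# Ex-post "rational" price p*: the discounted value of the dividends actually paid
# afterwards, with a crude Gordon-style terminal value standing in for the unseen future.
p_star = np.empty(T)
p_star[-1] = dividends[-1] / r
for t in range(T - 2, -1, -1):
    p_star[t] = (dividends[t + 1] + p_star[t + 1]) / (1 + r)

# An invented "market" price: p* times a large, persistent valuation error, built
# to mimic the swings Shiller documented in the real data.
swings = np.cumsum(rng.normal(0, 0.05, T))
price = p_star * np.exp(swings - swings.mean())

# Shiller's point: if price_t = E_t[p*_t], the price is a forecast of p*, and a
# forecast must be *less* variable than the thing it forecasts (after detrending).
trend = 1.02 ** np.arange(T)
print("std of detrended p*:   ", round(float(np.std(p_star / trend)), 2))
print("std of detrended price:", round(float(np.std(price / trend)), 2))
```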
I like to think that, as the majority of CT readers stroke themselves with pleasure at the thought of “Markets are not efficient after all! I was right to do that humanities degree!” and mark down Robert Shiller as A Good Thing to be counterposed to Eugene Fama, who was A Bad Thing, a small minority of like-minded souls will be thinking “hang on, tell me about that ‘stock prices not random’ thing?”.
Indeed my friends. If stock prices have “too much” randomness, then they overreact in both directions. Specifically, if the Shiller Cyclically Adjusted Price Earnings ratio (or price-dividends, or whatever) is historically high, then it will tend to fall and vice versa if it is low. This one works. It’s by now a really quite well-established fact about securities prices.
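(A toy sketch of the exercise, with placeholder series standing in for Shiller’s long-run S&P data; on random numbers the estimated slope means nothing, but the mechanics are the same.)

```python
import numpy as np

rng = np.random.default_rng(2)
months = 12 * 100

# Placeholder monthly real earnings and prices; the real exercise uses Shiller's
# long-run S&P dataset (prices, dividends, earnings, CPI) from his website.
earnings = 5.0 * 1.0015 ** np.arange(months) * np.exp(rng.normal(0, 0.05, months))
prices = 20.0 * earnings * np.exp(0.3 * np.cumsum(rng.normal(0, 0.02, months)))

# CAPE: price divided by the trailing ten-year (120-month) average of real earnings.
window = 120
cape = np.array([prices[t] / earnings[t - window:t].mean()
                 for t in range(window, months)])

# Subsequent ten-year price return, aligned with each CAPE observation.
horizon = 120
fwd = prices[window + horizon:] / prices[window:months - horizon] - 1
cape = cape[:len(fwd)]

# The historical finding is a reliably negative slope: high CAPE, low subsequent
# returns. On these random placeholder series the slope is of course meaningless.
slope, intercept = np.polyfit(cape, fwd, 1)
print(f"slope of 10-year forward return on starting CAPE: {slope:.3f}")
```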
In fairness to Fama, his “round two” contribution to this debate showed that he is a good empiricist at heart, as he spent most of the 80s and early 90s doing work with Kenneth French on testing what kinds of “anomalies” or sources of predictability could be explained away as artifacts of the data or statistical quirks, and which were real and persistent effects that had to be taken into account. In the end, he concluded that before saying that stock prices fluctuated randomly, one had to condition on factors which might include company size, “value” (book to market ratio) and even “momentum” (shares which have performed strongly in recent periods do seem to have a tendency to continue to outperform which can’t fully be chalked up to statistical noise). For reasons I don’t fully understand, Fama still thinks that a theory which allows for these factors is still worth calling a theory of “efficient markets”, but fair do’s to the guy – he won the prize for empirical work and he has not been scared to go where the data took him.
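(“Conditioning on” those factors is, mechanically, just a regression; here is a toy version with invented factor returns – the real ones live in Ken French’s data library – where the leftover intercept is the “alpha”.)

```python
import numpy as np

rng = np.random.default_rng(3)
T = 600   # months of invented data

# Placeholder factor returns: market excess, size (SMB), value (HML), momentum (UMD).
factors = rng.normal(0, 0.03, size=(T, 4))
true_betas = np.array([1.1, 0.4, 0.3, 0.2])

# A synthetic fund whose returns are fully explained by its factor exposures (zero alpha).
fund_excess = factors @ true_betas + rng.normal(0, 0.01, T)

# OLS of fund excess returns on a constant plus the factors; the intercept is the
# "alpha" left over once the size/value/momentum-style factors are conditioned on.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
alpha, betas = coef[0], coef[1:]
print(f"alpha = {alpha:.4f} per month, betas = {np.round(betas, 2)}")
```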
Meanwhile, in another part of the forest, Hansen’s work goes to one of the points that I kind of skated over in the discussion of Shiller above. I noted that, given the variability of dividends and the variability of share prices, one would have to have an absolutely implausible amount of variability in risk aversion to conclude that share prices were “Properly Anticipating” the future. But how can you know what amount of variability is or isn’t implausible? The state of the art in econometrics when Shiller (1981) came out was the “maximum likelihood” method, by which you specify a probability distribution for the error process and then calculate how far out of the tail (or how close to the centre) of this distribution the errors (“residuals”) would have to be in order to give the actual data, if the model you were considering was the correct model.
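(To make that concrete: a minimal sketch of maximum likelihood, fitting a normal distribution to some invented residuals. The thing to notice is that you have to write down a probability distribution before you can even start.)

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
residuals = rng.normal(0.0, 0.01, 1000)   # stand-in "errors" from some pricing model

def neg_log_likelihood(params, x):
    """Negative log-likelihood of i.i.d. normal errors with mean mu and std sigma."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)             # parameterise via log(sigma) to keep it positive
    return 0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + ((x - mu) / sigma) ** 2)

result = minimize(neg_log_likelihood, x0=[0.0, np.log(0.02)], args=(residuals,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"MLE estimates: mu = {mu_hat:.5f}, sigma = {sigma_hat:.5f}")
# Everything here leans on the normality assumption written into the likelihood --
# which is exactly the shaky ground the next paragraph is about.
```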
Which is not very satisfactory, given that we are dealing with securities prices here and that one of the things we have known about securities prices since Mandelbrot (an important early influence on Fama) is that when you are making assumptions about their probability distribution, you are on really very shaky ground. Which is why Hansen is getting a share of the prize, despite not appearing on television as much as the other two. The Generalized Method of Moments is much weaker in its assumptions than MLE.
Rather than maximizing the likelihood function of the residuals, what you do is take advantage of the fact that your model will usually define some function of the residuals of which you want the expectation to be zero. You find out what you have to do to the parameters to make this function equal zero, and you have your estimate – in the simple case, without having to make any of the simplifying assumptions of the linear regression model. In the harder case (for example with awkward time series structures, where you would want some function of the correlation of the errors to also be zero), there will be more such functions than you have parameters, so you choose the set of parameters which gets these “overidentifying restrictions” as close to zero as possible. Then Hansen’s methodology tells you how to transform the distance from zero of the minimized functions into a variable that is chi-square distributed, allowing you to test your model after all.
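(For the flavour of it in code rather than prose: a toy two-step GMM with one parameter and two moment conditions – an instrumental-variables setup, which is the textbook case rather than anything specific to asset pricing – finishing with Hansen’s J statistic. All the data is invented.)

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
n = 2000

# Toy data-generating process: x is endogenous (correlated with the error e), and
# z1, z2 are two valid instruments. Two moments, one parameter: overidentified.
z = rng.normal(size=(n, 2))
e = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + 0.8 * e + rng.normal(size=n)
y = 2.0 * x + e                          # true beta is 2.0

def gmm_beta(W):
    """Minimise gbar(beta)' W gbar(beta); linear in beta, so the minimiser is closed-form."""
    a = z.T @ y / n                      # sample analogue of E[z * y]
    b = z.T @ x / n                      # sample analogue of E[z * x]
    return (b @ W @ a) / (b @ W @ b)

# Step 1: identity weighting. Step 2: re-weight by the inverse covariance of the moments.
beta1 = gmm_beta(np.eye(2))
g = z * (y - beta1 * x)[:, None]         # per-observation moment conditions z_t * e_t(beta)
S = g.T @ g / n
beta2 = gmm_beta(np.linalg.inv(S))

# Hansen's J statistic: two moments minus one parameter leaves one overidentifying
# restriction, so J is asymptotically chi-square with 1 degree of freedom.
gbar = (z * (y - beta2 * x)[:, None]).mean(axis=0)
J = n * gbar @ np.linalg.inv(S) @ gbar
print(f"beta_hat = {beta2:.3f}, J = {J:.2f}, p-value = {1 - chi2.cdf(J, df=1):.3f}")
```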
In the context of securities prices, this allows you to build a full model of investment and consumption behaviour and compare the “excessive” variability found in Shiller to the amount of variation in risk appetite which it might actually be reasonable to presume (given that risk aversion is driven by diminishing marginal utility of consumption, and given the actual variability of consumption). And you find that quite a lot, but not all, of the excess variability could plausibly be chalked up to people correctly anticipating the fluctuations in their real consumption, and being more or less willing to bear investment risk as a result.
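(Roughly what that looks like in practice: an Euler-equation moment condition in the Hansen–Singleton spirit, estimated by a simple identity-weighted GMM. Consumption growth and returns here are random numbers, so the estimates mean nothing; with real data this is the calculation that disciplines how much variation in risk appetite you are allowed to assume.)

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
T = 400

# Entirely invented stand-ins for gross consumption growth and gross asset returns;
# the real exercise uses aggregate consumption and stock return data.
cons_growth = np.exp(rng.normal(0.005, 0.01, T))    # c_{t+1} / c_t
returns = np.exp(rng.normal(0.006, 0.04, T))        # gross return R_{t+1}

# Instruments known at time t: a constant plus lagged consumption growth and lagged return.
instruments = np.column_stack([np.ones(T - 1), cons_growth[:-1], returns[:-1]])
cg, R = cons_growth[1:], returns[1:]

def moment_conditions(params):
    """Euler-equation errors delta * (c_{t+1}/c_t)**(-gamma) * R_{t+1} - 1, interacted
    with each instrument; under the model every column should have mean zero."""
    delta, gamma = params
    err = delta * cg ** (-gamma) * R - 1.0
    return instruments * err[:, None]

def objective(params):
    gbar = moment_conditions(params).mean(axis=0)
    return gbar @ gbar                   # identity-weighted one-step GMM, for simplicity

result = minimize(objective, x0=[0.99, 2.0], method="Nelder-Mead")
delta_hat, gamma_hat = result.x
print(f"discount factor = {delta_hat:.3f}, relative risk aversion = {gamma_hat:.2f}")
```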
And so that’s my tour d’horizon of the empirical securities returns research – it’s basically a quite important and very interesting project which (by the end, in the Hansen estimates of generalized consumption-driven asset pricing models) has taken us to some quite deep and interesting places in understanding fundamental things about risk attitudes and utility. Along the way, a lot of very useful techniques were invented – particularly the Generalized Method of Moments – which have been useful in all sorts of other fields. Stripped of all the ideological bullshit, this is a thoroughly deserved Nobel for all concerned.
But … well, “stripped of all the ideological bullshit”, the Nuremberg Rallies were a folk festival. This is economics, and the ideology is not something that can be yaddaed away. As I’ve noted on this blog a load of times, there is, hiding somewhere inside the bloated corpus of economics, a nice and intellectually respectable branch of science struggling to get out. It’s just that, somewhere in the nineteenth century, this lean and useful engineering discipline fell into disreputable company and acquired a huge amount of psychological and philosophical baggage, leaving it in very poor shape to resist the further intellectual depredations of the Cold War. So yeah, “Efficient Markets Theory” … it’s bad.
We’ve discussed its Zombie qualities on a number of occasions (and in the relevant chapter of John’s book), but my favourite trip round this particular mulberry bush came in 2004, when John set out the numerous and important policy debates in which fairly massive and substantial conclusions were deduced from a hypothesis (and, after 1981, a largely falsified hypothesis) about the stochastic properties of prices on the New York Stock Exchange. I said right at the top of the piece that “Properly Anticipated Prices Fluctuate Randomly” can’t be taken to mean “Prices Which Fluctuate Randomly Are Properly Anticipated”, but actually, where this horse and cart was driven to, it looked more like “Because Some Prices Seem To Fluctuate Randomly, No Non-Market-Based Policy Can Be Optimal”. And even beyond that, to various theories of the confidence fairy, under which the main job of government is to act as an investor relations officer for the local treasury bonds – theories which aren’t even consistent with the original thesis about securities predictability. The disastrous metastasis of “random walk” into “efficient markets” is a perfect example of how difficult it is to take the ideology out of economics.
So should we get rid of the prize for the economists “until they can show they are a proper science”? Well, I don’t think so. If you’re going to have economics at all, you’re going to have politicized economics; that’s the nature of the beast. All the sciences have disagreements, and (ugly little secret) in all the sciences those disagreements are resolved by social processes and things which very much resemble politics of some sort or other – this is a difference of degree, not of kind. Economics has the problem to a much greater degree than other sciences, for the fairly obvious reason that it’s the branch of science which has the most to do with the distribution of finite resources in society, so an incorrect view in economics can still be worth pushing in a way in which an incorrect theory of fundamental physics can’t. But this problem isn’t fundamentally one of economics as a science – it’s due to the fact that economics is carried out in the context of society. And really, if someone makes the argument that “third world countries ought to deregulate their capital accounts because most American mutual funds don’t justify their fees”, and society believes them, whose fault is that, really?
 Of course, the ideal would be for the premier global prize in economics to be awarded by the Econometric Society, not least because John is a member of it.
 Brian Josephson (Physics, 1973) and Kary Mullis (Chemistry, 1993)
 He gives academics the popular culture recognition that they’ve been need’n
 Although not necessarily recorded in a consistent, comparable or machine-readable way. One of the big unsung achievements of empirical finance has been the creation of consolidated and “clean” securities returns datasets.
 Log returns, yes yes. Give me a break here, I’m writing for a general audience.
 Where this solecism is used in print, it appears to mean Samuelson’s result about 40% of the time, the Modigliani-Miller theorem about 30%, some version of the Markowitz portfolio theorem 20% and that the author does not have a clue what he is talking about roughly 100% of the time.
 As Samuelson’s concluding paragraphs have it … “I have not here discussed where the basic probability distributions are supposed to come from. In whose minds are they ex ante? Is there any ex post validation of them? Are they supposed to belong to the market as a whole? And what does that mean? Are they supposed to belong to the “representative individual”, and who is he? Are they some defensible or necessitous compromise of divergent expectation patterns? Do price quotations somehow produce a Pareto-optimal configuration of ex ante subjective probabilities? This paper has not attempted to pronounce on these interesting questions”.
 Nearly all this work was done by economists and econometricians, by the way. One area in which economics really can hold its head up high with any “hard” science you care to name is in the invention of useful pieces of statistical toolkit to deal with random variables.
 It’s also technically a beauty, although I’m oversimplifying it mightily here. Wait until I get onto Hansen, then the real intellectual vandalism will start.
 Kidding! Love you guys really.
 Shiller, by the way, is a guy who believes not only that there should be derivatives markets in absolutely everything from GDP to unemployment to average wages, but that everyone, in the sense of the normal middle class, should actively trade on these markets, in the belief that this would allow people to “hedge” their macroeconomic risk exposures. Personally I think this is a world-beater of a Terrible Idea (and take comfort in the fact that no such contract has ever even looked like taking off, with the possible and qualified exception of the Case-Shiller housing index futures). In the context of Shiller’s results on stock market volatility, it really looks to me like a case of “This food is terrible! And everyone needs to eat it in larger portions!”.
 On the other hand, I am not going to cosign stuff like this, of course, but I think this has to be counted as personal and ideological peccadilloes which shouldn’t be taken as detracting from the underlying quality of the work. As I discuss later in this essay, there are admittedly a hell of a lot of such peccadilloes to yaddayadda around, but hey – Kary Mullis had glowing raccoons from outer space.
 For the longest time, the view from Chicago was that the size, value and momentum effects might be proxies for as yet unknown sources of risk, for which the associated excess returns were fair compensation. This was never disproved, but as far as I can tell the Chicago guys just got tired of getting laughed at and kind of gave up on this theory. Nowadays, as I understand it, the view when pressed is that the “anomalies” are genuine features of the dataset which have to be taken into account when testing theories on historical data, but not really part of the underlying model, and maybe they will disappear when we have ten thousand years of equity returns to deal with. At least it’s an ethos.
 So hang on, does this mean that you can beat the market? Basically yup. An awful lot of people writing on this Nobel appear to have learned the 1970s version of Efficient Markets when they were at university or at business school and never kept up with the rest of the literature. If you hold a value, size and momentum-loaded portfolio (and load up on a couple of other factors outside the Fama-French ones, most particularly on stocks with low volatility), and if you rebalance it between stocks and cash when the Shiller dividend and earnings ratios are a long way from long term values, then the current state of science is that you will have constructed a portfolio which on the basis of historical evidence is likely to outperform. And all you need is good enough data to calculate these ratios, trading costs low enough to execute the strategy and enough self-discipline and strength of will to stick to the plan when it looks like it is going wrong (as Shiller’s results demonstrate it will, a lot of the time). Easier to say than do, but there are some people who manage it.
 Does not constitute investment advice, not least because it is couched in such absurdly general terms as to hardly constitute advice at all.
 I once gave advice on a mailing list that
“As far as active investment goes, I always put it this way – are you prepared to put as much time and effort into managing your investments as you would into running a small business? If you are then go for it – playing the market is not a bad hobby, about as interesting as birdwatching or something. And most people on this list actually do have enough intelligence to beat the market and therefore to beat most active-managed funds, in my opinion. The trouble is of course that beating the market doesn’t just require intelligence, it requires self-discipline, hard work and the ability to control your emotions. But in many ways so does success in bird-watching.”
 Not in any regulatory or fiduciary sense.
 Plenty of fast skating over technical material there, but frankly, as a description of the method of maximum likelihood which can be squeezed into two tweets, I think that is pretty bloody good actually.
 Not so good, that one. Five and a half tweets and considerably less clear. Any suggestions for improvement gratefully received. On Twitter itself, I managed
“The expectation of model errors ought to be 0. Their correlation ought to be 0. The extent to which it isn’t is a measure of model fit”
Which is pretty barbaric (quite apart from anything, the residual covariance matrix gives you a measure of significance, not fit), but kind of gives the flavour of it in 140 characters.