Sooooooo, this is a thing that happened …
And the moral is that if you’re in a mood where you’re likely to insult your favourite authors on Twitter, don’t count on them not finding out about it in this modern and interconnected world. I was clearly in an unusually difficult mood that day, as I also managed to piss off Steve Randy Waldman by describing his latest thesis on macroresilience as “occasionally letting a city block burn down in order to clear out the undergrowth”. As with the Taleb quip, I kind of stand behind the underlying point, but almost certainly wouldn’t have said the same thing to the guy’s face, so sorry Steve. In any case, by way of penance I will now write a few things about resilience and unpredictability. Starting with the point that I found “incredibly insightful” in the Taleb extract most recently posted.
The point I really liked was on p454 of the technical appendix (p8 of the .pdf file), which is something I ought to have realised myself during a couple of debates a few years ago about exactly what went wrong with the CDO structure. Translating from the mathematical language, I would characterise Taleb’s point as being that the problem with “fat tails” is not that they’re fat; it’s that they’re tails. Even when you’re dealing with a perfectly Gaussian or normal distribution, it’s difficult to say anything with any real confidence about the tails of the distribution, because you have much less data about the shape of the tails (because they’re tails) than about the centre and the region around the mean. So you end up estimating the parameters of your favourite probability distribution based on the mean (central tendency) and variance (spread) of the data you have, and hoping that the tails are going to be roughly in the right place.
But any little errors you make in estimating the central tendency are going to get blown up to a significant extent when you start using your estimate to say something about the 99th percentile of the same distribution. Which is kind of a problem, since we have a whole financial regulatory infrastructure built up on “value at risk”, a term which effectively means “trying to say something about the 99th percentile of an empirically estimated distribution.”
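To make the point concrete, here is a minimal simulation sketch (Python, assuming numpy and scipy are available; the Gaussian parameters and the 250-observation sample size are purely illustrative, not anything from the post): even in a perfectly normal world, the 99th-percentile threshold you back out of an estimated mean and variance wobbles enough that the “one-in-a-hundred” event you think you have pinned down can easily carry half, or nearly double, the intended probability.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mu, true_sigma, n = 0.0, 1.0, 250     # n is roughly a year of daily observations
true_q99 = stats.norm.ppf(0.99, true_mu, true_sigma)

# Re-estimate mean and standard deviation from many modest samples drawn from the
# *true* distribution, and read the 99th percentile off each fitted normal.
fitted_q99 = np.array([
    stats.norm.ppf(0.99, s.mean(), s.std(ddof=1))
    for s in (rng.normal(true_mu, true_sigma, n) for _ in range(10_000))
])

print(f"true 99th percentile: {true_q99:.3f}")
print("middle 90% of fitted 99th percentiles:",
      np.round(np.percentile(fitted_q99, [5, 95]), 3))

# How much probability actually sits beyond each fitted 'one-in-100' threshold?
implied_p = stats.norm.sf(fitted_q99, true_mu, true_sigma)
print("actual exceedance probability of the fitted threshold, middle 90%:",
      np.round(np.percentile(implied_p, [5, 95]), 4))
```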
The deep point I see here is that it’s not worth getting worked up about “fat tails” specifically, or holding out much hope of being able to model financial (and other) risks better by changing one’s distribution assumptions. A little bit of model uncertainty in a normal context will do all the same damage as a massively fat-tailed underlying distribution. And the thing about model uncertainty is that it’s even more toxic to the estimation of correlations and joint probability distributions than it is to the higher percentiles of a single distribution. Even at this late stage, it really isn’t obvious whether the large movements in CDO values in 2007-9 were caused by a sudden shift in default correlation[1], by a correlation that had been misestimated in the first place, or by an episode of model failure that looked correlated because it was the same model failing in every case.
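A sketch of that “model uncertainty does the same damage as fat tails” claim (again Python, again with made-up numbers; the roughly 20% jitter on the volatility is just for illustration): each draw below is individually Gaussian, but letting the scale parameter wobble a little from draw to draw produces the excess kurtosis and the surplus of four-sigma events normally blamed on a fat-tailed distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000

pure = rng.normal(0.0, 1.0, n)                  # the textbook Gaussian benchmark

# Each draw is Gaussian, but the volatility itself is uncertain (lognormal jitter,
# roughly +/-20% -- an arbitrary, illustrative amount of "model uncertainty").
sigma = np.exp(rng.normal(0.0, 0.2, n))
mixed = rng.normal(0.0, 1.0, n) * sigma

for name, x in [("pure Gaussian", pure), ("Gaussian, uncertain sigma", mixed)]:
    z = (x - x.mean()) / x.std()
    print(f"{name:26s} excess kurtosis {stats.kurtosis(z):5.2f}   "
          f"P(|z| > 4) {np.mean(np.abs(z) > 4):.1e}")
```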
The basic problem here is that in a wide variety of important cases, you just don’t know what size or shape the space of possible outcomes might have. At the root of it, this is the basis of my disagreement with SRW too – because we have so little reason to be confident at all in our ability to anticipate the kind of shocks that might arrive, I always tend to regard the project of designing a “resilient” financial system that can shrug off the slings and arrows as being more or less a waste of time. So, should we give up on any sort of planning?
Well … I think we can do a little better than that. And I think that the way we do it is by stepping away from stochastic thinking altogether, which was the subject of my (and Alex Delaigue’s) argument with Taleb over the wine/cloth model of Ricardian trade. If you’re thinking about resilience as an activity rather than a structure, you don’t necessarily want to explicitly model stochastic factors in a model in which the parameters of interest are endogenous. Not because the world isn’t random, but precisely because it is.
Long time readers will recall that with respect to the “fat tails” debate, I occupy something of an extreme left-wing position. I don’t think that big stock market crashes or bond defaults are “tail events”, because I don’t think there’s any probability distribution for them to be in the tails of [2]. They’re not draws from any underlying distribution[3]. They’re individual historic events which have individual causes and effects. In so far as we understand them at all, then we understand them by their family resemblance to other things that have happened in the past, but this doesn’t at all mean that our way of reasoning about the likely future effects of (say) a combination of rising real estate prices, stagnant real incomes and increasing debt ought to be expressed in the form of a stochastic process.[4]
How are they expressed then? Well, this was the root of (my side of) the debate over the Ricardian wine/cloth model, and in so far as I have one, this is my defence of economic theory. Because in the particular sphere of economics, our valid generalisations about the class of unrepeatable, individual historic events that we have to live through are expressed as statements of macroeconomic theory. Specifically (and I do not propose to get into a big argument about this in the comments; my only real argument with either New Classicists or Austrian business cycle theorists is: guys, you had your chance and you blew it), Keynesian macroeconomic theory.
I think people are underestimating quite how well-tested Keynesian theory is, by now, in the Popperian sense. It not only works, as shown in dozens of recession cases; it’s also seemingly the only thing that works, as demonstrated by many of the same cases. It explains why alternative solutions don’t work, and nearly all of its bad effects exist only in hypotheticals and fairy stories. It doesn’t provide particularly accurate point estimates of future states of the economy’s stochastic process, but as suggested above, I don’t think that’s a reasonable thing to ask for. If this was a medicine, you’d have stopped giving placebo to the control group by now on grounds of pure ethics. Certainly, human behavioural and institutional regularities might change, but we can deal with them when they do. For the time being, Keynesian theory (and I emphasise theory here – Simon Wren-Lewis has changed my mind on the value of empirics without theory, which would have given you completely the wrong steer in terms of guesstimating the fiscal policy multiplier) is a toolkit that works.

We should be dealing with the future by predicting it based on the best theory we have, coupled with a sensible range of assumptions for the unpredictable course of events, and making policy in response to what we think is going to happen. Sometimes we will be wrong, sometimes we will be disastrously wrong, but I don’t think anyone thought that this was going to be easy. Resilience and robustness in economic arrangements aren’t basically engineering concepts – you guess the range of storms that might hit your levee, then build it for twice or three times the worst case. They’re much more like steering a ship into a difficult harbour with moving sandbanks; you react to conditions as they occur, based on training, understanding and common sense.
Which ends up being my reply to SRW too. We know how to build “macroresilience”. The combination of an activist central bank and an activist fiscal policy is really very macroresilient. The housing bubble could and should have been popped; the internet bubble burst without any real bad effects. Correct policy works. There’s no need to fiddle around trying to massively overhaul all sorts of economic institutions; we just need to work properly with the ones we have. And conversely, in an environment where the fiscal policy apparatus is in the hands of a wildly dysfunctional and sclerotic (USA) or fundamentally ideologically misguided (UK) political system, there’s not much else that can be done without fixing the big problem first.
[1] the concept of a “shift” in a parameter like correlation is on really quite dodgy epistemological ground and Taleb gives it another deserved kicking in the excerpt.
[2] Personal hobby horse alert – economists, decision theorists and moral philosophers are always helping themselves to probability distributions where there’s no very obvious reason to presume that they exist at all, let alone exist in the well-behaved and stable form that would be required for the “expected utility” of one or another course of action to exist. There’s just no reason to believe a priori that the set of possible outcomes can be put in one-to-one correspondence with the real numbers.
[3] Not even some mega-mega-mega hierarchical True Probability Distribution known only to God and Victor Niederhoffer. My slogan has always been that the Great Depression and the Russian Revolution did actually happen – they weren’t just particularly inaccurate observations of the true underlying economic growth rate.
[4] This is the difference between my background in financial markets and NNT’s. I don’t come from a derivatives background – I made my career in cash equities research. That’s a part of the industry which requires surprisingly little in the way of advanced mathematics, but a whole load of institutional detail and accumulated newsflow, a certain streetwise instinct for the way the world works, and a constant willingness to accept that if you’re only wrong half the time, you’re doing well[5]. We deal in personalities, educated guesses and rules of thumb. Scientists tend to hate us, and genuinely rigorous financial economists often spend a surprising amount of their time proving to themselves that we add no value and are just meaningless journalists[7]. On the other hand, cash equities brokerage is one of the only business lines in investment banking where people regularly set up in business using their own money.
[5] Because you put more money on when you’re right and take it off the table when it appears that, for whatever reason, you’re wrong.[6]
[6] Or some synonym for “wrong” such as “early”, “seeing something the market doesn’t”, “contrarian”, “concentrating on fundamentals”, “aware of the bigger picture”, etc. Personally I’m never wrong. Sometimes a bit early, sometimes really quite a lot early but never wrong.
[7] Current state of science, by the way, suggests that good analysts do beat the market, on average, by a bit, and that one period’s performance is systematically maintained in future periods. But that all of the returns to this ability tend to accrue to the people who have it rather than to people who passively hand over their cash, don’t monitor it and expect to be handed an above-market return, preferably on a silver platter. This fact surprises and appals economists more than you’d think it might.
rootless (@root_e) 11.07.12 at 10:46 pm
Past performance is not necessarily indicative of future returns.
Isn’t the bigger problem that one is looking at a slice of historical data and then extrapolating from that under the assumption that the data is not a historical slice but a representative sample? For example, I collect daily temperature fluctuations in Minnesota in July and produce a model to predict temperature change – not realizing that there are seasons. How do you know you have a sufficient number of years and not just a month? And if you have yearly data, how do you know there are not secular events that will change the distribution even in that sample?
Another excellent example of this problem can be found in the risk assessments for nuclear power plants where 30 years of data in the interval between 800 year tidal waves does not give you a good statistical basis.
MattF 11.07.12 at 11:11 pm
You can do probability without a model, but I don’t think you can do statistics without a model. This distinction applies specifically in the ‘tail’ of a distribution: how do you distinguish between events that are ‘low-probability’ and events that are outside the model (a.k.a. ‘outliers’)? Well, without a model, you can’t.
Daniel 11.07.12 at 11:12 pm
For example, I collect daily temperature fluctuations in Minnesota in July and produce a model to predict temperature change – not realizing that there are seasons. How do you know you have a sufficient number of years and not just a month?
Looking at a calendar often suffices ;-)
This is what I mean by needing theory as well as empirics. But at the end of the day, it is not really moving the ball down the field all that much to scratch one’s chin and note that a levee built to withstand a 100 year flood is going to be kind of fucked in the next 200 year flood. At some point, as Wittgenstein said, explanations have to come to an end.
BenP 11.07.12 at 11:13 pm
“I think people are underestimating quite how well-tested Keynesian theory is, by now, in the Popperian sense. It not only works, as shown in dozens of recession cases, it’s also seemingly the only thing that works, also demonstrated by many of the same cases”
What about 1970’s stagflation, didn’t this put Keynesianism to the test, which it failed?
david 11.07.12 at 11:18 pm
I think Daniel means New Keynesianism?
Dr. Hilarius 11.07.12 at 11:19 pm
The problem with controlled burns, in addition to the immediate damage, is that sometimes they can’t be controlled. To extend the analogy, consider someone storing volatile materials in their basement. If a fire takes place it may well take out surrounding structures. That’s why there are regulations about how much volatile stuff you can store. Isn’t preventing the risk of fire the equivalent of clearing out the underbrush?
Phil 11.07.12 at 11:22 pm
economists, decision theorists and moral philosophers are always helping themselves to probability distributions where there’s no very obvious reason to presume that they exist at all
Sigh… it’s a bit reminiscent of webbies’ old habit of helping themselves to power laws on equally slim evidence (on which see Tom Slee, among others), although a bit more understandable – at least you can do something useful with a normal distribution.
rootless (@root_e) 11.07.12 at 11:23 pm
On the other hand, it is important to distinguish between a real probability and a more dubious one. My decision on getting a flu shot or the reliability of an automobile brake involves some uncertainty but can be based on actual probabilities with controlled and mostly known variables. My decision on the safety of a nuclear power plant or the risks of a banking institution might use a similar-looking measure, but relying on that measure involves a larger leap of faith.
david 11.07.12 at 11:23 pm
Anyway, a couple of remarks:
Activist New Keynesian monetary policy is indeed effective; it does, however, have an “and then a miracle occurs” bit in which we assume that we successfully raise inflation expectations. We don’t actually know what definitively raises inflation expectations. If we rely on the traditional levers – overnight interbank rates and short-term targets – then we have to have a functioning financial system amidst possible crisis, which entails a Greenspan put. And that does spiral into ever-increasing amounts of bets on extreme outcomes.
Small open economies – where a lot of econometric data is generated – are optimistic about it and generate good data supporting it. Of course they do! It is far easier to pull the exchange rate to and fro for them. Just have the Treasury buy up or sell tons of forex. They actually do helicopter dropping. The big nations can’t do this, however. It is those institutional factors that make their recessions seem sui generis.
david 11.07.12 at 11:29 pm
Re: fn 7, this is a little hedgily stated. It is true that even the likes of Fama acknowledge this; however, it is usually stated in the form of the weak EMH being strictly false but not false enough to be larger than the stockbroker’s commission. The implication being, of course, that if you were the stockbroker picking your own funds, you’d beat the market, at the expense of your own effort. Your excess market return would just be your wage income, though.
So the present state of the science does place limits on how false the EMH can be.
Daniel 11.07.12 at 11:36 pm
What about 1970′s stagflation, didn’t this put Keynesianism to the test, which it failed?
No, it passed.
So the present state of the science does place limits on how false the EMH can be.
Since the weak and semi-strong forms of EMH have all been empirically falsified, this is a case in which IMO the theory isn’t helping any more.
bob mcmanus 11.07.12 at 11:44 pm
What about 1970′s stagflation, didn’t this put Keynesianism to the test, which it failed?
I think so, just as I think Keynesianism, both New and Krugman’s simplified ISLM, are failing and have failed this time.
Because it was forgotten or ignored.
A good economic theory or system should not consider the politics and acceptance of it as something exogenous to itself. It should “sell itself”, or explain why it doesn’t sell at a specific time to a given audience, and contain methods and tools to overcome resistance or mitigate against fools.
Kalecki, Minsky and Marx made sure to include politics in their political economy.
And no matter how early Krugman understood the zero-bound, the economies and people suffered anyway. His paper on Japan was out there, let alone fifty years of post-Keynesianism. It was ignored. That is his failure, as much as a failure of the politicians.
david 11.07.12 at 11:46 pm
I wouldn’t describe the present state of the EMH literature like that – the weak and semi-strong forms have a lot of evidence for the EMH in the “below the trading fees” sense.
Rather what has happened is that the literature spun off into what Larry Summers called observing that a two-quart bottle of ketchup costs twice as much as a one-quart bottle and using that as the definition of efficiency. Relative prices of liquid assets are indeed efficiently priced, but there’s very little miraculous revelation going on here.
This distinction matters for Keynesianism: it means that we can use large macroscopic levers and let the market sort out the structural adjustment. Macroscopic failures can happen, and we don’t have to worry that much about microeconomic issues: the market will sort it out.
leederick 11.08.12 at 12:03 am
“Even when you’re dealing with a perfectly Gaussian or normal distribution, it’s difficult to say anything with any real confidence about the tails of the distribution… So you end up estimating the parameters of your favourite probability distribution based on the mean (central tendency) and variance (spread) of the data you have, and hoping that the tails are going to be roughly in the right place.”
No, the mean and variance are sufficient statistics for a normal distribution.
david 11.08.12 at 12:05 am
Hmm. Leederick is right. The actual problem is confirming whether you are indeed working with data generated from a normal distribution. But if you are already certain, then mean and variance are sufficient to characterize the tails.
bianca steele 11.08.12 at 12:30 am
Nice to see Stephen Jay Gould’s honor has been unbesmirched once more. Though isn’t biology due a second apostle to the humanists by now?
bob mcmanus 11.08.12 at 12:32 am
And conversely, in an environment where the fiscal policy apparatus is in the hands of a wildly dysfunctional and sclerotic (USA) or fundamentally ideologically misguided (UK) political system, there’s not much else that can be done without fixing the big problem first.
This, as I understand Keynes after Economic Consequences of the Peace, gets it exactly backwards. He wanted to create an economic science that prevented and forestalled catastrophic political dysfunction.
I thought that was the goal of all macroeconomists.
I like SRW. I think he is flailing around some, but I think he is addressing the underlying problem of “why austerity and how do we make sure it doesn’t happen again” in ways most New Keynesians are failing to do.
This post seems to approach some kind of Paul Davidson non-ergodicity, but then backs away from the stabilization tools that the Post-Keynesians think are necessary.
sometimes we will be disastrously wrong
The collateral damage is unnecessary and unacceptable.
thomas 11.08.12 at 12:35 am
@leederick, @david:
Yes, the mean and variance are sufficient statistics (in the iid case), but the predictive distribution of the next observation (in the iid case) is still a heavy-tailed distribution: a t distribution. If the sample size is small, this t distribution is quite heavy tailed.
[In the non-iid case, firstly, the effective sample size can be quite small even if the number of observations is large, and secondly, you need to estimate the whole mean and covariance structure, so everything is the same only incomparably worse. Also ‘multivariate normal’ is a much stronger assumption than marginally normal]
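For what it’s worth, thomas’s point is easy to check numerically (a Python sketch; the “4 sigma” threshold and the sample sizes are arbitrary choices of mine). With mean and variance estimated from n iid normal observations, the predictive distribution of the next observation is a scaled t with n−1 degrees of freedom, and for small n its far tail can be an order of magnitude heavier than the plug-in normal suggests.

```python
import numpy as np
from scipy import stats

# Predictive distribution of the next observation from an iid normal sample of size n,
# with both parameters estimated:  (x_new - xbar) / (s * sqrt(1 + 1/n))  ~  t(n - 1).
z = 4.0                                  # a "4 sigma" event according to the plug-in normal
for n in (10, 30, 250):
    scale = np.sqrt(1 + 1 / n)
    p_plugin = stats.norm.sf(z)
    p_predictive = stats.t.sf(z / scale, df=n - 1)
    print(f"n = {n:3d}   plug-in normal P(>4s) = {p_plugin:.1e}   "
          f"predictive t P(>4s) = {p_predictive:.1e}")
```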
Barry 11.08.12 at 12:48 am
MattF 11.07.12 at 11:11 pm
” You can do probability without a model, but I don’t think you can do statistics without a model. This distinction applies specifically in the ‘tail’ of a distribution: how do you distinguish between events that are ‘low-probability’ and events that are outside the model (a.k.a. ‘outliers’)? Well, without a model, you can’t.”
Try it. The closest I can see is using the empirical distribution function from the data, which (a) interpolates between points in some way (i.e., a model) and (b) assumes that the past can be used to predict the future (which really is an incredibly strong assumption).
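An illustration of that second point (Python; the thousand Gaussian “observations” are invented for the example): the empirical distribution function can only report frequencies it has already seen, so past the range of the data its tail estimate is literally zero, however often such events actually occur.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, 1_000)        # a 'history' of 1,000 observations

def ecdf_tail_prob(x, threshold):
    """Model-free estimate of P(X > threshold): the observed relative frequency."""
    return np.mean(x > threshold)

for t in (2, 3, 4, 5):
    print(f"P(X > {t}):  ECDF estimate {ecdf_tail_prob(data, t):.4f}   "
          f"true Gaussian value {stats.norm.sf(t):.1e}")
# Beyond the sample maximum the 'model-free' answer is exactly zero -- which is
# itself a model, plus the strong assumption that the past resembles the future.
```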
Barry 11.08.12 at 12:50 am
Daniel: “Even when you’re dealing with a perfectly Gaussian or normal distribution, it’s difficult to say anything with any real confidence about the tails of the distribution… So you end up estimating the parameters of your favourite probability distribution based on the mean (central tendency) and variance (spread) of the data you have, and hoping that the tails are going to be roughly in the right place.”
leederick : No, the mean and variance are sufficient statistics for a normal distribution.
Run a normal plot with confidence intervals, and watch what they do at the first and 99th percentiles. ‘Sufficient’ in a statistical sense is not always ‘sufficient’ in a practical sense.
Barry 11.08.12 at 12:53 am
bob mcmanus:
“This, as I understand Keynes after Economic Consequences of the Peace, gets it exactly backwards. He wanted to create a economic science that prevented and forestalled catastrophic political dysfunction. ”
He got it rather strikingly right – the Great Depression put the Nazis in power. They were the sort of nasty scum who frequently flourish when things are trashed.
It’s a two-way street – economic dysfunction aids extremists and political dysfunction, and the latter aids economic dysfunction.
david 11.08.12 at 12:55 am
@thomas
In that case we no longer have a perfectly normal distribution, as Daniel specified. But even if we are obliged to pick our mean and variance from a small sample, we can still put tight limits on the resulting t-distribution. Its tails are indeed a little heavier, but not so heavy that we can’t tightly characterize it. When we speak of “heavy tails” in the Taleb sense, we are referring to distributions whose tails are so wild that we have undefined moments and failures of the weak law of large numbers.
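The distinction david is drawing can be shown in a few lines (Python; the seed and sample sizes are arbitrary): a t with even a handful of degrees of freedom is heavier-tailed than a normal but still obeys the law of large numbers, while a Cauchy (the t with one degree of freedom) has no mean for the running average to converge to.

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [10 ** k for k in range(2, 7)]           # 100 up to 1,000,000 draws

t9 = rng.standard_t(df=9, size=sizes[-1])        # mildly heavy-tailed: mean and variance exist
cauchy = rng.standard_cauchy(size=sizes[-1])     # Taleb-land: the mean is undefined

for n in sizes:
    print(f"n = {n:>9,d}   running mean of t(9): {t9[:n].mean():8.4f}   "
          f"of Cauchy: {cauchy[:n].mean():10.2f}")
# The t(9) average settles down near zero; the Cauchy average never does.
```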
Eli Rabett 11.08.12 at 1:05 am
This is not a new problem. There are reams of books on the problems with estimating the tails in atomic and molecular absorption lines and how that affects such issues as the temperature in Minnesota (no joke, the effect of the tails is significant because they are not saturated). For giggles, google – water vapor continuum problem. Same thing, different place, and no, if the continuum goes out to infinity you are in deep trouble.
William Timberman 11.08.12 at 1:27 am
bob mcmanus @ 17
I’m grateful that Krugman, DeLong, Thoma, Quiggin, d-squared, et al. understand the principles of a macroeconomic policy which actually works, and I’m also grateful to them for being so graciously willing to teach those principles to the hordes of us who don’t already understand them.
Still, your observation reminds me why I develop a rash every time Brad DeLong gets out his banjo and favors us with a selection from his Marx-was-wrong golden oldies songbook. That Marx was wrong about what he was wrong about seems more or less beyond dispute, but I do hate to see everyone — particularly economists — consistently dismiss or trivialize what he was right about as though there’s some superior wisdom available to replace it. Politics is indeed where the rubber meets the road, and any economic policy which treats the welfare of a majority of the world’s inhabitants as some kind of unavoidable collateral damage is bound to concede its supremacy eventually. One hopes that they don’t wait till they’re being dragged through the streets in a tumbril to do so.
William Timberman 11.08.12 at 1:52 am
Grammar failure. How about: …those responsible for any economic policy which treats the welfare of a majority of the world’s inhabitants as some kind of unavoidable collateral damage are bound to concede its supremacy eventually.
I should probably also add that I don’t see revolution, with or without tumbrils, as the most likely outcome of the present tendency toward pig-headed stupidity and sociopathy of international capitalism and its servitors. There are all sorts of other nasty outcomes possible, but I just can’t see how we can be certain that we’ll avoid any of them, let alone all of them, unless we manage to come up with something a little more effective than lecturing the custodians of the status quo.
nnyhav 11.08.12 at 2:37 am
David Warsh on Gary Gorton (Oct 7):
http://www.economicprincipals.com/issues/2012.10.07/1424.html
“It turns out the villain in the DSGE approach is the S term, for stochastic processes, meaning a view of the economy as probabilistic system that may change one way or another, depending on what happens to it, as opposed to a deterministic one, in which everything works in a certain way, as, for instance, the transmission of a car. That much certainly makes sense to most people. It is only when economists begin to speak of shocks that matters become hazy. Shocks of various sorts have been familiar to economists ever since the 1930s, when the Ukrainian statistician Eugen Slutsky introduced the idea of sudden and unexpected concatenations of random events as perhaps a better way of thinking about the sources of business cycles than the prevailing view of too-good-a-time-at-the punch-bowl as the underlying mechanism.
“But it was only after 1983, when Edward Prescott and Finn Kydland introduced a stylized model with which shocks of various sorts might be employed to explain business fluctuations, that the stochastic approach took over macroeconomics. The pair subsequently won a Nobel Prize, for this and other work. (All this is explained with a reasonable degree of clarity in an article the two wrote for the Federal Reserve Bank of Minneapolis in 1990, Business Cycles: Real Facts and a Monetary Myth). Where there had been only supply shocks and demand shocks before, now there were various real shocks, unexpected and unpredictable changes in technologies, say, or preferences for work and leisure, that might explain different economic outcomes that were observed. Before long, there were even “rare economic disaster” shocks that could explain the equity premium and other perennial mysteries.
“That the world economy received a “shock” when US government policy reversed itself in September 2008 and permitted Lehman Brothers to fail: what kind of an explanation is that? […]
“Only a theory beats another theory, of course. And the theory of financial crises has a long, long way to go before it is expressed in carefully-reasoned models and mapped into the rest of what we think we know about the behavior of the world economy. ”
CDOs and VaR, whole nother thang. Maybe later.
Watson Ladd 11.08.12 at 3:44 am
My impression is that Bayesian models could tell you not only what the model estimated as the probabilities of events, but how those probabilities would change with new evidence. So when they are used for searching for valuable items lost at sea you get predictions such as “we will find the item in five days 90% of the time. If not, the search will take ten additional days 90% of the time” etc. The rare case where we don’t find it at all is completely accounted for, and you can also estimate sensitivity to model parameters (see the sketch after this comment). So I’m not sure what shortcoming is being argued about.
As for your argument on history, no Keynesian has an answer for why wages have stagnated for the 30 years after we were all Keynesians. History means politics, means class struggle, and from that we have Marxism.
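A toy sketch of the Bayesian search bookkeeping Watson Ladd describes in his first paragraph (Python; the three search areas, the prior and the detection probabilities are all invented for illustration, and real search theory applies the same update to much richer maps):

```python
import numpy as np

# Prior belief over where the lost item is, and the chance of spotting it on a
# given day if we search the right area (all numbers invented for illustration).
belief = np.array([0.5, 0.3, 0.2])
p_detect = np.array([0.8, 0.6, 0.4])

p_still_missing = 1.0
for day in range(1, 11):
    area = int(np.argmax(belief * p_detect))   # search where the expected payoff is highest
    p_find_today = belief[area] * p_detect[area]
    p_still_missing *= (1 - p_find_today)
    # Bayes update after a failed search of `area`: it is now less likely to be there.
    belief[area] *= (1 - p_detect[area])
    belief /= belief.sum()
    print(f"day {day:2d}: searched area {area}, P(found by now) = {1 - p_still_missing:.3f}")
```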
Daniel 11.08.12 at 6:27 am
I wouldn’t describe the present state of the EMH literature like that – the weak and semi-strong forms have a lot of evidence for the EMH in the “below the trading fees” sense.
Seriously, this view is ten years out of date in terms of the econometric literature (or even more – Lo’s “A Non-Random Walk Down Wall Street” was published in the mid 90s).
No, the mean and variance are sufficient statistics for a normal distribution.
So all you need is a perfectly accurate estimate of the mean and variance and you are all set. How do you get one of those? The irrelevance of this point is explained clearly in the Taleb book at the page cited.
david 11.08.12 at 8:10 am
So don’t write “even when you’re dealing with a perfectly Gaussian or normal distribution”, if your contention is that you can’t know the distribution beforehand.
As for the EMH, I am aware of the finance literature, thank you, and not just the literature from pop-econ books. The straightforward statement “deviations from the weak form exist and are statistically significant, but they are not very large” holds. Lo’s model walks right into this hole: it is hardly impressive to identify small systematic behaviors when Fama conceded in 1974 that such small predictable patterns cannot be arbitraged away and would probably exist. Trivial calendar anomalies and all are already statistically significant, never mind some hypothetical pattern dug up through overfitting.
“But the weak EMH isn’t strictly true!” – so what? It’s true enough for our purposes, namely to assert that financial assets can be treated as homogenously maximizing “the” market NPV. This wasn’t taken as true five decades ago, which is why the post-Keynesians, which split off then, still hyperventilate over capital aggregation, price stability, and gross substitution. We suppress all that by invoking arbitrage and the weak EMH. We assume it is sensible to talk about “the” interest rate, “the” rental rate on capital, “the” tradeoff between liquidity, volatility, and returns. In endorsing modern Keynesian outlooks you are already buying into the bulk of what the EMH implies about reality. The canonical three-equation New Keynesian model freely builds it in.
The real literature action nowadays is in the market model being appended to the dual hypothesis, since we all want to get away from the ketchup problem. But that has increasingly little to do with Fama’s weak/semi-strong/strong formulation from the 70s.
Walt 11.08.12 at 8:41 am
david, your description of the literature is really very wrong. There aren’t gigantic anomalies that persist forever — traders are not actually stupid, and can read academic papers — but there are persistent anomalies that don’t have clear explanations. The way current empirical research in equity prices works is to simply take the known anomalies as given (the Fama-French 4 factor model is the typical method), and then see if there’s anything puzzling over and above that.
Also, “the” market NPV requires complete markets for all current and future contingencies. When did the final market necessary for completeness open? I missed the announcement.
Your comment about the post-Keynesians misses their point, which has nothing to do with arbitrage or the EMH. Macroeconomists build their models to have certain properties that are unlikely to hold in the real world. They use a homogenous capital good, which implies a bunch of facts that are not general properties of GE models. This is clearly explained in Mas Colell, Whinston, and Green in their chapter on infinite-horizon models.
david 11.08.12 at 9:31 am
No: the Fama-French multi-factor model is a market model. It is entirely compatible with the EMH; the ‘anomalies’ there are efficient if they are, as Fama and French argued, proxies for risk.
I’m sure you’re aware that ‘anomalies’ in the sense of asset pricing market models refer to deviations from the CAPM, and ‘anomalies’ in the sense of the EMH refers instead to apparently arbitrage-able pricing inefficiencies. These are different branches of the financial literature. The CAPM is way stronger than plausible, so ‘anomalies’ from it are only to be expected.
There is no need for complete Arrow-security contingent-over-all-states-of-the-world markets to speak of a unified market interest rate and time discount – there only needs to be arbitrage between existing markets.
The old Keynesians, and the post Keynesians today, were actively skeptical about financial market pricing. Think Chapter 12 beauty contests. It is a sign of how much the EMH has already pervaded the zeitgeist that we think of the market as exhibiting anomalies ‘from’ the EMH than of how bizarre the EMH is to begin with. If you think this was an insignificant debate, recall the titanic 70s fight over the prospect of exchange rate floating and what effects macroeconomists predicted it to have. Today currency crisis models all assume forward-looking rational etc. investors.
dsquared 11.08.12 at 9:31 am
I certainly should have said what I said. Even if you know that a series is generated by a machine that can only produce Gaussian distributions (like a bagatelle table), if there is any instability or uncertainty attached to the parameters of that machine, the series is going to behave as if it has fat tails. This is explained clearly in the Taleb extract and as Eli says it is a well known issue in natural sciences.
david 11.08.12 at 9:42 am
Um… then the series is no longer generated by a normal? Which is defined with its constant mean and constant variance?
Look, it’s really easy to throw normal processes together and get a heavy-tailed distribution out. The infamous Cauchy is just the ratio of two IID normals, and it doesn’t even have a mean, just a central tendency. But I don’t go around saying that when working with Cauchy, I am working with “perfectly Gaussian or normal distribution[s]”. Because it isn’t in fact normally distributed, that’s the point.
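In case anyone wants to see the Cauchy-from-normals fact for themselves, a quick check (Python, arbitrary seed): the ratio of two independent standard normals reproduces the standard Cauchy’s quartiles and a perfectly sensible central tendency, while its extremes are wild enough that no mean exists.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
ratio = rng.normal(size=1_000_000) / rng.normal(size=1_000_000)

print("empirical quartiles of the ratio:", np.round(np.percentile(ratio, [25, 50, 75]), 3))
print("standard Cauchy quartiles:       ", stats.cauchy.ppf([0.25, 0.5, 0.75]))
# The middle of the distribution is perfectly tame; the extremes are not,
# which is why the mean does not exist.
print("most extreme draws:", np.round(np.sort(ratio)[[0, -1]], 1))
```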
Hawkeye 11.08.12 at 9:46 am
Daniel: “the internet bubble burst without any real bad effects.”
This BoE paper from last year suggests that this was owing to equity destruction, where losses were borne by those most able to “suck it up”:
“there was no contagion from financial distress at individual firms to broader financial system distress. This is probably due to the manner in which the prior expansion of balance sheets had been financed. The losses from the bursting of the tech bubble were borne primarily by equity, and to a lesser extent, bond-holders — ie pension funds and insurance companies and wealthy individual investors. Because banks had played a peripheral role in financing the expansion, their balance sheets were left largely unscathed by the fallout, which meant credit supply was largely unaffected.” [p17]
http://www.bankofengland.co.uk/publications/Documents/fsr/fs_paper10.pdf
But bursting the housing bubble probably would have meant debt destruction (i.e. bank credit). All policy initiatives since the 1980s seem designed to protect lending (credit expansion) at all costs; even if this means Austerity, Inflation, or other methods for socialising / postponing losses.
Do we just need to implement a wholesale Debt for Equity swap to ensure macroresilience?
david 11.08.12 at 10:05 am
I get the point you were trying to allude to – it’s really hard to identify and parametrize a distribution merely by examining the data, particularly if you are most interested in the behavior of the extreme outcomes. This is indeed well-known. But it is a problem that exists when you know that you are ignorant of the generating process.
If you have strong theoretical reasons to suspect a particular distribution, then it is generally easy to estimate its parameters, and one’s uncertainty over the tails is tightly limited. Suppose it is perfectly normal; then what we get via sample estimates is a t distribution. A t is heavy-tailed but not uncontrollably so: the information contained in the central areas does provide enough data to constrain the behavior of the tails, so in fact working with a perfect Gaussian with a mean/variance that is thought to be constant, but whose value is unknown and only estimated, doesn’t stray into Taleb-land.
As is usual for Taleb, his point is valid but encourages a certain carelessness (e.g., remember that the argument about fitting heavy tails applies just as well in converse, i.e., in arguing that there is an empirical heavy tail to begin with. This sinks a lot of power-law claims, for starters).
dsquared 11.08.12 at 10:25 am
I don’t think anyone is confused about the actual issue here so will sign out of this terminological quibble by saying that if it’s really true that when you are modelling the ratio of two normally distributed series, you go around saying “oh yes, I’m working with the Cauchy distribution you probably haven’t heard of it”, then we probably shouldn’t do lunch. #quanthipsters
david 11.08.12 at 10:27 am
I’m now sad that’s not a real hashtag ;)
Trader Joe 11.08.12 at 12:34 pm
Fat tails and market crashes
The theories of distributions and tails are interesting white paper fodder but they fail to deal with the fact that an increasing portion of the market is traded by machines that don’t have any sense of what they are looking at but constantly recalibrate to the data they are fed….when markets start to avalanche the machines move the mid-point of the distribution in the direction the market is moving effectively fattening the tail relative to the prior mid-point.
Since humans can’t work faster than the machines, human traders exacerbate the shift by anticipation….which is to say, they ‘sell ahead’ of shifts (or perceived shifts) thus creating the shifts that the machines accelerate…this continues until values get so out of whack that natural buyers come in to absorb the excess supply.
The increased volume of program driven “black box” trading creates increased volatility as a natural by-product….hence: more black swans.
…to inject the theoretical, if a market had 99 machines and 1 real human making trades – the human would always beat the market because he could instigate all of the volatility.
2 cents from the real world.
P.S. 99% of the real humans in the trading turrets couldn’t spell Gaussian if you spotted them the consonants….it’s the boys up in the algo-programming rules that eat that stuff like worms eat dirt.
Bunbury 11.08.12 at 12:51 pm
Do we really have a sufficient tool for macroresilience? Didn’t we have to learn a painful lesson about the limits of such activism in the 70s? Isn’t the repeated application of central bank and fiscal activism what brought us to the zero lower bound we are at now and which despite the existence of potential Cassandras like Krugman we have little experience of getting out of? (For an example of the depth of problems, in the UK for the first time in a generation pension funds have more investment in bonds than in equity so any quick exit from the low interest rate environment will exact a price from institutions that historically have not been expected to pay.)
Which is to say that while economists certainly grow too attached to their favourite mathematical tools they are neither a necessary nor a sufficient condition for epistemic hubris. And while I think the point about tails being tails is good and well worth making (cf. the difference between 95% daily VaR as originally adopted by JPMorgan and the 99.5% annual VaR supposed to govern the solvency of insurance companies under Solvency II – the deliberate plan to bring Basle II to insurance regulation) I’m not sure that model and estimation uncertainty are really the best place to look for explanations of the CDO crisis.
Bunbury 11.08.12 at 1:27 pm
@WatsonLadd Bayesian modelling does at least admit the uncertainty of parameter estimation but in practice learning is slow and incremental, there is a tendency to use over-simple models and prior expectations to fit the modelling task into the available modelling effort and since the model is usually also the eye through which reality is perceived the interpretation can be that which is least surprising from the model’s point of view.
@TraderJoe Programmatic trading certainly played a role in Black Wednesday and the Flash Crash but, excusing the crisis for a moment, aren’t equity volatilities actually going down?
Trader Joe 11.08.12 at 1:44 pm
Bunbury
At this moment, equity volatilities are at the bottom of a 5 year range…but that’s a range that has more than doubled relative to prior 5 year periods. A VIX on the NYSE of 16 used to be viewed as a pretty choppy market; now it’s viewed as fairly tame. The five year average VIX is now about 22. In the 1990s it was 13.
Bunbury 11.08.12 at 3:12 pm
It was low again in the mid noughties but very high in the dotcom bubble and again since the crisis. Not sure all or even most of the blame can be put on HFT. Put it this way, should the market have been more stable through the Global Financial Crisis?
Bruce Wilder 11.08.12 at 4:06 pm
I went over to Quiggin’s new Mulligan barrel, hoping to shoot fish, and all the fish were already shot. So, I came here. I hope you aren’t too sad, to be my second choice.
The math of statistical probability can be hard to wrap one’s head around, but the nearly imponderable philosophical problems at the core are nearly as hazardous. What does “random” really mean? Is a “random” event without cause? What does “probable” mean? If Romney, against the odds, had been elected, would we congratulate Nate Silver on his estimate that the probability of Obama winning was ~80%? (On the grounds that Silver was “predicting” a 20% chance of Romney being elected?)
Before we get too far out there, scratching our heads about model uncertainty and the shape of “tails”, we might want to inquire about why there’s a distribution at all.
Cranky Observer 11.08.12 at 4:21 pm
Bruce Wilder @ 4:06: Also the question of what it means to assign a probability or “odds” to a single event. I once had a long discussion with Robert Weber at Kellogg about this and although I emerged bruised (not surprisingly) it still seems to me to be a very hard question. It was raised by some clearly incompetent people during the 2012 election cycle but that didn’t make it easier to answer.
Cranky
One quick response is to point to bookmakers for an operationalization of the concept, but bookmakers aggregate hundreds of singular events per year and need only make money overall, not on any one event.
Trader Joe 11.08.12 at 5:12 pm
Bunbury
Program trading for equities averaged between 5% and 10% of total volumes throughout the 90s. It’s averaged 35% over the last 5 years and 30% over the course of the ’00s….the correlation is unmistakably high….causality, I agree not 100% when we look at the relative ‘scare’ factor of the Fin Crisis vs. the relative “ripple” of the dot.com bust.
I’d note though that my points also apply to fixed income and related hybrid securities. Program trading was non-existent before the early 2000s and now is an increasingly large proportion (about 15%), which is to say nothing of the credit default swap and ETF markets which handle trillions on a weekly basis.
Again – I can’t make a causality argument – but I’m hard pressed to call it coincidence either.
Cranky
Bookmakers don’t care about ‘winning’; they care about volume handled and making a carry on each transaction….it’s true they can’t always lay off their whole book, but they usually try to flatten it to where losses per occurrence are low. It’s the volume that makes the equation work – not the batting average.
Tinky 11.08.12 at 6:21 pm
Trader Joe –
“It’s the volume that makes the equation work – not the batting average.”
That is true of contemporary bookmakers, though to use one example, the legendary William Hill (whose firm still exists) took very strong positions on individual events, and built his fortune by risking bankruptcy on a number of occasions.
Cranky Observer 11.08.12 at 7:14 pm
I’m aware of that, Trader Joe, but that’s not my point. Albert P. has 6919 at-bats, all very similar (within limits of googling & posting by iPhone). What does it mean if two strangers meet and agree to roll a 20-sided die once then walk away and never see each other again that the “odds” of rolling an 11 are 1/20? Obvious to some; equally un-obvious to others. Obama & Romney only competed once, and there have been at most 20 semi-comparable events in human history (Adams was an interesting guy but his election is of zero relevance to the modern election).
Cranky
Bruce Wilder 11.08.12 at 9:10 pm
Cranky Observer @ 44
Viewed in one way, the actual election was a one-off event, but viewed in other frames, it was composed of a massive number of events: the actual votes.
Silver, whose work I put some effort into trying to understand, derives his odds from a simulation, in which he “runs” the election many, many times. From my somewhat intuitive understanding of his method, I would venture that his simulations actually understate the probabilities by quite a lot, because they tend to exaggerate some sources of variation, by positing more independence of results between states than really exists. That is, he was guesstimating that Obama’s victory, given the information available, was equivalent to an ~80% probability (for a many-times-repeated process, like his simulation). But, if he had approached his task more analytically, his simulation would have reflected more accurately the tendency of all the states to break together (to be correlated in their election outcomes), and his probability estimate might have been well above 95%. Sam Wang, at Princeton Election Consortium, who is far more analytic in his approach, is a good contrast to Nate Silver, in this regard, and I think he put very high estimates on Obama’s re-election. (He also offers Bayesian v. classical estimates, for people who really want to step through the Looking Glass.)
Here’s the thing about election polling, which is somewhat like financial markets: in election polling, we have a simple idea, which we use as a heuristic: that is, we can calculate, analytically, the (classic) probabilities for a random sample, trying to estimate a coin-flip, which is close to honest (50-50). You get very tight measurement errors from fairly small samples for something close to 50-50. (As the OP noted, for something nearer the tails, the measurement error would be much higher.)
The practical problem for pollsters is that there is no way to get an actual random sample. So, they do what they can do, and then they stratify the sample, to make it match what they know, or think they know, about the population. A stratified sample isn’t random; but, it can actually have much, much lower sampling error than a random sample, because the act of stratifying will eliminate most of the possibility of getting a sample, which is way off. With a truly random sample, 5% of the time, or 1% of the time, depending on the size of the sample, you will get something really way, way off-base, far from the norm. With a stratified sample, you are deliberately making sure you get the right proportion of men, blacks, Democrats, urban residents, etc. — whatever you are stratifying. And, in that process, you eliminate a lot of measurement error associated with truly random sampling. You know, for example, that blacks are going to heavily vote for Obama, and you will make sure that your sample of blacks conform to that expectation, and in the process, you eliminate a lot of sampling error.
The price of this is that you accept a bias, and you don’t know, for sure, what your bias is. This is the “house effect”, which Silver talks about. As some partisan pundits pointed out, it is possible that all the polls have the same bias, because they all make some fundamental error in their sampling, which hides a critical part of the electorate. (Say, no one calls cellphones, or a high proportion of Republican voters lie to pollsters, or simply refuse to participate in surveys, but Democrats don’t.) But, if “house effects” are random around the true value (no net bias after aggregating), then aggregating polls gives you a very, very large sample size, and due to stratified samples, teeny tiny measurement error.
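A back-of-the-envelope version of the sampling-error arithmetic behind this (Python; simple random sampling assumed, which as Bruce says real polls don’t actually use): near 50-50 a modest sample pins the share down to a couple of points, while for a rare event the same sample leaves enormous relative error – which is also the OP’s point about tails.

```python
import numpy as np

def se(p, n):
    """Standard error of an estimated proportion p from a simple random sample of size n."""
    return np.sqrt(p * (1 - p) / n)

n = 1_000
for p in (0.5, 0.1, 0.01):
    print(f"true share {p:5.2f}: standard error {se(p, n):.4f}  "
          f"({100 * se(p, n) / p:5.1f}% of the quantity being estimated)")
# Pooling ten such polls behaves (house effects aside) like one sample of 10,000,
# cutting these errors by a further factor of about three.
```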
Bruce Wilder 11.08.12 at 9:23 pm
The analogous point about financial markets, which I don’t have time to develop, here, is that financial markets involve a basic tension, revolving around the imperfect mechanics of their operations.
In some imaginary idealistic world, conceived of analytically, financial markets work perfectly, and something like the Efficient Markets Hypothesis isn’t just an obscure technical point about how to formulate a null hypothesis for scientific research on financial market pricing, it is, as some foolish ideologues assert, an accurate description: a thesis, not a null hypothesis. (The null hypothesis, as you may recall, is the hypothesis you naively seek to disprove, in the process of learning how the world actually is, according to the scientific method.)
As the OP notes, professional operators, generally, do quite well for themselves, in financial markets. Duh.
And, there’s a tension, between all the operations of a modern economy, which take financial market pricing as “objective” and “accurate”, and the ability of those operating the system, to make money. A financial crisis, a market crash, is a “tail” event in the normal distribution usually assumed by those who must take financial market pricing as “accurate”. But thinking about it that way isn’t useful as policy. A financial crisis or market crash is a breakdown in an institutional mechanism — financial market qua institution — like your car engine breaking down. Something has gone wrong, such that the financial market pricing can no longer be trusted to be “accurate” enough. Somebody forgot to take the car in for an oil change or to replace a belt.
Bruce Wilder 11.08.12 at 9:31 pm
The one-off thing, about an election, remains an important distinction, I think. The forecast error is irreducible in polling to predict an election, as long as you are polling in advance or, even just apart, from the election. The forecast error, imho, is usually much, much smaller than observers and participants generally imagine, but that’s a much more complex argument. Polling works as well as it does, because the forecast error is so tiny.
For financial markets, forecast error is much larger for many critical events. But, financial markets generally cope really well with forecast error, and Taleb’s idea that sometimes they don’t is not well-supported by anecdote, let alone evidence.
ezra abrams 11.08.12 at 10:59 pm
Don’t remember where I saw it, but the leverage on Wall Street circa 2008 was not especially high relative to the historical trend.
What was different is that prior to ~1990 or so, Wall Street firms were partnerships – leverage was partners putting their money on the line.
In 2008, firms were public; people were employees playing with other people’s money.
Explains a lot.
Barry 11.09.12 at 12:45 am
Ezra, this is Michael Lewis’ opinion, as well.
tax 11.09.12 at 9:39 am
“The housing bubble could and should have been popped; the internet bubble burst without any real bad effects.”
Really? The way that the U.S. handled the internet bubble burst was by encouraging the housing bubble.
tax 11.09.12 at 9:48 am
“The combination of an activist central bank and an activist fiscal policy is really very macroresilient. ”
What is meant by “macroresilient”? That more or less civilization doesn’t collapse? That there wasn’t mass starvation? I fear what it means is that the institutions survived, but by that criterion, the U.S. in the 19th century, without activist central banks or an activist fiscal policy, was macroresilient.
Metatone 11.09.12 at 11:44 am
@Dsquared – really like that you’ve pointed out that “stochastic” needs more examination. Dave Snowden of Cognitive Edge has been blogging interesting thoughts recently about defining complexity as a state (in his 5 area Cynefin model) where you can’t use the stochastic techniques – and he’s had some interesting philosophical points along with it.
For me, the macro resilience stuff is interesting but impractical. It’s a description of how you would make the situation more like a perfect market – thus accruing the theoretical resilience of a perfect market. As I say, I think this is impractical because quite a lot of things (and outside banking this is clearer, e.g. transport, education, healthcare) are areas where regular failure just isn’t a tolerable approach for daily life. As such, better to look at the horrors of fiscal stimulus and flexible but pretty powerful regulators.
Hawkeye 11.09.12 at 12:56 pm
@Metatone
Not heard the name Dave Snowden in a while! Just had a quick look on the website for any materials but can’t find anything. Can you send a link to the specific blogs you refer to?
Thx
chris 11.09.12 at 1:41 pm
From my somewhat intuitive understanding of his method, I would venture that his simulations actually understate the probabilities by quite a lot, because they tend to exaggerate some sources of variation, by positing more independence of results between states than really exists. That is, he was guesstimating that Obama’s victory, given the information available, was equivalent to an ~80% probability (for a many repeated process, like his simulation). But, if he had approached his task more analytically, his simulation would have reflected more accurately the tendency of all the states to break together (to be correlated in their election outcomes), and his probability estimate might have been well above 95%. Sam Wang, at Princeton Election Consortium, who is far more analytic in his approach, is a good contrast to Nate Silver, in this regard, and I think he put very high estimates on Obama’s re-election.
I’m not sure if Silver v. Wang would be considered off-topic here, but I don’t think this is fair to Silver. He actually made considerable effort to account for the tendency of states to all vary from the polls in the same direction at once — by shortly before election time, IIRC, this phenomenon accounted for the majority of Romney simulated wins. As it turned out, the average of national polls *was* different from the outcome — but the actual outcome was slightly *more* pro-Obama than the polls, so the poll error turned out not to help Romney at all. (IIRC, it turned out that likely voter models were too strict again and assumed a richer, whiter, older electorate than actually showed up — Gallup’s in particular, but everyone’s to some extent.)
I don’t think one data point is sufficient to resolve the question of whether poll error could have been in Obama’s favor, or whether the effect could have been large enough that the polls slightly favoring Obama would have been misleading.
Silver’s estimate of the probability of the polls all being wrong in the same way was based on historical data from an era of sparser polls, because that’s the only historical data there was, so it’s certainly arguable that he was operating out of sample and the real chance of systematic poll bias is lower now than in the historical data. But that sounds to me suspiciously like the Great Moderation predictions that depressions could never occur again, and as we now know, they could and did. (For much more on this sort of thing, I recommend Silver’s book.)
Before the last week or so, the main chance of Romney wins in Silver’s model was an event that caused people to genuinely shift toward Romney — as you may recall, there actually was one, but it wasn’t big enough and there wasn’t another. I don’t think you can really say with any confidence that there *couldn’t* have been another event that caused a Romney-ward poll shift.
Eventually Silver’s model took into account the fact that there hadn’t actually been such an event (and in fact the last few weeks showed some pro-Obama movement) and it became more certain that Obama would hold his lead until Election Day. I think his eve-of-Election-Day percentage actually was over 90%, because by then it was clear that there was no remaining time for Romney or events to generate a real movement of opinion.
I don’t think Wang accounted at all for the fact that the polls are now and the election is later – even several months before the election (although I didn’t follow his estimates or methodology that closely); I would characterize that as Wang being too certain rather than Silver being too timid. Some elections actually are eventful, and the fact that in this one the same candidate held a lead the whole time and ended up winning does not mean that nobody can ever come from behind or that it is right to rate the probability of doing so <5%.
chris 11.09.12 at 1:44 pm
another event that caused a Romney-ward poll shift.
Sorry, I meant a Romney-ward *opinion* shift. It takes a lot of mental discipline to avoid confusing the measurement with the thing being measured when you don’t have any other means of measurement, but the map really is not the territory and that’s one of the things that makes predictions less certain than they naively seem.
Martin Brock 11.11.12 at 12:19 am
I wouldn’t say that the tail of a Gaussian is as problematic as the tail of a Cauchy distribution, and I wouldn’t say that model uncertainty is necessarily a problem if we can assume that models are well behaved (in the sense of the Central Limit Theorem), but you’re right that Taleb’s Black Swan is not an occurrence in the tail of a fat tailed distribution. He makes this point himself.
As you say, a Black Swan is an occurrence that can’t be modeled at all, because we have no knowledge of it. We can’t assume that distributions are well behaved, but that’s not Taleb’s point. What was the likelihood of finding a (literal) black swan before anyone found one, in the absence of a reliable genetic theory and knowledge of the swan’s genome, say? This question has no answer, and no amount of modeling, absent some relevant information, sheds any light. Shooting in the dark is shooting in the dark, no matter how the targets are distributed.
On the other hand, we shouldn’t dismiss probability theory too lightly. If we have good reason to believe that a distribution of yields has a fat tail, then we know that common assumptions behind financial planning, like the assumption that diversification lowers risk, are not valid, and we can have good reason to believe so. Simple, “random walk” models of capital price formation generate distributions with increasingly fat tails.
Reality is complex, and we know much less than we often assume, but we don’t need to throw the mathematicians out with the bath water. Any competent mathematician can tell you that financial models make dubious assumptions. The problem is that financial analysts don’t much care. If I tell them that a model is flawed, they tell me to fix it. If I tell them that I can’t fix it, they find another mathematician.
The Raven 11.11.12 at 12:20 am
Now, how do we convert these solutions into the institutions of a democratic government? It sounds like we need people of the stature of the founders of modern democracy. Where do we find them, and how do we arrange to implement their ideas?