Sooooooo, this is a thing that happened …
And the moral is that if you’re in a mood where you’re likely to insult your favourite authors on Twitter, don’t count on them not finding out about it in this modern and interconnected world. I was clearly in an unusually difficult mood that day, as I also managed to piss off Steve Randy Waldman by describing his latest thesis on macroresilience as “occasionally letting a city block burn down in order to clear out the undergrowth”. As with the Taleb quip, I kind of stand behind the underlying point, but almost certainly wouldn’t have said the same thing to the guy’s face, so sorry Steve. In any case, by way of penance I will now write a few things about resilience and unpredictability. Starting with the point that I found “incredibly insightful” in the Taleb extract most recently posted.
The point I really liked was on p454 of the technical appendix (p8 of the .pdf file), which is something I ought to have realised myself during a couple of debates a few years ago about exactly what went wrong with the CDO structure. Translating from the mathematical language, I would characterise Taleb’s point as being that the problem with “fat tails” is not that they’re fat; it’s that they’re tails. Even when you’re dealing with a perfectly Gaussian or normal distribution, it’s difficult to say anything with any real confidence about the tails of the distribution, because you have much less data about the shape of the tails (because they’re tails) than about the centre and the region around the mean. So you end up estimating the parameters of your favourite probability distribution based on the mean (central tendency) and variance (spread) of the data you have, and hoping that the tails are going to be roughly in the right place.
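To put a toy number on that (my own illustration, not anything from the appendix, and the sample size is picked out of the air): fit a normal distribution to a thousand genuinely Gaussian observations and count how many of them actually live out where the fitted 99th percentile is.

```python
# Toy illustration: even with genuinely Gaussian data, almost none of
# your sample sits in the tail you care about. Sample size of 1,000 is
# an arbitrary choice (roughly four years of daily returns).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=1000)

# Fit the distribution from the bulk of the data (mean and std dev)...
mu_hat, sigma_hat = sample.mean(), sample.std(ddof=1)
# ...then ask where the fitted model puts its 99th percentile.
q99_hat = stats.norm.ppf(0.99, loc=mu_hat, scale=sigma_hat)

n_beyond = int((sample > q99_hat).sum())
print(f"fitted 99th percentile: {q99_hat:.2f}")
print(f"observations out beyond it: {n_beyond} of {len(sample)}")
# Typically about 10 points: the tail estimate is driven almost entirely
# by the 990-odd observations in the middle, not by direct evidence
# about the tail itself.
```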
But any little errors you make in estimating the central tendency or the spread are going to get blown up to a significant extent when you start using that estimate to say something about the 99th percentile of the same distribution. Which is kind of a problem, since we have a whole financial regulatory infrastructure built up on “value at risk”, a term which effectively means “trying to say something about the 99th percentile of an empirically estimated distribution”.
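Here’s the back-of-the-envelope version, again my own sketch rather than anything from the appendix; the 10% error in the volatility estimate below is an invented figure, there just to show the scale of the amplification.

```python
# Back-of-envelope: returns really are N(0, 1), but suppose we have
# understated the standard deviation by 10% (a figure picked purely
# for illustration).
from scipy import stats

true_sigma = 1.0
est_sigma = 0.9 * true_sigma

# The "99% value at risk" according to our slightly-wrong model...
var_99 = stats.norm.ppf(0.99, scale=est_sigma)
# ...and how often the true distribution actually blows through it.
breach_prob = stats.norm.sf(var_99, scale=true_sigma)

print(f"estimated 99% VaR: {var_99:.3f}")
print(f"true breach probability: {breach_prob:.2%}")
# ~1.8% rather than the advertised 1%: a 10% error in one input nearly
# doubles the frequency of the "1-in-100" loss, and the amplification
# gets worse the further out into the tail you go.
```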
The deep point I see here is that it’s not worth getting worked up about “fat tails” specifically, or holding out much hope of being able to model financial (and other) risks better by changing one’s distribution assumptions. A little bit of model uncertainty in a normal context will do all the same damage as a massively fat-tailed underlying distribution. And the thing about model uncertainty is that it’s even more toxic to the estimation of correlations and joint probability distributions than it is to the higher percentiles of a single distribution. Even at this late stage, it really isn’t obvious whether the large movements in CDO values in 2007-9 were caused by a sudden shift in default correlation[1], by a correlation that had been misestimated in the first place, or by an episode of model failure that looked correlated because it was the same model failing in every case.
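A rough sketch of that first claim: suppose the world really is normal on every single day, and the only uncertainty is about which normal you’re living in. The regime weights and volatilities below are pure invention, but the flavour of the result doesn’t depend on them.

```python
# Sketch: every single day is Gaussian, you just don't know which
# Gaussian. The 5% regime probability and 3x volatility are invented
# numbers; the qualitative effect doesn't depend on them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000
scale = np.where(rng.random(n) < 0.05, 3.0, 1.0)  # occasional high-vol regime
returns = rng.normal(0.0, scale)                  # each draw is from *a* normal

print(f"excess kurtosis: {stats.kurtosis(returns):.2f}")  # ~4.6, vs 0 for a normal
# Fit a single normal to the data and ask for its 1-in-1,000 loss level...
q999 = stats.norm.ppf(0.999, loc=returns.mean(), scale=returns.std())
# ...then see how often the mixture actually exceeds it.
print(f"actual exceedance frequency: {(returns > q999).mean():.2%}")  # ~0.6%, not 0.1%
```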
The basic problem here is that in a wide variety of important cases, you just don’t know what size or shape the space of possible outcomes might have. This is also the root of my disagreement with SRW: because we have so little reason to be confident in our ability to anticipate the kind of shocks that might arrive, I tend to regard the project of designing a “resilient” financial system that can shrug off the slings and arrows as more or less a waste of time. So, should we give up on any sort of planning?