Sooooooo, this is a thing that happened …
And the moral is that if you’re in a mood where you’re likely to insult your favourite authors on Twitter, don’t count on them not finding out about it in this modern and interconnected world. I was clearly in an unusually difficult mood that day, as I also managed to piss off Steve Randy Waldman by describing his latest thesis on macroresilience as “occasionally letting a city block burn down in order to clear out the undergrowth”. As with the Taleb quip, I kind of stand behind the underlying point, but almost certainly wouldn’t have said the same thing to the guy’s face, so sorry Steve. In any case, by way of penance I will now write a few things about resilience and unpredictability. Starting with the point that I found “incredibly insightful” in the Taleb extract most recently posted.
The point I really liked was on p454 of the technical appendix (p8 of the .pdf file), which is something I ought to have realised myself during a couple of debates a few years ago about exactly what went wrong with the CDO structure. Translating from the mathematical language, I would characterise Taleb’s point as being that the problem with “fat tails” is not that they’re fat; it’s that they’re tails. Even when you’re dealing with a perfectly Gaussian or normal distribution, it’s difficult to say anything with any real confidence about the tails of the distribution, because you have much less data about the shape of the tails (because they’re tails) than about the centre and the region around the mean. So you end up estimating the parameters of your favourite probability distribution based on the mean (central tendency) and variance (spread) of the data you have, and hoping that the tails are going to be roughly in the right place.
But any little errors you make in estimation of the central tendency are going to get blown up to a significant extent when you start trying to use your estimate to try to say something about the 99th percentile of the same distribution. Which is kind of a problem since we have a whole financial regulatory infrastructure built up on “value at risk”, which is a term effectively meaning “trying to say something about the 99th percentile of an empirically estimated distribution.”
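The amplification is easy to demonstrate numerically. The sketch below is illustrative only – the sample size, seed and the z-value of 2.326 for the 99th percentile are my choices, not anything from the Taleb excerpt. It repeatedly estimates the mean and standard deviation of a perfectly Gaussian sample, then plugs those estimates into the Gaussian formula for the 99th percentile:

```python
import random
import statistics

random.seed(0)

TRUE_MU, TRUE_SIGMA = 0.0, 1.0
Z99 = 2.326  # standard normal 99th-percentile z-value
true_p99 = TRUE_MU + Z99 * TRUE_SIGMA

n, trials = 250, 2000
mean_errors, p99_errors = [], []
for _ in range(trials):
    sample = [random.gauss(TRUE_MU, TRUE_SIGMA) for _ in range(n)]
    mu_hat = statistics.fmean(sample)
    sigma_hat = statistics.stdev(sample)
    # Plug the estimated parameters into the same Gaussian formula
    p99_hat = mu_hat + Z99 * sigma_hat
    mean_errors.append(abs(mu_hat - TRUE_MU))
    p99_errors.append(abs(p99_hat - true_p99))

print("avg error in mean estimate:", statistics.fmean(mean_errors))
print("avg error in 99th-percentile estimate:", statistics.fmean(p99_errors))
```

With these settings the average error in the percentile estimate comes out roughly double the average error in the mean estimate, even though the underlying distribution is exactly normal: the estimation noise in the fitted standard deviation gets multiplied by the z-value before it reaches the tail.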
The deep point I see here is that it’s not worth getting worked up about “fat tails” specifically, or holding out much hope of being able to model financial (and other) risks better by changing one’s distribution assumptions. A little bit of model uncertainty in a normal context will do all the same damage as a massively fat-tailed underlying distribution. And the thing about model uncertainty is that it’s even more toxic to the estimation of correlations and joint probability distributions than it is to the higher percentiles of a single distribution. Even at this late stage, it really isn’t obvious whether the large movements in CDO values in 2007-9 were caused by a sudden shift in default correlation, by a correlation that had been misestimated in the first place, or by an episode of model failure that looked correlated because it was the same model failing in every case.
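The “model uncertainty does the same damage as fat tails” point can also be illustrated with a toy simulation (all the numbers here are made up for illustration): a Gaussian whose volatility is occasionally, unpredictably, three times what the model assumes throws up far more extreme observations than a single Gaussian with the same overall variance. A modest amount of model uncertainty manufactures a fat tail out of entirely thin-tailed ingredients:

```python
import random
import statistics

random.seed(1)
N = 200_000

# Model uncertainty: 5% of the time the true volatility is 3x the assumed
# one (both figures are arbitrary, purely for illustration).
mixture = [random.gauss(0.0, 3.0 if random.random() < 0.05 else 1.0)
           for _ in range(N)]

# A single Gaussian matched to the overall spread a modeller would measure
sigma = statistics.pstdev(mixture)
plain = [random.gauss(0.0, sigma) for _ in range(N)]

def tail_freq(xs, threshold=4.0):
    """Fraction of observations beyond +/- threshold."""
    return sum(abs(x) > threshold for x in xs) / len(xs)

print("tail frequency, uncertain model:", tail_freq(mixture))
print("tail frequency, matched Gaussian:", tail_freq(plain))
```

With these (made-up) parameters, observations beyond ±4 turn up around ten times as often in the “uncertain” series as in the matched single Gaussian, despite every individual draw being perfectly normal.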
The basic problem here is that in a wide variety of important cases, you just don’t know what size or shape the space of possible outcomes might have. At the root of it, this is the basis of my disagreement with SRW too – because we have so little reason to be confident at all in our ability to anticipate the kind of shocks that might arrive, I always tend to regard the project of designing a “resilient” financial system that can shrug off the slings and arrows as being more or less a waste of time. So, should we give up on any sort of planning?
Well … I think we can do a little better than that. And I think that the way we do it is by stepping away from stochastic thinking altogether, which was the subject of my (and Alex Delaigue’s) argument with Taleb over the wine/cloth model of Ricardian trade. If you’re thinking about resilience as an activity rather than a structure, you don’t necessarily want to explicitly model stochastic factors in a model in which the parameters of interest are endogenous. Not because the world isn’t random, but precisely because it is.
Long time readers will recall that with respect to the “fat tails” debate, I occupy something of an extreme left-wing position. I don’t think that big stock market crashes or bond defaults are “tail events”, because I don’t think there’s any probability distribution for them to be in the tails of. They’re not draws from any underlying distribution. They’re individual historic events which have individual causes and effects. In so far as we understand them at all, we understand them by their family resemblance to other things that have happened in the past, but this doesn’t at all mean that our way of reasoning about the likely future effects of (say) a combination of rising real estate prices, stagnant real incomes and increasing debt ought to be expressed in the form of a stochastic process.
How are they expressed then? Well, this was the root of (my side of) the debate over the Ricardian wine/cloth model, and in so far as I have one, this is my defence of economic theory. Because in the particular sphere of economics, our valid generalisations about the class of unrepeatable, individual historic events that we have to live through are expressed as statements of macroeconomic theory. Specifically (and I do not propose to get into a big argument about this in the comments; my only real argument with either New Classicists or Austrian business cycle theorists is guys, you had your chance and you blew it), Keynesian macroeconomic theory.
I think people are underestimating quite how well-tested Keynesian theory is, by now, in the Popperian sense. It not only works, as shown in dozens of recession cases; it’s also seemingly the only thing that works, as demonstrated by many of the same cases. It explains why alternative solutions don’t work, and nearly all of its alleged bad effects exist only in hypotheticals and fairy stories. It doesn’t provide particularly accurate point estimates of future states of the economy’s stochastic process, but as suggested above, I don’t think that’s a reasonable thing to ask for. If this were a medicine, you’d have stopped giving placebo to the control group by now on grounds of pure ethics.

Certainly, human behavioural and institutional regularities might change, but we can deal with such changes when they happen. For the time being, Keynesian theory (and I emphasise theory here – Simon Wren-Lewis has changed my mind on the value of empirics without theory, which would have given you completely the wrong steer in terms of guesstimating the fiscal policy multiplier) is a toolkit that works. We should be dealing with the future by predicting it based on the best theory we have, coupled with a sensible range of assumptions for the unpredictable course of events, and making policy in response to what we think is going to happen. Sometimes we will be wrong, sometimes we will be disastrously wrong, but I don’t think anyone thought this was going to be easy. Resilience and robustness in economic arrangements aren’t really an engineering concept, where you guess the range of storms that might hit your levee and then build it for twice or three times the worst case. They’re much more like steering a ship into a difficult harbour with moving sandbanks: you react to conditions as they occur, based on training, understanding and common sense.
Which ends up being my reply to SRW too. We know how to build “macroresilience”. The combination of an activist central bank and an activist fiscal policy is really very macroresilient. The housing bubble could and should have been popped; the internet bubble burst without any real bad effects. Correct policy works. There’s no need to fiddle around trying to massively overhaul all sorts of economic institutions; we just need to work properly with the ones we have. And conversely, in an environment where the fiscal policy apparatus is in the hands of a wildly dysfunctional and sclerotic (USA) or fundamentally ideologically misguided (UK) political system, there’s not much else that can be done without fixing the big problem first.
 the concept of a “shift” in a parameter like correlation is on really quite dodgy epistemological ground and Taleb gives it another deserved kicking in the excerpt.
 Personal hobby horse alert – economists, decision theorists and moral philosophers are always helping themselves to probability distributions where there’s no very obvious reason to presume that they exist at all, let alone exist in the well-behaved and stable form that would be required for the “expected utility” of one or another course of action to exist. There’s just no reason to believe a priori that the set of possible outcomes can be put in one-to-one correspondence with the real numbers.
 Not even some mega-mega-mega hierarchical True Probability Distribution known only to God and Victor Niederhoffer. My slogan has always been that the Great Depression and the Russian Revolution did actually happen – they weren’t just particularly inaccurate observations of the true underlying economic growth rate.
 This is the difference between my background in financial markets and NNT’s. I don’t come from a derivatives background – I made my career in cash equities research. That’s a part of the industry which requires surprisingly little in the way of advanced mathematics, but a whole load of institutional detail and accumulated newsflow, a certain streetwise instinct for the way the world works, and a constant willingness to accept that if you’re only wrong half the time, you’re doing well. We deal in personalities, educated guesses and rules of thumb. Scientists tend to hate us, and genuinely rigorous financial economists often spend a surprising amount of their time proving to themselves that we add no value and are just meaningless journalists. On the other hand, cash equities brokerage is one of the only business lines in investment banking where people regularly set up in business using their own money.
 Because you put more money on when you’re right and take it off the table when it appears that, for whatever reason, you’re wrong.
 Or some synonym for “wrong” such as “early”, “seeing something the market doesn’t”, “contrarian”, “concentrating on fundamentals”, “aware of the bigger picture”, etc. Personally I’m never wrong. Sometimes a bit early, sometimes really quite a lot early but never wrong.
 Current state of science, by the way, suggests that good analysts do beat the market, on average, by a bit, and that one period’s performance is systematically maintained in future periods. But that all of the returns to this ability tend to accrue to the people who have it rather than to people who passively hand over their cash, don’t monitor it and expect to be handed an above-market return, preferably on a silver platter. This fact surprises and appals economists more than you’d think it might.