Larry Elliott (the Guardian’s economics editor) is in my view right to say that a lot of modern macroeconomics has gone off the rails pretty badly and that most general equilibrium models are a tragic waste of time. But I think he (and most other similar critics of excessive maths in economics) really badly misidentifies the nature of the problem, and his choice of an example of a worthless piece of mathematical formalism is quite unfortunate and unfair. Let’s see if I can explain what “Generalised non-parametric deconvolution with an application to earnings dynamics” is, and why someone might care about it.
The most important thing to note is that the paper Larry lampoons as an example of mathematical modelling divorced from the real world isn’t actually an economic model at all. It’s a contribution to the statistical toolkit, a method for helping you analyse the data. It’s probably best understood by looking at the individual words of the title.
“Deconvolution”. If you have two independent random variables and you add them together, then the probability distribution of the sum is the convolution of the distributions of the two variables. “Deconvolution”, then, is the problem of extracting the individual distributions when all you can observe is the sum. In the context of this paper, the “application to earnings dynamics” means that some changes to a person’s earnings are transitory (overtime, say), and some are permanent or nearly so (promotion or redundancy), and that it’s often interesting to try to take a time series of wages and separate it into transitory and permanent components.
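Just to make the jargon concrete, here’s a toy sketch in Python of the convolution fact – entirely mine, and nothing to do with the paper beyond the vocabulary: two made-up “transitory” and “permanent” components get added together, and the density of the sum comes out as the convolution integral of the two individual densities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# two independent components, deliberately non-normal (illustrative choices only)
x = rng.exponential(1.0, n)          # think: transitory shocks
y = rng.uniform(0.0, 2.0, n)         # think: permanent shocks

# the density of the sum is the convolution of the two densities:
#   f_{X+Y}(z) = integral of f_X(t) * f_Y(z - t) dt
def density_of_sum(z):
    # f_X(t) = exp(-t) for t >= 0;  f_Y(z - t) = 0.5 for z - 2 <= t <= z
    lo, hi = max(z - 2.0, 0.0), max(z, 0.0)
    return 0.5 * (np.exp(-lo) - np.exp(-hi)) if hi > lo else 0.0

# compare that convolution with a histogram of the simulated sum
hist, edges = np.histogram(x + y, bins=100, range=(0.0, 10.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
for z in np.arange(0.25, 6.0, 0.5):
    simulated = hist[np.argmin(np.abs(centres - z))]
    print(f"z = {z:4.2f}   convolution = {density_of_sum(z):.3f}   simulated = {simulated:.3f}")
```

Deconvolution is the same relationship run backwards: you see the left-hand side and want to recover the pieces on the right.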
“Non-parametric”. The easy way to do deconvolution is to make simplifying assumptions about the kind of probability distribution that your two variables have. For example, if you assume that your two variables are normally distributed, then all you have to think about is the mean and variance of each distribution – four parameters in all. It’s then pretty easy to write a computer program that just iterates through possible combinations of those four parameters, until you get the combination that has the best fit to your data.
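To see why the parametric version is the easy case, here’s the crudest toy I can come up with – mine, not the paper’s, and it uses panel data and moment-matching rather than the grid search described above: assume both the permanent and transitory components are normal, and the whole problem collapses to matching a couple of moments of earnings growth.

```python
import numpy as np

rng = np.random.default_rng(1)
people, years = 5_000, 8
sigma_perm, sigma_trans = 0.15, 0.25            # "true" values the code tries to recover

# simulate log earnings: a random-walk permanent component plus i.i.d. transitory noise
perm = np.cumsum(rng.normal(0, sigma_perm, (people, years)), axis=1)
earnings = perm + rng.normal(0, sigma_trans, (people, years))

# under normality, two moments of year-on-year earnings *changes* identify both pieces:
#   Var(dy_t)           = sigma_perm**2 + 2 * sigma_trans**2
#   Cov(dy_t, dy_{t+1}) = -sigma_trans**2
growth = np.diff(earnings, axis=1)
var_growth = growth.var()
cov_growth = (growth[:, :-1] * growth[:, 1:]).mean() - growth[:, :-1].mean() * growth[:, 1:].mean()

sigma_trans_hat = np.sqrt(-cov_growth)
sigma_perm_hat = np.sqrt(var_growth - 2 * sigma_trans_hat ** 2)
print(sigma_perm_hat, sigma_trans_hat)          # comes out close to 0.15 and 0.25
```

Two unknowns, two moments, done. The difficulty only starts when you refuse to assume normality.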
However, the general practice of assuming that everything is normally distributed has gone right out of fashion in economics, in large part because of the crisis and in smaller but still highly significant part because of Nassim Nicholas Taleb’s book “The Black Swan”. If you are going to try to estimate without making any up-front assumptions about the shape of your distributions, then that’s non-parametric statistics.
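For the curious, the classical non-parametric trick works through characteristic functions: for independent variables they multiply, so you can divide the characteristic function of the observed sum by that of one component and invert a Fourier transform to recover the distribution of the other. Here’s a stripped-down sketch of that idea – again entirely my own toy, and a much easier setting than the paper’s, because I’m cheating by pretending one component’s distribution is known:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

x = rng.gamma(2.0, 1.0, n)            # hidden component with an unknown, non-normal shape
u = rng.normal(0.0, 0.4, n)           # second component, whose distribution we pretend to know
w = x + u                             # all we get to observe is the sum

# for independent variables, characteristic functions multiply: phi_W = phi_X * phi_U,
# so an estimate of phi_X is the empirical phi_W divided by the known phi_U
t = np.linspace(-5.0, 5.0, 401)
phi_w = np.array([np.exp(1j * tk * w).mean() for tk in t])
phi_u = np.exp(-0.5 * (0.4 * t) ** 2)
phi_x = phi_w / phi_u

# invert the Fourier transform on a grid to recover a (rough) density estimate for X
xs = np.linspace(0.5, 4.5, 9)
dt = t[1] - t[0]
f_x = np.real(np.exp(-1j * xs[:, None] * t[None, :]) @ phi_x) * dt / (2.0 * np.pi)
for z, f in zip(xs, f_x):
    print(f"x = {z:3.1f}   estimated density = {f:.3f}   true density = {z * np.exp(-z):.3f}")
```

The paper’s problem is harder – neither component’s distribution is assumed known, and the data are a panel of earnings histories rather than one clean sum – which is where all the horrifying Fourier machinery comes from.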
“Generalised”. In the econometrics literature, this almost always means, as it does in this case, that the author has provided a solution that works for problems with lots of variables. It’s often very easy to find a method that works for cases of a single target and a single explanatory variable, but for practical use, you need one that can be generalised.
So, generalised non-parametric deconvolution is a method of separating out the different components of an aggregate series, without making assumptions about their probability distribution and when there might be lots of hidden factors. It isn’t, frankly, a massive work of economics that deserves to win the authors a place on the rostrum next to Keynes and Ricardo, but it is (or at least it appears to me; I haven’t checked the maths) a perfectly sensible and workmanlike contribution to the toolkit for understanding some quite important problems. And the “application to earnings dynamics” shows that it can be put to use. The mathematical section of the paper is as horrific as Larry says, full of Fourier transforms and iterated integrals (although really, did we think that a complicated signal extraction problem like this was going to be easy?). But on page 33 of the working paper version I linked above, after all the maths is done, we get the following perfectly clear piece of academic English:
“Predicting transitory and permanent shocks for the individuals in the sample, we see that frequent job changers face more permanent and transitory earnings shocks than job stayers. This result has important consequences for welfare analysis. Savings and insurance should be very different if the risk of large deviations is much higher than is usually assumed with normal shocks.”
In other words (I said “clear academic English”, not “clear English”), they did indeed find that permanent shocks to earnings are not normally distributed, large permanent shocks are substantially more frequent for people who regularly change jobs than had been estimated under the old methods, and this matters quite a lot (in particular, although they don’t say this in the version I linked, it potentially changes the relative attractiveness of final salary versus defined contribution pension schemes for people who regularly change jobs). This sort of economics hasn’t lost touch with the real world in order to hide in a fantasyland of mathematics – it’s simply taking a grown-up approach to the fact that if you’re going to engage with the real world, then you need to minimise the number of a priori assumptions you’re making, and one consequence of making fewer unrealistic simplifying assumptions is that you’re going to have much more complicated calculations.
The problem with the high priests of mathematical economics was not the fact that they used mathematics – it was that they allowed the ideology to drive the theory. The commitment to general equilibrium and to rational expectations was a founding belief, and all the rest of the modelling had to bend its way around that. Unsurprisingly, when you’ve got to make a model that can incorporate both the observable facts and a couple of key religious doctrines, the mathematics gets pretty complicated; Jesuit astronomers used to run into the same kinds of problems. But you can’t blame the maths for that.
It is possible to sensibly argue, as Larry Elliott in fact does, that the whole business of building mathematical models is unworkable whatever the assumptions, and that economists should just go back to “thinking about the macroeconomy”. But even this is actually quite demanding in terms of the amount of mathematical sophistication required. John Maynard Keynes didn’t use many equations in the General Theory, but his theory assumed that there would be a workable system of national accounts to apply it to, and Keynes (in Britain) and Simon Kuznets (in America) spent a lot of time and effort creating such systems (and formalising the very concepts of things like GDP). Economists can’t even begin to think about the macroeconomy without reliable statistics to use, and statistical work has to be done by statisticians.
Even as straightforward a quantity as unemployment will often get surprisingly complicated by the time you’ve started to get into the detail of constructing labour force surveys and adjusting for “births and deaths” of companies, while abstract quantities like “inequality”, or indeed the permanent and temporary components of the variation in earnings, can get horrifically technical. It’s true that a lot of mathematical economics is garbage – personally I tend to believe that nearly anything based on dynamic programming could be chucked out without loss – but the fact is that it’s a quantitative social science, and so it’s always going to be a subject that has quite a lot of maths in it.
And quite apart from anything else, there’s no shortage of economists who are taking the advice, putting down their sophisticated mathematical models and thinking about the economy, and making the most extraordinary asses of themselves as a result. Paul Krugman and Brad DeLong have been shouting themselves hoarse over the last six months, dealing with professional economists (including plenty of Nobel Laureates) who have been reviving a defunct economic fallacy called the “Treasury View”.
Named after the civil servants who maintained it in the face of Keynes in the 30s, this is the view that deficit spending can’t affect the total level of output because people adjust their private spending to offset the government stimulus. This view is demonstrably, clearly wrong, but you need to use a bit of mathematics to prove it so. The reason why these mathematical models came into use in the first place is that economics is a subject where it’s very easy to get confused, double-count and otherwise make statements that are inconsistent with one another, and the requirement to make two sides of an equation balance helps to stop you from doing this.
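To see just how small a “bit of mathematics” is needed to keep the accounting straight, here is the most basic textbook version of the point – my illustration, and nothing like the models Krugman and DeLong actually work with: impose the income identity, add a consumption function, and the only way to stop extra government spending raising output is to smuggle in a further assumption that forces private spending to fall by exactly the same amount.

```python
import sympy as sp

Y, a, b, I, G = sp.symbols("Y a b I G", positive=True)

# national income identity Y = C + I + G, with a simple consumption function C = a + b*Y,
# where b is the marginal propensity to consume (assumed to lie strictly between 0 and 1)
income_identity = sp.Eq(Y, (a + b * Y) + I + G)

output = sp.solve(income_identity, Y)[0]        # Y = (a + I + G) / (1 - b)
multiplier = sp.simplify(sp.diff(output, G))    # dY/dG = 1 / (1 - b), which is positive

print(output)
print(multiplier)   # extra government spending raises output unless you add an assumption
                    # that makes private spending fall one-for-one to offset it
```

The algebra is trivial, but it forces you to state the offsetting assumption explicitly, which is exactly what the revived “Treasury View” arguments fail to do.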
It all reminds me of that old-fashioned journalistic cottage industry of the 1980s, where one used to take a passage of Derrida or Irigaray out of context, quote it in all of its jargonistic glory, pronounce it gibberish and move on to a fierce dismissal of “postmodernism” as meaningless. In a lot of these cases, the dismissal was warranted, but there was always a worrying feeling that the author’s inability to understand a technical piece in a specialist journal wasn’t really a good test of whether it meant anything (or rather, that it was the gold standard of such tests – the gold standard was a bad monetary rule, and this is a bad way to judge the value of academic disciplines). Just as there were good and bad literary theorists in 1980s France, there are good and bad economists now, and you can’t actually tell the good ones from the bad ones simply by looking at the equations they use.