The idea that bad mathematical models used to evaluate investments are at least partially to blame for the financial crisis has plenty of appeal, and perhaps some validity, but it doesn’t justify a lot of the anti-intellectual responses we are seeing. That includes this NY Times headline, "In Modeling Risk, the Human Factor Was Left Out". What becomes clear from the story is that a model that left human factors out would have worked quite well. The elements of the required model are:
(i) in the long run, house prices move in line with employment, incomes and migration patterns
(ii) if prices move more than 20 per cent out of line with long-run value, they will in due course fall at least 20 per cent
(iii) when this happens, large classes of financial assets will go into default either directly or because they are derived from assets that can’t pay out if house prices fall
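The three elements above can be sketched as a toy model. Everything here is an illustrative assumption of mine, not the post's (or anyone's) actual model: the function names, the geometric-mean stand-in for long-run value, and the 20 per cent threshold wired in from element (ii).

```python
# Toy version of the three-element model described above.
# All numbers and function names are illustrative assumptions,
# not taken from any actual pricing model.

def fundamental_value(employment_index, income_index, migration_index):
    """Element (i): in the long run, house values track employment,
    incomes and migration. A simple geometric mean of the indices
    stands in for whatever weighting a real model would estimate."""
    return (employment_index * income_index * migration_index) ** (1 / 3)

def bubble_warning(price, fundamental, threshold=0.20):
    """Element (ii): prices more than 20 per cent above long-run
    value imply an eventual fall of at least 20 per cent."""
    overvaluation = price / fundamental - 1
    return overvaluation > threshold

def expect_defaults(price, fundamental):
    """Element (iii): if such a fall is coming, assets that can't
    pay out when house prices fall should be expected to default,
    directly or through the assets they are derived from."""
    return bubble_warning(price, fundamental)

# Usage: indices normalised to 100 at some base year.
fv = fundamental_value(102.0, 104.0, 101.0)
print(bubble_warning(140.0, fv))  # prices ~37% above fundamentals → True
```

The point of the sketch is how little it needs: no behavioral second-guessing, just a long-run anchor and a deviation rule.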
It was not the disregard of human factors but the attempt to second-guess human behavioral responses to a period of rising prices, so as to reproduce the behavior of housing markets in the bubble period, that led many to disaster. A more naive version of the same error is to assume that particular observed behavior (say, not defaulting on home loans) will be sustained even when the conditions that made that behavior sensible no longer apply.
But at least this is criticism of specific models. What is really silly, on a par with saying "evolution is just a theory", is the currently popular talking point "this shows you shouldn't trust models, so I can consult my own prejudices on topic X (most commonly, climate change)". Any attempt to predict the future behavior of a system requires a model of that system, whether it’s explicit or implicit, complex or simple, solved with a computer or by assertion.
In the case of the bubble, the crucial determinant of model failure was not complexity or simplicity. It was the presence (or, for those who predicted that the bubble would burst, absence) of the assumption "house prices always go up". Of course, this assumption was much easier to detect from talking to an amateur speculator than from analyzing a synthetic CDO, but it had the same effect in either case.
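A one-line expected-payoff calculation is enough to show how that single assumption does all the work; the probabilities and recovery values below are purely illustrative, not calibrated to any real security.

```python
# How the assumption "house prices always go up" drives a valuation.
# Payoffs and probabilities are made-up numbers for illustration only.

def expected_payoff(p_fall, payoff_up=100.0, payoff_fall=40.0):
    """Value of a claim that pays in full if prices rise but
    recovers only partially if they fall."""
    return (1 - p_fall) * payoff_up + p_fall * payoff_fall

print(expected_payoff(p_fall=0.0))   # "prices always go up": 100.0
print(expected_payoff(p_fall=0.25))  # admit a 25% chance of a fall: 85.0
```

Whether this assumption is buried in thousands of lines of CDO pricing code or in a speculator's head, flipping it from certainty to doubt is what flips the answer.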
More generally, the headline result from a large and complex model can usually be reproduced with a much simpler model embodying the same key assumptions. If those assumptions are right (or wrong), both models will be right (or wrong) together. The extra detail usually serves to produce more detailed results rather than to produce significant changes in the headline results.