by Ted on January 26, 2005
I was just looking around Tim Lambert’s Deltoid in a slow moment, when I came across this priceless story from BoffoBlog about a presentation from “More Guns, Less Crime” author John Lott. In this episode, the hapless AEI scholar gave a presentation arguing that elections have become more expensive because of growth in big government:
His evidence consisted of a correlation between growth in federal spending and growth in campaign spending, and from that he concluded that Big Government caused expensive campaigns. Two lines trending upwards, and he claims with perfect seriousness — and without performing any of the necessary tests — that the one causes the other. When we pressed him on his analysis, not only had he not performed any appropriate tests, but he seemed wholly unfamiliar with the relevant econometrics literature…
It made for a very uncomfortable ninety minutes. Afterward, we agreed that it was the worst presentation any of us had ever seen at the workshop, worse than any first-year grad student’s. Then, when he gained his notoriety, it did not surprise me in the slightest that his other research turned out to be just as shoddy. When he continued to get backing from organizations like AEI in spite of the astonishingly poor quality of his work, it only confirmed my impression that the “idea factory” of the right is less concerned with the quality of those ideas than with how much noise it can make.
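The statistical point is easy to demonstrate: two independent series that both trend upward will show a strong correlation even though neither causes the other. A quick simulation (invented data, not Lott's actual series) makes the trap concrete:

```python
import numpy as np

# Illustrative only: two *independent* upward-trending random walks.
# Neither causes the other, yet correlating the raw levels gives a
# large coefficient -- the classic "spurious regression" problem with
# trending time series.
rng = np.random.default_rng(0)
n = 200
federal_spending = np.cumsum(1.0 + rng.normal(size=n))   # drift + noise
campaign_spending = np.cumsum(1.0 + rng.normal(size=n))  # independent drift + noise

r = np.corrcoef(federal_spending, campaign_spending)[0, 1]
print(f"correlation of levels: {r:.2f}")

# One standard check is to difference the series first, removing the
# shared trend; the correlation of the changes collapses toward zero.
dr = np.corrcoef(np.diff(federal_spending), np.diff(campaign_spending))[0, 1]
print(f"correlation of first differences: {dr:.2f}")
```

This is exactly the kind of "necessary test" the commenters pressed Lott on: the econometrics literature treats a high correlation between two trending levels as uninformative until the trends have been dealt with.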
by Harry on January 26, 2005
A couple of weeks ago the DfES released its annual school league tables (don’t switch off, American readers, this matters to you). The tables have a new component: a ‘value added’ score, which is supposed to show not how well the children performed but how well the school taught them. The presentation of the league tables in the papers I saw stayed with the ‘raw score’ ranking, but included information about the ‘value-added’: presumably some papers did it the other way round.
The ‘value-added’ score is represented by a single number. This, in itself, makes it difficult to interpret. The statisticians have devised a way of weighting vocational and academic achievements against each other, and any such weighting is going to be subject to dispute. The use of a single number also masks within-school inequalities: a school with a lousy Maths department can still get a good score if it has an excellent English department. But for an individual parent choosing a school this difference might matter a great deal: my own daughter’s writing and reading skills will develop fine even if she is taught English by gibbons; her maths and science skills need a good teacher.
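The masking problem can be seen with invented figures (the weights and scores below are illustrative assumptions, not the DfES formula):

```python
# Illustrative only: a single weighted composite can hide large
# within-school differences between departments. Weights and scores
# are made up for the example.
weights = {"English": 0.5, "Maths": 0.5}

school_a = {"English": 70, "Maths": 70}  # uniformly decent teaching
school_b = {"English": 90, "Maths": 50}  # excellent English, lousy Maths

def composite(scores):
    """Collapse per-department scores into one weighted number."""
    return sum(weights[subject] * scores[subject] for subject in weights)

print(composite(school_a))  # 70.0
print(composite(school_b))  # 70.0 -- identical score, very different schools
```

A parent whose child needs strong Maths teaching would care enormously about the difference between these two schools, yet the league-table number treats them as interchangeable.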
There are also more technical reasons for being skeptical about the scores. Harvey Goldstein of the Institute of Education, the world’s expert on value-added evaluation of schools, has finally posted his commentary on the new tables, and it is (like all his commentaries) essential reading. The central problem, as I read it, is the issue of pupil mobility — we know that the extent of pupil mobility affects learning, but we don’t know how much, and furthermore we have no way of evaluating the extent to which schools themselves are responsible for pupil mobility. The DfES tables simply ignore the problem.
Why does this matter for Americans?
[click to continue…]
by John Holbo on January 26, 2005