January 26, 2005

A couple of weeks ago the DfES released its annual school league tables (don’t switch off, American readers, this matters to you). The tables have a new component: a ‘value-added’ score, which is supposed to show not how well the children performed but how well the school taught them. The presentation of the league tables in the papers I saw stayed with the ‘raw score’ ranking, but included information about the ‘value-added’: presumably some papers did it the other way round.

The ‘value-added’ score is represented by a single number. This, in itself, makes it difficult to interpret. The statisticians have devised a way of weighting vocational and academic achievements against each other, and any such weighting is open to dispute. A single number also masks within-school inequalities: a school with a lousy Maths department can still get a good score if it has an excellent English department. But for an individual parent choosing a school this difference might matter a great deal: my own daughter’s writing and reading skills will develop fine even if she is taught English by gibbons; her maths and science skills need a good teacher.
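To see the masking problem concretely, here is a toy sketch (all numbers hypothetical, not from the DfES tables) of how averaging departments into one school-level figure can hide a large internal spread:

```python
# Hypothetical per-department value-added scores for one school,
# on a scale where 0 means "pupils progressed exactly as predicted".
departments = {
    "English": 1.8,    # excellent English department
    "Maths": -1.6,     # lousy Maths department
    "Science": -0.2,
}

# The published figure is a single aggregate across departments.
school_score = sum(departments.values()) / len(departments)
print(f"School value-added: {school_score:+.2f}")   # looks unremarkable

# Yet the gap between the best and worst department is large --
# exactly the information a parent choosing a school might want.
spread = max(departments.values()) - min(departments.values())
print(f"Best-to-worst department spread: {spread:.2f}")
```

The school-level number comes out near zero even though the English and Maths departments are at opposite extremes, which is precisely the within-school inequality a single score conceals.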

There are also more technical reasons for being skeptical about the scores. Harvey Goldstein of the Institute of Education, the world’s leading expert on value-added evaluation of schools, has finally posted his commentary on the new tables, and it is (like all his commentaries) essential reading. The central problem, as I read it, is pupil mobility: we know that the extent of pupil mobility affects learning, but we don’t know how much, and we have no way of evaluating the extent to which schools themselves are responsible for it. The DfES tables simply ignore the problem.

Why does this matter for Americans?

No Child Left Behind requires participating states to evaluate school performance. Most states have adopted raw-score methods of evaluation, but not all: Ohio, I know, is pursuing a crude version of value-added. The pressure for value-added is increasing, and my guess is that more and more states will adopt it. On the one hand, with bigger schools and less scope for school choice, the evaluations should be more accurate than for British schools. On the other hand, American schools have children for less time (pupils typically attend three stages of school rather than two), funding levels are much more diverse, and (I assume) student mobility is greater, just because Americans move more. The UK experience is valuable for American school officials and researchers, and Goldstein’s commentaries are essential to understanding it (IMHO).

1

Darren 01.26.05 at 3:49 pm

The article skirts the issue of state vs private schools. For background reading try Tooley. With articles such as “Private education: the poor’s best chance?”, I’m sure the readers of this blog will be enthralled but not necessarily educated.

2

harry 01.26.05 at 3:54 pm

Darren, I know James’s work well, and have written a thorough response to the ideas in that piece, published in the most recent issue of the Journal of Philosophy of Education. I can, in fact, email you a PDF file if you’re interested, and will summarise the part of it that responds specifically to his comments on India and Geeta Kingdon’s findings in a future post. Thanks for the idea!

3

Brad DeLong 01.26.05 at 7:52 pm

Curiously enough, it is my son’s writing and reading (and science) skills [that] will develop fine if he is taught English by gibbons (in fact, in the sixth grade he was); it is his math skills that need teaching by a good teacher…

4

JRoth 01.26.05 at 8:47 pm

All of you in school districts with gibbon-led English departments really need to consider relocating…

5

Xavier 01.26.05 at 8:49 pm

I would think that student mobility would actually be helpful in evaluating schools. Say you have a group of students that all attend the same elementary school and then all attend the same middle school. When they get to middle school their test scores begin to drop. That suggests that the middle school isn’t as good as the elementary school, but it doesn’t tell us anything about how those two schools relate to any other schools. You might have a decent middle school and an excellent elementary school or a decent elementary school and a terrible middle school. The measurement procedures might be a little more complicated when students move around a lot, but I think the end result would be more informative.

This might be indicative of a bigger problem. If students in one school on average do no better or worse than in their previous school, does the algorithm take that to mean that the new school is just as good as the old school or does it mean that the new school is of average quality? It should be the former, but from what I’ve read I think it’s the latter.

That would be a big problem. If a district has consistently good schools K-12, it will look like the district has good elementary schools, but only average middle and high schools.

6

JK 01.27.05 at 10:17 am

Obviously we need different measures for different purposes.

I do worry that there is potential here to disguise inequality. If schools in poorer areas are adding just as much “value” we shouldn’t kid ourselves that the pupils are getting just as good an education.

7

JennyD 01.29.05 at 3:05 pm

Value-added is a terrific way to evaluate schools. I have had educators in the richest schools tell me that they do absolutely nothing to add to kids’ education–that it all happens at home. To which I say, then why are you taking in $10,000 per kid in tax dollars…for babysitting?

In disadvantaged schools, with challenged populations, value-added removes the blunt, one-size-fits-all hurdle of AYP and allows improvement to be measured against the starting point.
