There’s a qualified defense of the recent NRC rankings of universities by the rather magnificently named duo of E. William Colglazier and Jeremiah P. Ostriker in the “Chronicle today”:http://chronicle.com/article/Counterpoint-Doctoral-Program/125005/?sid=at&utm_source=at&utm_medium=en
bq. Stephen Stigler, a professor from the University of Chicago, offered a thoughtful critique, saying that the project “was doomed from the start” when reputation was downplayed. Rankings based on reputation were provided in the 1982 and 1995 NRC reports, but they were excluded from the charge given to the committee for the 2010 report. We agree that reputationally based rankings contain important information, which was discussed in the report, but they also contain serious weaknesses.
bq. The decision not to include rankings based purely on reputation arose from three considerations. First, pure reputational rankings can contain “halo effects,” meaning a program’s ranking may be skewed by the university’s overall reputation, or it may lag behind because of its past reputation. Second, reputationally based rankings were not supported by many universities whose participation was needed to collect the data. And third, this study was intended to provide a comprehensive, updatable collection of data on characteristics associated with perceived program quality that would allow faculty, students, administrators, and other stakeholders to assess programs based on their own values, and thereby become a transparent tool to strengthen doctoral programs. The reputationally based rankings that received the most attention in previous reports did not provide a means to achieve this last important objective.
The NRC rankings have come in for a lot of methodological flak. But this at least seems to me to be a reasonable claim, and one that could indeed be strengthened a little further. One of the reasons that reputational rankings plausibly lag actual changes in departments’ quality is that they tend to reflect individuals’ internal models of what _others in the discipline_ think about a given department, rather than personal information that the individuals themselves have about a department’s actual quality. The fact that _x_ department is perceived as a flagship program counts for more than the fact that I know professors a, b and c at that department, and they don’t seem all that smart or productive. Public perceptions plausibly trump private information.
Cognitive science would suggest (or at least hint) that having a source of non-reputational information about departments’ actual quality rather than their perceived quality, even if it is rather noisy, could usefully help to unsettle these collective beliefs. It can make people more likely to reveal their private information, and to include that information (rather than their perception of what everyone else thinks) in their judgments as to whether a given department is _actually_ good, or not as hot as its reputation suggests.
In short – rather than thinking about innate-quality-based rankings (however flawed) or reputational rankings (however biased toward the status quo) as simple alternatives, one can think of them as possible stages in an iterated search for better information. If the information provided by a quality-based ranking such as the NRC is really off-base (as perhaps it is with respect to some departments), one would expect that it will at most have a very moderate impact on people’s beliefs (as it will contradict _both_ their private information and their perception of the general wisdom of the field). If, instead, the information more accurately reflects the actual performance of a department, it will arguably have a greater impact on people’s judgments, since it will reinforce their confidence in their private information as opposed to the received wisdom.
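To make that intuition a bit more concrete, here is a minimal sketch of the kind of toy model I have in mind. It is entirely my own illustration, assuming a simple weighted-judgment rule; the weights, the numbers, and the name `stated_rating` are all invented and have nothing to do with the NRC’s actual methodology. The idea is just that an independent quality estimate, even a noisy one, licenses evaluators to lean more heavily on their private information when the two agree.

```python
# Toy model (my own illustration, nothing here comes from the NRC report):
# each evaluator's stated rating of a department mixes private information
# with a model of the field's consensus. An independent quality estimate,
# when it sides with the private information, shifts weight toward it.

import random


def stated_rating(private, consensus, external=None,
                  w_private=0.3, w_consensus=0.7, shift=0.3):
    """One evaluator's public judgment of a department's quality."""
    if external is not None and abs(external - private) < abs(external - consensus):
        # The external signal backs the evaluator's own information, so they
        # discount the perceived consensus and trust themselves a bit more.
        w_private, w_consensus = w_private + shift, w_consensus - shift
    return w_private * private + w_consensus * consensus


random.seed(1)
true_quality, reputation = 0.8, 0.5   # a department better than its reputation
evaluators = [random.gauss(true_quality, 0.1) for _ in range(500)]  # private signals

before = sum(stated_rating(p, reputation) for p in evaluators) / len(evaluators)

# Average over many possible draws of a noisy external (NRC-style) estimate.
trials, total = 1000, 0.0
for _ in range(trials):
    signal = random.gauss(true_quality, 0.2)   # one noisy public estimate
    total += sum(stated_rating(p, reputation, signal) for p in evaluators) / len(evaluators)
after = total / trials

print(f"average rating without an external signal: {before:.2f}")
print(f"average rating with a noisy external signal: {after:.2f}")
```

On this setup the average stated rating drifts away from the reputational consensus and toward the department’s actual quality once the external signal is in play; the point is the mechanism rather than the made-up numbers.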
I’ll note that I’m not an unbiased observer here. My department is one that has (according to the possibly-themselves-biased perceptions of those within it, including myself) performed less well than it should on reputational surveys, but “done quite well indeed”:http://www.themonkeycage.org/2010/09/the_nrc_rankings_of_political.html in the NRC ranking. But if I’m right (and if my colleagues are right too), one would expect that this will result in a significant bump in my department’s reputation when the next reputation-based survey happens. The NRC rankings will have jarred people’s perceptions a bit, so that they do not depreciate their own private knowledge so much. We’ll see what happens.
ogmb 10.19.10 at 2:27 pm
They should scrap the computerized rankings and organize a playoff…
Oh wait, different kind of nonsense.
Jonathan Mayhew 10.19.10 at 3:10 pm
The consensus in my dept is that to the extent it helps us, it is a valid rating, but that if it doesn’t, it isn’t.
DN 10.19.10 at 3:35 pm
^That pretty much sums up the role of rankings in academia, except insofar as those external to departments (or within them) start adopting policies to conform to ranking metrics (the equivalent of “teaching to the test”). That’s what worries me, especially since we have no reason to believe some hypothetical future NRC system will use the same metrics used in this one.
Regardless, I don’t see the myriad problems with the NRC rankings as rooted in inadequate attention to “reputation.” They can both stink :-)
Hilary Kornblith 10.19.10 at 4:15 pm
The value of the NRC rankings varies across different fields. But there is a particularly important dimension to these assessments of humanities programs. In both the natural sciences and the social sciences, the quality of faculty is measured by way of citation data. In the humanities, the quality of faculty is measured by counting publications without using citation data at all. All articles count equally on this way of doing things, whether they are in the Southwestern Journal of Eastern North Dakota Studies or in the top journal in one’s field. All books count as the equal of five articles, whether the book is published with My Basement Press or with Cambridge. The quality of faculty research, which everyone agrees is one of the most important factors in rating the quality of graduate programs, is thus not measured at all in these assessments of the humanities. Unlike the assessment of science departments, where citation data clearly has some bearing on faculty quality, the assessment of humanities departments does not even make use of data which would be relevant in making such an assessment.
AcademicLurker 10.19.10 at 4:20 pm
In both the natural sciences and the social sciences, the quality of faculty is measured by way of citation data.
Actually, in the natural sciences the quality of faculty is measured by how much $$$ they bring in through grants and/or patents.
At least that’s my experience.
Tom Hurka 10.19.10 at 6:40 pm
Seconding what Hilary Kornblith said. I’ve been at too many meetings where humanists say citation data don’t accurately measure the quality of humanities scholarship and then propose something that measures it far less accurately.
afinetheorem 10.19.10 at 10:48 pm
The bigger problem, of course, is that the data used is often utterly wrong, and is always out of date; at least in economics, three years means a lot of turnover. There’s also a basic “smell test” – Brandeis is better than Texas, Carnegie Mellon and ASU? Maryland is better than Northwestern? Chicago better than MIT? To the extent that this is meant to measure what the best graduate programs are, isn’t placement, or placement adjusted for some characteristics (are the students treated well, etc.), the best way to do this? I feel like such a ranking should take a competent RA no more than a month to complete…
david 10.20.10 at 12:58 pm
citation data must explain why Minsky’s not all that great.
David Hilbert 10.20.10 at 3:58 pm
I find the discussion of reputation rankings in the Chronicle article somewhat disingenuous, since the NRC did construct a ranking that is largely driven by reputation. In addition to the s-ranking, in which weights on the various components were derived from a survey of faculty opinion about the importance of various factors, they also published the r-ranking. The r-ranking is an attempt to estimate a model that uses the NRC data to predict department reputation, where the reputation being predicted is a partial ranking derived from a survey of some, but not all, faculty in a field. The resulting model in my field, philosophy, is somewhat counterintuitive, since the productivity variable that dominates the s-ranking is downplayed and the two faculty diversity measures get negative weight, so that having a more diverse faculty tends to lower your ranking. In any event, the NRC did construct a (partial) reputation ranking that they did not publish, and did publish a ranking that is supposed to correlate with reputation. Philosophy is one of a number of fields in which the correlation between the two rankings is below .75. I think it’s best to ignore the whole thing, but then my department did pretty badly, so you should take that with an appropriate grain of salt.
Kbob 10.21.10 at 2:07 am
@4 & @5: Actually, social sciences (other than econ and arguably psych) are ticked because the NRC rankings of productivity in these fields don’t count books or citations to books. A bit ridiculous, really, considering books are still a major mode of knowledge dissemination in large swaths of anthro, soc, poli sci, linguistics, etc.
In my field, sociology, the decision to exclude books created some truly bizarre outcomes: the #1 ranked USNWR department — Berkeley, traditionally a “book department” — is in the mid-60s on the NRC faculty “research activity” dimension. Consequently, it’s in the high 30s on the S-rankings, which put more weight on faculty pubs, grants, and citations than the R-rankings (and less on grad program size, math-only GREs, and faculty awards).
Conversely, the big winners were programs that specialize in almost exclusively article-based fields, particularly fields where the local norms are to publish a lot of incremental-knowledge papers and cite them exhaustively (e.g., medical soc, demography). The NRC “faculty research” rankings are telling us something about the discipline, but not what administrators and prospective graduate students (arguably the two most important audiences) think they do.
Norwegian Guy 10.25.10 at 7:15 pm
This is not the only university ranking that has come under criticism this autumn:
Universities in Norway and Denmark have raised concerns over the methods used in this year’s World University Rankings, published by the UK magazine Times Higher Education on 16 September.
http://www.researchresearch.com/index.php?option=com_news&template=rr_2col&view=article&articleId=996058
He [the rector of the University of Oslo, Ole Petter Ottersen] describes the unusual steps he had to take to gain insight into how the rankings are put together: “Times Higher Education has denied us access to the material. I was then interviewed by the British media and expressed my criticism. I also had to read the British newspapers to learn more about how the ranking is made,” said the rector.
http://translate.google.com/translate?js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&sl=no&tl=en&u=http%3A%2F%2Fwww.tu.no%2Fjobb%2Farticle263159.ece