I’ve been looking at some data from the Philosophical Gourmet Report, a well-known and widely-used reputational survey of philosophers in the U.S., Canada, the U.K. and Australasia. The survey asks philosophers to rate the overall reputations of graduate programs as well as their strength in various subfields. The ratings are endogenous, in the sense that the philosophers who produce them are members of the departments that are being rated. This gives the survey some interesting relational properties and allows for an analysis of the social structure of reputation in the field. I’ve written a working paper that analyses the data from this perspective. It’s still in pretty rough shape: there’s not much in the way of theory or a framing argument yet, and it’s way short on citations to the relevant literature. (So don’t be too harsh on it.)
I’m not sure whether philosophers will like the paper. On the one hand, they tend to think of themselves as sensible individuals guided by common-sense and rational argument. This makes them resist thinking of themselves in sociological terms, subject to the influences of context, social relations and role constraints. On the other hand, in my experience they have an insatiable capacity for gossip. Within the limits of the data, the paper addresses three questions:
The answers are, in short, “It depends on the field”, “Yes, but only sometimes, and then only for high-status specialists”, and “A great deal.” Some quick findings:
Over the fold are two visualizations of the field: the first is a blockmodel describing the relational structure of prestige amongst U.S. philosophy departments. The second is a segment plot showing the profile of departments across a range of different subfields. I think they’re both pretty cool, so read on.
A Blockmodel of Prestige
First, the blockmodel. You can get the full-size version of this graphic, or a higher-resolution PDF version, or the captioned version from the paper. This picture is a department by department matrix. Each colored cell represents the average “vote” by a department in the row for a department in the column. Departments are sorted in the same order in the rows and columns, according to an algorithm that groups them by how similar their voting patterns are. The row and column numbers DO NOT correspond to the PGR rankings. (That is, Department 10 in the figure is not the 10th-ranked department.) Purple and blue cells represent high rankings; green cells represent middling rankings; brown and yellow cells represent low rankings. (The captioned version provides a reference scale.) The main diagonal is blank because departments are not allowed to vote for themselves. In a high-consensus field, we’d expect each column to be the same color all the way down: that is, everyone agrees on how good a particular department is. In a low-consensus field, we’d expect more heterogeneity, with disagreement on the quality of particular programs. The data suggest that — at least according to the respondents to the PGR — philosophy is a very high-consensus discipline.
To help the interpretation, we can further group departments into “blocks” based on their similarity: members of the same blocks will stand in similar structural relations to other departments. In this case, I’ve generated a model with 5 blocks. Blocks are set off by thicker lines that project out into the margin. Block 1 is made up of just the first four departments, so the first four rows and columns show Block 1’s assessment of itself, for example. The four Block 1 departments enjoy the highest prestige and the greatest degree of consensus about their quality. Looking down the first four columns lets you see what everyone else thinks of Block 1 — almost everyone agrees they’re the best, as you can see by the almost unbroken strip of purple and dark blue. Looking across the first four rows lets you see what Block 1 thinks of everyone else. Thus, focusing on the intersections of the graph created by the thicker horizontal lines lets you see how different blocks relate to one another (and themselves). For instance, the bottom right corner of the figure shows what the lowest-status block, Block 5, thinks of itself, so to speak. It turns out that it agrees with everyone else’s assessment of its relatively low quality. In fact, as I show in the paper, Block 5 thinks a little better of Block 1 than Block 1 thinks of itself, and thinks a little worse of itself than Block 1 thinks of it. In other words, the lowest-prestige block is slightly more committed to the hierarchy than the highest-prestige block. Although the mean scores awarded to blocks vary across blocks, there is complete agreement on the rank-ordering of blocks. So, for example, there’s no dissenting group that thinks itself better than everyone else believes.
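For readers curious about the mechanics, the block partition can be produced by clustering the rows of the vote matrix by similarity. The sketch below is illustrative only — it uses Ward hierarchical clustering, which is one common choice, not necessarily the exact algorithm used in the paper — and the matrix it operates on is hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def block_partition(votes, n_blocks):
    """Group departments into blocks by the similarity of their voting profiles.

    votes: (n x n) array where votes[i, j] is department i's mean rating
    of department j. The diagonal (self-votes) is undefined, so we replace
    it with each row's mean to make distances between rows well defined.
    """
    v = votes.astype(float).copy()
    np.fill_diagonal(v, np.nan)
    idx = np.arange(len(v))
    v[idx, idx] = np.nanmean(v, axis=1)
    # Euclidean distance between voting profiles, then Ward clustering,
    # cut into the requested number of blocks.
    z = linkage(pdist(v), method="ward")
    return fcluster(z, t=n_blocks, criterion="maxclust")
```

Running this on a matrix with two obvious camps — one group that rates itself highly and the other group low, and vice versa — recovers the two blocks, which is the structure the figure makes visible.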
Departmental Strength in Specialist Areas
This is a segment plot. Again, you can get a larger version of it, or a much nicer PDF version. For each department, the wedges of the plot represent the department’s reputation rank in a particular subfield. The bigger the wedge, the better the reputation rank. A department that was ranked first in all areas would look like the key at the bottom. To simplify the presentation I’ve grouped metaphysics, philosophy of mind and philosophy of language into a single group, “MML.” (This has a substantive justification, as strength in these areas is highly correlated.) The distribution of segments gives a nice picture of each department’s profile: what it’s known for. I’ve also ordered the subfields clockwise from the left, roughly in order of their contribution to the overall reputation of a department. You’ll notice that Princeton and Oxford are the only departments in the Top 15 or so to have a roughly symmetric “fan-like” structure, indicating strength in a wide variety of areas. By contrast, NYU is very strong in MML, Ethics and History, but not ancient or continental. Rutgers’ profile looks like a chambered nautilus: it’s very strong in MML and Epistemology, and gets progressively weaker as you move around the half-circle. Yet NYU and Rutgers outscore Oxford and Princeton in terms of overall reputation. This is because — as the paper shows — not all areas contribute equally to the status of a department. Strength in MML is more important than strength in, say, Ancient philosophy or (especially) Continental philosophy. The segment plot gives all the specialties the same weight, but in reality this isn’t the case — so Oxford doesn’t capitalize on its strength in Ancient philosophy, for example. Michigan is an unusual case in that it ranks very highly despite lacking a strong reputation for MML and Epistemology.
Conversely, strength in MML will only get you so far: MIT and the ANU excel in these areas, but probably won’t go any higher in the ratings without diversifying.
The plot shows some other features of departments and the field in general, too. Harvard’s relative weakness in metaphysics and mind is clear, for instance, as is Chicago’s strength in continental philosophy. As one moves down the rankings, the size of the wedges declines, of course, and departments with distinctive niches appear: the LSE is strong in the philosophy of science, Penn in modern history, Wisconsin in Science.
The data have a number of limitations, of course. For one thing, not all departments are present in the survey, and in most cases only one or two representatives of those departments were sampled. But it’s still a rich dataset. The draft paper has a fuller discussion of all this, together with a few other neat visualizations of the structure of the field. Comments are welcome, of course.
Update 2: Following a query from Tom Hurka, I discovered a small error in the segment plot. The original “History” measure mistakenly included Ancient history, which wasn’t my intention. That’s been fixed now, and the History measure is a department’s mean score in 17th Century, 18th Century and Kant/German Idealism.
There’s also a second question about interpreting the plots, especially if you’re looking closely at the profile of a particular department. The size of the wedges is not determined directly by the scores departments get in each area. First, the scores are scaled to have values between 0 and 1 so that the plot wedges can be drawn. This rescaling can affect how departments appear. Imagine a department that scores the same in two subfields. If one subfield has a wider range of scores than the other, a gap may open up in the department’s position when this scaling takes place. This happened in some cases in the original segment plot. The range of observed scores for ethics is wider than for modern history, for example. As a result, the relative position of some departments will differ between the two subfields, even though their raw scores might be the same in both. Originally I just left it at that, but to simplify things I’ve redone the segment diagram so that the size of the wedges is directly proportional to a department’s rank in that subfield. Bear in mind that ranks are calculated after scores are rounded to one decimal place, so ranks will often be tied.
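The two alternatives can be made concrete with a small sketch. The numbers below are made up for illustration — a department scoring 3.0 in a wide-range subfield and a narrow-range one — but they show how min-max rescaling opens a gap that ranking (after rounding to one decimal place, with ties) does not:

```python
import numpy as np
from scipy.stats import rankdata

def rescale01(scores):
    """Min-max rescale to [0, 1] -- how wedge sizes were set originally."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def rounded_rank(scores):
    """Rank departments after rounding scores to one decimal place,
    so near-identical scores tie (as in the revised plot)."""
    r = np.round(np.asarray(scores, dtype=float), 1)
    # method="min" gives tied departments the same rank; negate so
    # that the highest score gets rank 1.
    return rankdata(-r, method="min")

# Hypothetical scores: the same department scores 3.0 in both subfields.
ethics  = [4.8, 3.0, 1.5]   # wide range of observed scores
history = [3.5, 3.0, 2.5]   # narrow range
rescale01(ethics)[1]   # about 0.45 -- a smaller wedge
rescale01(history)[1]  # exactly 0.5 -- despite the same raw score
```

Under `rounded_rank`, by contrast, two departments scoring 3.24 and 3.21 both round to 3.2 and share a rank, which is why the revised plot shows frequent ties.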
Visually representing multivariate data is tricky, and this is an example of where a segment plot might be slightly confusing and the choice between using the raw means or the ranks isn’t entirely clear-cut. The ranks bring out the relative ordering of departments but don’t convey how, in some fields, a few departments may be head and shoulders above the rest. On the other hand, the ranks are easier to interpret than a “relative reputation” measure.
Your links http://http//www.kieranhealy.org/files/misc/segment-plot.png and http://www.kieranhealy.org/files/misc/segment-plot-sm.pdf are screwed up.
Whoops. Fixed now.
Hey Kieran—just some questions for clarification.
You write:
“Each colored cell represents the average “vote” by a department in the row for a department in the column.”
But Brian doesn’t have departments rate departments; instead, he asks individuals to rate departments. Of course, those individuals are all members of some department, so we could in principle say something like “the members of dept. Y who were asked by BL to rank dept. X gave it an average score of S.” But for the normal reader of the PGR, it is not possible to tell which individuals gave which department which rating.
So did Brian give you access to the raw individual rankings so that you could reassemble them into department/department data? Unless he gave you raw rankings either grouped by the rater’s name or at least the rater’s departmental affiliation, I don’t see how you could get where you got. (Though you sociologists may have cunning devices we mere philosophers know nothing of.)
Related question: suppose Brian tells me “there were 3 raters from NYU, and on average they gave Harvard a 4.2”. That’s interesting enough, and in a sense it would justify your saying “NYU gave Harvard a 4.2”. But one way in which this data would always fall short of really being a rating of departments by departments, is that some departments have more members who were asked to rate—indeed, unsurprisingly, the members of the higher-prestige departments are over-represented in the ratings board. (Perhaps justifiably so—the fairness question is not my issue right now). So when we ask what NYU said about Indiana, and compare it to what Indiana said about NYU, we may be talking about seven individuals’ opinions at NYU vs. one or two at Indiana.
Anyhow—what lies behind your talk of “a department’s vote for a department”?
And, yes—this is interesting stuff you are presenting. I just wish it had more gossip in it.
Could you clarify what you mean by Continental Philosophy?
(I’m subtracting continental types from the other categories, and I end up with phenomenology and, um, Sartre?)
I have rater-anonymized data (no names), but I know their departmental affiliations.
I aggregate individual votes up to the level of departments by taking the mean score of all the raters from the same department. This is necessary in order to make the blockmodel possible. So we go from a rater x department matrix to a department x department matrix. Of course, some departments have only 1 rater in them, so they basically “represent” the department. This is also why the blockmodel is presented anonymously, as obviously I don’t want it to be possible to work out the votes of any particular individual respondent.
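For concreteness, the aggregation step looks something like the following. This is a sketch with invented column names and scores, not the actual PGR data or code:

```python
import pandas as pd

# Hypothetical raw data: one row per (rater's department, rated department,
# score). In the real data there is one row per individual rater, with
# names removed but departmental affiliation retained.
raw = pd.DataFrame({
    "rater_dept": ["A", "A", "B", "B", "B", "C"],
    "rated_dept": ["B", "C", "A", "C", "A", "A"],
    "score":      [4.0, 3.5, 4.5, 3.0, 4.0, 5.0],
})

# Mean score by (rater's department, rated department), pivoted into a
# department-by-department matrix. Cells with no raters stay NaN, and
# the diagonal is absent because departments don't rate themselves.
dept_matrix = (raw.groupby(["rater_dept", "rated_dept"])["score"]
                  .mean()
                  .unstack())
```

So department B’s two raters of A (4.5 and 4.0) collapse into a single cell of 4.25, while department C’s lone rater speaks for C on its own — which is the one-rater problem discussed above.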
Clearly, it would be best to have all the philosophers from every department rate all the departments, but I had to work with what I had.
Thanks, Kieran—I figured Brian must have given you pretty much that.
Isn’t the variability in the # of raters per department going to affect the smoothness and stability of the gradients working across a row? I.e., if I follow a row across, I am looking at a department’s rating of its competitors—or rather, the aggregate rating, by the raters in that department, of their department’s competitors.
If there are a lot of raters in that department, I would expect individual idiosyncrasies in rating to be evened out, and that department’s rating of its competitors to look similar to the whole profession’s rating of them (unless departmental group-think?). If there is only one rater in a department, I would expect idiosyncrasies to bubble up, i.e. the row might look anomalous compared to the rest of the plot.
E.g. department 38 looks anomalously generous across the board; department 46 looks anomalously harsh. Is this merely one softie and one crank? It is striking that, working vertically downward, nobody likes department 46, and only a few other departments seem to like department 38. This in turn suggests that they are smaller or less prestigious, which in turn suggests that fewer of their members were asked by BL to contribute ratings, which in turn may explain their idiosyncrasy.
Just more hypotheses to test. Oh—and when do we get to the gossip?
OK, I was lying about the gossip.
Interesting about the British raters being ‘substantially more generous’; that runs counter to what I would’ve expected. But then, I suppose that’s one of the very useful things about relying on hard, statistical data rather than intuition!
Also a bit surprising to find Oxford quite so strong in Continental Philosophy and History. The department here certainly has an excellent reputation, and excellent people, across the board — and my own knowledge of it is purely second-hand — but I do have a number of friends who are grad students working on post-Kantian Continental philosophy, and they’ve expressed surprise at the high rankings the department generally gets.
Which raises the question of what ratings done from surveys of grad students, as opposed to faculty, might look like, and how they’d compare to the Gourmet Report.
Oxford’s an interesting case. One of the main things it has going for it is sheer size: it’s got twice as many full-time faculty as Rutgers, for instance, and more than three and a half times as many as Princeton or NYU. And that’s not counting people with part-time appointments.
Interesting stuff, but I am not sure why all this implies that philosophers are not “sensible individuals guided by common-sense and rational argument”?
am not sure why all this implies that philosophers are not “sensible individuals guided by common-sense and rational argument”
Um, that was just a joke.
Hypothesis: Because there’s consensus about prestige in philosophy, philosophy departments are acutely sensitive to their place in the international scholarly pecking-order. Therefore, they will tend to make hires and put resources into activities that will build their reputations among other professional philosophers. They know how to obtain higher status in their discipline; and if they improve their scholarly reputation, then they can expect to enhance their standing on their own campuses. Thus philosophers need not pay much attention to serving or interacting with their colleagues in other departments, let alone their students. They may shun interdisciplinary research projects, ignore the narrowness of their curricular offerings, or avoid public service in order to invest in their strongest areas of pure research.
These pressures should be less evident in disciplines that have less consensus about status. If it turns out that historians (say) hold diverse and incompatible ideas about what makes a good history department, then they cannot easily ascend the status ladder. Instead, it makes sense for them to curry favor in their own universities in order to improve their position.
If this hypothesis turns out to be true, it would give an edge to your interesting finding. (My own observations bear it out.)
I think your paper is very interesting. However, I’m afraid that when you conclude that there is very high consensus about prestige in philosophy, you are not adequately taking into account the fact that the Gourmet Report deliberately excludes certain kinds of graduate departments, and certain kinds of philosophers.
Your results would be quite different, I think, if you took “Continental” departments like Penn State, the New School or SUNY Binghamton into account. There are quite a few more or less continentally or historically oriented Catholic departments, including Catholic U, Marquette and Duquesne. Indeed, I could list quite a few departments that have been excluded from the data you consider. (SIU Carbondale, DePaul, SUNY Stony Brook, Villanova, Emory, Memphis, Boston College, etc. etc.) IOW, it’s not just a few fringey folks that aren’t being taken into account in your data—it’s a large block of philosophers and graduate philosophy departments. I have little doubt that if these departments were given equal representation among those doing the rankings, your data would get a lot more messy. I would guess that your claim that there is a very high degree of consensus about prestige in philosophy would be pretty completely undermined.
I only read through your paper quickly, so perhaps you address this point, and I missed it. If so, I apologize.
This is a response to Peter Levine’s comment. Prof. Levine’s comment about the lack of interdisciplinary work by philosophers simply doesn’t reflect my experience in the profession. Of course, philosophers aren’t “interdisciplinary”, in the sense in which “interdisciplinarity” refers to a single discipline consisting of post-colonial theory, queer theory, etc. But I think philosophers are extremely interdisciplinary in the genuine sense of the term “interdisciplinary”.
Instead of being interdisciplinary with the other humanities, philosophers tend to be interdisciplinary with science and social science departments. Think about your own department, at Maryland; Paul Pietroski is half in philosophy, half in linguistics. Michael Morreau is genuinely interdisciplinary (computer science, linguistics), as are Jeff Horty and Peter Carruthers (cognitive science). At Rutgers, many of the philosophers have multiple affiliations; I’m in the Cognitive Science department and have strong ties to linguistics, and I have colleagues with strong ties to physics and math. At Michigan, there was a working group in statistics that included members of Michigan’s excellent math department and philosophers. Obviously, people like Allan Gibbard are distinguished economists as well as philosophers. The conferences I attend are pretty evenly divided between philosophers and linguists, with a few computer scientists besides. Feminist philosophers tend to work closely with Women’s Studies Programs. And so on.
There are several reasons why the genuine interdisciplinarity of philosophy hasn’t been recognized. The first is almost Orwellian. The word “interdisciplinarity” has been co-opted by a group of theorists to refer to a single discipline, in a blatantly political attempt to subsume a number of previously distinct subjects (e.g. comparative literature, English literature, anthropology) under one umbrella. Secondly, philosophers tend to be interdisciplinary with the sciences, and there is much in the humanities that regards anything involved with the sciences as anti-intellectual.
So whatever the explanation of our confluence of opinion, it simply cannot be that we are not interdisciplinary. We are the most interdisciplinary of the humanities.
However, I’m afraid that when you conclude that there is very high consensus about prestige in philosophy, you are not adequately taking into account the fact that the Gourmet Report deliberately excludes certain kinds of graduate departments, and certain kinds of philosophers.
I do discuss the limits of the sample in the paper, and address some of the points you make. It’s not a random sample of philosophers or a full population of departments. The sample gets restricted at several stages: (1) Which departments are selected for rating; (2) Which philosophers are asked to participate; (3) Which philosophers elect to respond; (4) Which departments respondents choose to evaluate.
As I say in the paper, it’s not clear how much the consensus in the data reflects consensus in the field and how much is an artifact of the composition of the sample.
You’re right that a broader sample of respondents and a wider selection of departments would change the data, though my sense is that it wouldn’t get “messier”: instead, a dissenting block would probably emerge, rating itself highly and other departments low, and vice versa. Most likely, this block would mainly hire amongst its own members, too.
Incidentally, a couple of the departments you list (Stony Brook, Penn State) are in the data.
I second jason stanley’s comments, but it does raise an interesting question, which is how to operationalise interdisciplinarity. I would operationalise it as follows: someone does interdisciplinary work if they publish their work in journals which are firmly within different disciplines. This is evidence that while they may be firmly planted in one discipline they have developed the interest and skills needed to communicate across disciplinary boundaries. It is also true that on that criterion a majority of my own colleagues in the Philosophy department at Madison are interdisciplinary: in fact, although I haven’t counted, I’d guess it is a large majority. I suspect that is not true of the departments whose members make a big deal about the importance of interdisciplinary work, furthermore, and I know it would surprise them to know this about us.
However, as someone who frequently publishes in journals within another discipline, I would say that I feel (and embrace) two points of pressure. One is to keep a significant proportion of my research firmly within Philosophy. The second is to keep a very large proportion of the teaching I do within the department firmly within the discipline. I find the second pressure restrictive, but I endorse it as a view. I do not find the first pressure restrictive at all — I suspect that people who do not continue to work at least in part firmly within one discipline are not doing interdisciplinary work, but undisciplined work.
It’s not just that many of the departments “Patrick” mentions are included in the surveys, or have been included in the past. It also turns out that they score fairly poorly overall, though their strengths are reflected in the specialty rankings at various places. It’s also the case that many Continental philosophy folks are on the Advisory Board and fill out the surveys. Several of the leading departments in Continental philosophy (Riverside, Chicago, Syracuse, et al.) are represented in the PGR. It is, I’m afraid, the “fringe” that is missing.
A response to Harry: I don’t agree at all. There are many journals which aim to be interdisciplinary, and therefore do not fit your criterion. I have published in quite a few of them (e.g., Int. J. of Law & Psychiatry; Philosophical Psychology; Phenomenology & the Cognitive Sciences). By your criterion, my work isn’t interdisciplinary, because the journals I publish in are too interdisciplinary!
I withdraw my comment about interdisciplinary work in the face of Jason Stanley’s good arguments and examples. Besides, I agree with Harry Brighouse that “interdisciplinarity” is not necessarily or intrinsically good; it can simply reflect a lack of grounding in any particular discipline. However, I’ll stick to two claims: 1) philosophers are especially sensitive to status rankings, and 2) philosophy departments often expend their limited resources to improve their status in the profession without paying as much attention to the breadth and balance of their curriculum. For instance, it is good for a university if there are courses available on medieval and early-modern philosophy. But a middle-ranked philosophy department gets no points for offering these courses—unless history of philosophy happens to be an area of comparative advantage. Such a department may prefer to hire heavily in subfields of research where it already has strengths. I hypothesize that these calculations are somewhat different in disciplines where there is less consensus about how to rank departments.
Kieran, you are probably right about these other schools forming a dissenting block, and as such my claim that their inclusion would make the data “messier” was perhaps wrong. However, the existence of such a significant dissenting block is all I really need to establish my point that you are probably mistaken in your claim that there is a high degree of agreement about prestige in philosophy.
Incidentally, it should be noted that I’m neither criticizing the Gourmet Report, nor arguing that the opinions of the members of the dissenting block are correct (or, for that matter, that they’re incorrect). I am merely arguing that there is not a high degree of agreement about prestige in philosophy. Perhaps there should be, but there isn’t. Whether you choose to call the large number of philosophers who don’t quite see eye to eye with the Gourmet Report evaluators “fringe” is perhaps a matter of style. But this “fringe” element is undeniably large.
Perhaps I should also say that I don’t know quite what Professor Leiter means when he says many of the schools I mentioned are or have in the past been included in the surveys. Sending out the faculty list for a “fringe” department to raters drawn almost exclusively from analytic programs is sure to get the “fringe” departments very bad ratings. And I suppose you could say that by sending out that list, you’ve included the department in the survey. Further, you could invite one token faculty member from Penn State to act as a rater, and thus include Penn State in the Report.
When I say that schools like Stony Brook and Penn State—as well as the (almost a) dozen other schools I mentioned, and I could name more—aren’t really included, I mean that the “fringe” aren’t really included in any meaningful way. Even Professor Leiter seems to admit that. (And he has reasons for making the decisions he’s made. Again, this isn’t about attacking Leiter or the Report.) If there is a token rater from Penn State, my guess is that that rater publishes in respectable analytic journals. My guess is that the analytically trained continental folks—the folks at Syracuse and Riverside and so forth—think of that person as doing good work. And that’s why that person was invited to participate. (I say it’s a guess because as I write the Gourmet Report site is apparently down, so I can’t scan through the list of raters.) And so the inclusion of these philosophers who have been unfortunate enough to land in fringe departments doesn’t really constitute a counterexample to my claim that these fringe departments are being excluded.
But again, I’m not interested in picking a fight with Brian Leiter about the way he runs the Report. Again, my point is very minor: I think it’s a mistake to say that there’s a high degree of agreement among philosophers about what counts as prestigious. That’s all.
Actually, the PGR reflects another consensus: a consensus as to what counts as an approach to philosophy worth taking seriously. The PGR is worth taking seriously if you share that view (which is, roughly, that clarity and rigor are necessary conditions for good philosophy). People sharing that consensus also have a high degree of agreement as to who does (this kind of) philosophy well. The differences in approach are radical enough that it doesn’t make much sense to ask of the PGR that it serve both communities. BTW, the distinction here doesn’t quite map onto the distinction between analytic and continental philosophy. It is a difference in approach, not in subject matter - Leiter himself is a Nietzsche specialist.
I must confess to some uncertainty about how terms are being used. What does it mean to say the evaluators include “analytically trained Continental folks” at places like Syracuse? Kenneth Baynes, an evaluator at Syracuse, has a PhD from Boston Univ, where he was supervised by Thomas McCarthy. Frederick Beiser, an evaluator at Syracuse, has a DPhil from Oxford, where he was supervised by Charles Taylor. Anyone who knows Beiser’s important work knows that he is highly critical of what he calls analytical history of philosophy. These are just two examples, but they could be multiplied. I do think, though, it is useful not to let false generalizations about “who” the evaluators are stand unchallenged.
Neil might be slightly closer to the mark in saying that “clarity and rigor” are elements of the “approach” to philosophy that are valued…except for the fact that many philosophers who would be regarded as “analytic” (e.g., John McDowell, Christopher Peacocke) are not clear, and some others (e.g., Martha Nussbaum on one end, Kit Fine on the other) are on a spectrum of “rigor” that makes it rather hard to say what the requirement of “rigor” means.
Part of the difficulty in these discussions is that people continue to talk as though something called “analytic philosophy” exists, when it doesn’t. I’ve written about this on the blog, and in the introduction to The Future for Philosophy (OUP, 2004), for anyone who might be interested.
I’d second (third?) most of Jason Stanley’s remarks on interdisciplinarity, except that I think he draws his conclusions too narrowly (perhaps just generalizing from experience, which is fine). There are quite a few philosophers who interact with more “humanistic” fields quite fruitfully — many scholars of Ancient philosophy work and interact with classicists whose work can only be thought of as “humanistic”. (This certainly happens at Penn.) Paul Guyer (and others in the past) have had cross appointments with the graduate group in German language and literature, and work with professors and grad students from that program. Garry Hatfield is a member of a “visual studies” undergraduate program that brings together both cognitive science folks and members of the visual arts programs. There are many other examples, too tedious to list. This isn’t to take away from anything Professor Stanley says, merely to point out that there is also a pretty fair amount of interdisciplinarity with more “humanistic” fields as well.
As surprised as I am to hear myself say it, I am in some ways sympathetic to “patrick”’s point. I am as big a fan as anyone of the PGR, and I think that most of the judgment calls Brian makes in putting it together are pretty sensible.
However, from a purely sociological point of view, those judgments about who to include and who not to include are problematic. Why the UK, Australia, Canada and the US? Why not France, Germany, Puerto Rico, the Netherlands, Mexico, etc.? Why only have “research active” evaluators at major institutions? As Kieran notes, the survey is “endogenous” and basically excludes the opinions of the vast majority of even “analytic” philosophers. And even though the consensus would probably remain pretty high (indeed the PGR itself is bound to have a feedback effect in this regard), I suspect that you would start introducing a bit more messiness into the results if you included the opinions of us blue-collar philosophers.
It seems to me that, no matter how you slice it, you are looking at the opinions of a pretty small, pretty tightly connected group of folks. In the end, it is not really that surprising that you get high consensus, is it?
Marc,
Some years ago Leiter addressed some of the concerns you mention, particularly about programs outside the English-speaking world. He was addressing in particular why the report didn’t consider the “analytically oriented” programs in Norway, but I would think his answer would apply as much or more to programs in France, Germany, Mexico, etc. The main idea was that he and the other likely evaluators just didn’t have enough information on the work being done in those schools or areas to judge its quality, and bringing in evaluators from those areas would likely have the same problem in reverse. So any attempt to expand the Gourmet beyond the English-speaking world was likely to bring more confusion than good. That seems a perfectly reasonable explanation to me.
My hypothesis about consensus would be that PGR reviewers are heavily influenced by prior editions of the PGR, and what we really have is a central-source-of-information problem, which generates consensus through a lack of diverse authoritative opinions. If there were more, and more diverse, rankings of departments, then I think you would find less consensus in philosophy. This could be tested, roughly, by using subsamples of the reviewer population next year and presenting them with alternative rankings, perhaps one based on publication records, participation in major conferences, or something similar.
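The central-source conjecture above can be illustrated with a toy simulation (entirely hypothetical: the model, parameter values, and function name are mine, not anything from the PGR data). Each rater’s score for a department blends a shared published ranking with a private judgment; the more weight the shared source gets, the higher the measured inter-rater consensus, even though no rater’s private information has improved.

```python
import numpy as np

def simulate_consensus(n_raters=50, n_depts=40, anchor_weight=0.0,
                       noise=1.0, seed=0):
    """Toy model: each rater's score for a department blends a shared
    published ranking (the 'central source') with a private judgment.
    Returns the mean pairwise correlation between raters' score vectors,
    a crude measure of consensus."""
    rng = np.random.default_rng(seed)
    true_quality = rng.normal(size=n_depts)                  # latent quality
    published = true_quality + rng.normal(0, 0.5, n_depts)   # a prior ranking everyone has seen
    scores = np.empty((n_raters, n_depts))
    for i in range(n_raters):
        private = true_quality + rng.normal(0, noise, n_depts)
        scores[i] = anchor_weight * published + (1 - anchor_weight) * private
    corr = np.corrcoef(scores)              # rater-by-rater correlation matrix
    upper = np.triu_indices(n_raters, k=1)  # each rater pair once
    return corr[upper].mean()

# Heavier anchoring on the shared ranking yields higher apparent consensus.
print(simulate_consensus(anchor_weight=0.1),
      simulate_consensus(anchor_weight=0.8))
```

On this sketch, high measured consensus could partly reflect a feedback loop from prior editions rather than independent agreement; the subsample experiment proposed above would help distinguish the two.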
Jeremy has to be partially right. One of the enduring mysteries of the PGR is how certain departments survive off of past glory. Princeton, after suffering the losses of Lewis and Kripke, remains inexplicably ‘ranked’ ahead of Michigan and Pittsburgh, and Harvard’s ranking is far higher than the CVs of their faculty warrant. Jeremy’s hypothesis would explain this.
The amazing stability of rankings is one of the most common findings in studies of reputation in academia. Philosophy actually provides an interesting counterexample here, with the rapid rise of NYU to the top of the tree. I can’t think of another discipline where anything like that has happened in a 10-15 year period.
What seems to be happening in philosophy is that a department can rise dramatically in the rankings by making what are widely regarded as good hires, but departments in general never really decline more than one or two places in the rankings, no matter what changes occur in their faculty ranks.
I do wish people would be more cautious before making assertions of a factual nature. Harvard was the #1 department for most of the 20th-century in the US—by common consensus, and by prior National Research Council rankings through the 1960s. Now it hovers around 8th or 9th, a non-trivial change, esp. in the eyes of folks at Harvard! Berkeley was in the top 5-6 for much of the 1970s, 80s and 90s, but now has stabilized, it appears, just outside the top 10. There are many more examples of departments declining far more than a place or two in the rankings.
The idea that Princeton doesn’t deserve to be #3 without Kripke and Lewis, while apparently obvious to an anonymous poster on a blog site, was not shared by a couple of hundred evaluators asked to review a current faculty list. Could it be because the current faculty included Gilbert Harman, John Cooper, Daniel Garber, Mark Johnston, Michael Smith, Philip Pettit, and others? Perhaps.
As Kieran implies, the actual effect of the PGR has been to make possible dramatic shifts in traditional hierarchies that are unheard of in other disciplines.
In addition to the schools Leiter notes, other departments that have had significant drops in their rankings, related quite obviously to faculty moves, include Indiana, Arizona, U.C. San Diego, Northwestern, and Johns Hopkins. Some of these drops have been bigger than others, but all have been notable. As he says, it would be good to make sure one’s factual statements are true before posting them.
Does “Science” include the philosophies of the special sciences that are ranked in the PGR (physics, biology, math)? If so, how are they weighted? Is logic included in the science ranking, or the MML ranking, or not at all?
“The idea that Princeton doesn’t deserve to be #3 without Kripke and Lewis, while apparently obvious to an anonymous poster on a blog site, was not shared by a couple of hundred evaluators asked to review a current faculty list. Could it be because the current faculty included Gilbert Harman, John Cooper, Daniel Garber, Mark Johnston, Michael Smith, Philip Pettit, and others? Perhaps.”
And Michigan has Allan Gibbard, Peter Railton, Larry Sklar, Edwin Curley, Elizabeth Anderson, Stephen Darwall, Kendall Walton, Richmond Thomason, and others. And Pittsburgh has John Earman, John McDowell, Robert Brandom, Gordon Belot, and others. PN’s point didn’t seem to be that Princeton doesn’t deserve to be a top 5 department. It seemed to be rather that the CVs of the folks at Michigan, and arguably Pittsburgh, pretty clearly dominate the CVs of the regular faculty at Princeton, with Jeremy’s point explaining the difference.
PN also seems wrong that drops of 1 or 2 are all that happen to departments that lose faculty. But revise the number given by PN to drops of 5 or 6. Harvard is ranked 6th in the latest PGR, not 8th or 9th. If you looked at the publication records of their faculty, they wouldn’t stack up well against the publication records of the faculty at some departments ranked 8-12, below them.
WA’s law of faculty evaluation: faculty member A happens to be hired by department B during its glory years. After the faculty responsible for B’s glory years retire or pass away or leave, A’s prior association with those people is worth an additional 10-12 distinguished publications on a CV.
What really needs to be done is to supplant lists of faculty names with anonymous faculty CVs… People tend to assume that faculty members who are at (e.g.) Harvard have written numerous important papers or books. By asking people to evaluate CVs rather than names, one could perhaps get a better sense of which faculties have produced the most influential work.
But finally, isn’t it just plain silly to assume that philosophers are mutants who aren’t subject to the same social forces affecting all other disciplines? If not, they will still be affected by previous status hierarchies, no matter what method is used.
Logic isn’t included in the figure. “Science” is general philosophy of science, and doesn’t include the specialist science rankings.
“it’s pretty clear that the CVs of the folks at Michigan and perhaps arguably Pittsburgh seem clearly to dominate the CVs of the regular faculty at Princeton…” It was obviously not clear to a couple hundred evaluators, who presumably know as much about the CVs as do anonymous posters.
Philosophers are clearly mutants, though perhaps not with respect to the impact of “social forces” upon them. But I wasn’t disputing that, I was just correcting some factual errors. (Harvard, by the way, ranks 6th by one method, 9th by another; it ranked 8th in 2002, by the method by which it ranks 9th this year.)
If the data don’t include some of the more specialized rankings, is there a worry that the conclusions about which fields are considered more important than others (in the sense of being more highly correlated with the general ranking) are skewed? Was there a principled reason you excluded them? What other fields aren’t included? Is political philosophy included in the ethics numbers? What about feminist philosophy, Asian philosophy, etc.?
“The amazing stability of rankings is one of the most common findings in studies of reputation in academia. Philosophy actually provides an interesting counterexample here, with the rapid rise of NYU to the top of the tree. I can’t think of another discipline where anything like that has happened in a 10-15 year period.”
An interesting test case might be Afro-American studies at Harvard in the early 90s, when a program that had been pretty much moribund hired several big names all at once (IIRC). I don’t know what impact that had on its reputation, or even if Afro-American Studies is cohesive enough as a discipline for this to count.
Earlier in this thread Peter Levine suggested that the high consensus in philosophy would lead philosophers to ignore interdisciplinary work in favour of work that would directly impress people inside the discipline. Several others criticized this, perhaps persuasively. But I wonder whether something similar might not be true.
Could the high consensus in philosophy not lead departments to want to hire in, and individuals to want to work on, a comparatively few “hot” topics, maybe some interdisciplinary and some not, with the result that the kind of work hired for and done is less varied and original, at least with respect to topic, than it might otherwise be? (The split here is just hot/unhot, not disciplinary/interdisciplinary.)
In suggesting this, I make two assumptions. One is that to judge a department or individual highly one must judge that they are doing a) intellectually high-quality work on b) important subjects. So the consensus in overall judgements will rest in part on consensus about b), the important subjects. The other is that the consensus Kieran found in whole-department rankings will also be found in the specialty rankings, so there is consensus not only on what are the important specialties in philosophy, say MMS vs. continental, but also on what are the important subtopics in, say, metaphysics or ethics. But I’m not assuming that departments’ hiring and individuals’ research decisions are driven only by the desire for the prestige that working on the hot topics can bring. They may genuinely share the judgements of importance. But the effect may still be the same: less variety and less originality in the topics people work on and hire for than if there were less consensus on what is important.
This suggestion resonates with a couple of things for me. One is Richard Rorty’s anecdote, in a recent London Review of Books, about a department that decided not to hire in the history of philosophy because it was more important to have someone contributing to the literature on vagueness. This anecdote may have been embellished in the telling, either to or by Rorty, but it does have the ring of truth to me. And if many departments look to hire in areas as specific (and currently hot) as vagueness, mightn’t the result be the loss of variety and originality mentioned above?
I also recall Kieran mentioning, in a post before Christmas, a sociological generalization to the effect that major intellectual breakthroughs usually don’t occur in the top institutions but are made by people working outside them, in more marginal locations. But if the top philosophy departments become more similar in their conceptions of what topics are important, and their consensus spreads to other departments, won’t there be fewer marginal locations and less space for true innovations?
Finally (and this is both impressionistic and may just reflect middle-aged jaundice), I’m often struck by how many PhDs coming out of the top departments in my field of ethics work on familiar and even overdiscussed topics, such as internal vs. external reasons (yawn, yawn), without this seeming to stop them from getting hired by other top departments. Maybe this has always gone on; certainly when I was a graduate student in Oxford in the 1970s everybody was writing on some aspect of Davidson. But it does seem that at many departments there is a strong and quite narrow sense of what the important topics are, so graduate students are trained and encouraged to write on those topics rather than to try to identify new ones. And that again lessens the chances that varied and original work, the kind that uncovers genuinely new issues, will get done.
There are good things and bad things about the philosophical profession as it now operates. The high degree of consensus Kieran found in the PGR highlights some of the good ones, such as the existence of comparatively objective standards, but it also may highlight some that, for reasons like those Peter Levine suggested, aren’t so wonderful. May there not be some respects in which philosophy would be more lively if there were less consensus within it?
Matt, just want to clarify that I largely agree with the restrictions with respect to the PGR itself. The concern was with the use of those same restrictions to reach a conclusion about some property of “philosophers” (generic reading). I assume that all those excluded from the evaluation process are, in fact, philosophers. That is, continental philosophers are philosophers; non-research-active philosophers are philosophers; and so on. So, to put Kieran’s results transparently, we might say: “There is a great deal of consensus about quality among research-active, Anglo-Saxon, analytic philosophers at major institutions.” And here we are looking at the opinions of a few hundred extremely tightly interconnected folks: folks whose professional positions basically require that they pay attention more or less only to those within that self-same group.
One other point that worries me a bit about using the PGR data. Dave Chalmers asked on his blog about “important” works of the 20th C. in philosophy of mind (and there was a similar post on Certain Doubts in epistemology). In considering that question, it struck me that there are two readings of “important”: influence and quality. At least for me, the extensions of these two properties don’t overlap a great deal. And finally, I suspect that if I were on the PGR board I would rely more heavily on an influence-of-work (rather than quality-of-work) reading. If that is so, then Kieran’s results might be best construed as high consensus on “quality” of programs vis-à-vis the influence of their faculty (a much more objective, less contentious measure) than as consensus on “quality” of programs vis-à-vis the quality of their faculty. And it seems to me that that is not an insignificant difference.
I find it quite comical that Professor Leiter aims to detract from the merits of several posters’ claims simply based on their desire to remain anonymous. Not only is this a shameful logical fallacy, it is also irrefutable that his bias is that simple. I expect him to deny it, and I also expect reasonable people to recognize his denial for what it is. Leiter has made it quite public that he does not respect anonymous comments, even though those comments tend to be more forthright and also have sources that range from the admittedly childish to the dignified types who simply want to avoid his mudslinging ways.
I also find it fascinating that he disapproves of disagreements with the rankings because the rankings were based on the opinions of “a couple hundred evaluators”. Never mind that they were hand-picked by those who are like-minded with Professor Leiter, and that there are a couple hundred philosophers who have found Richard Heck’s letter worthy of their approval. Neither the number nor the identity of the philosophers who contribute does much to dissolve the challenges against them.
Lastly, there are two crucial questions that Professor Leiter has yet to answer satisfactorily: a) What is the utility of overall rankings? b) What is the difference between a good continental philosopher and a poor one? Overall rankings muddle the individual nature of picking a graduate program by applying a weighted scale particular to certain tastes. Leiter tends to mention that an important part of going to grad school is placement record (though that, itself, is a highly subjective claim) and that the overall rankings tend to reflect departments with strong placement records. But then couldn’t there just be rankings of placement records themselves, which would allow prospective students to rank the departments based on their own criteria? Why must we only be provided with the current methodology and its antecedents? David Velleman has pointed out the gossipy nature of the PGR in the past, and I see no clearer cause for the existence of overall rankings than gossip.
Regarding my second question, I have often seen Professor Leiter regarded as a fine Nietzsche scholar (and, in fact, I agree with that claim), but if I wished to become a philosopher in the style of Nietzsche, I think it is plain to see that I would not at all want to study with Professor Leiter. There are other philosophy departments (not to mention other disciplines) that would probably be more likely to prepare me for the style and approach involved in being a continental (to use the term boorishly) philosopher, not simply a scholar of continental philosophy. While I am aware that there are exceptions to this rule that Leiter will mention, there is much consensus in circles outside of his own that those are the exceptions that prove the rule. This is not to say that I feel that these writers are good philosophers in the style of Nietzsche, but that neither is Professor Leiter, even though he is still a pretty damn good philosopher.
Greg Daly
James Davila
Sherman Dorn
Michael Drout
Frog in a Well
Frogs and Ravens
Early Modern Notes
Evan Garcia
George Mason History bloggers
Ghost in the Machine
Rebecca Goetz
Invisible Adjunct (inactive)
Jason Kuznicki
Konrad Mitchell Lawson
Danny Loss
Liberty and Power
Danny Loss
Ether MacAllum Stewart
Pam Mack
Heather Mathews
James Meadway
Medieval Studies
H.D. Miller
Caleb McDaniel
Marc Mulholland
Received Ideas
Renaissance Weblog
Nathaniel Robinson
Jacob Remes (moribund?)
Christopher Sheil
Red Ted
Time Travelling Is Easy
Brian Ulrich
Shana Worthen
Computers/media/communication
Lauren Andreacchi (moribund)
Eric Behrens
Joseph Bosco
Danah Boyd
David Brake
Collin Brooke
Maximilian Dornseif (moribund)
Jeff Erickson
Ed Felten
Lance Fortnow
Louise Ferguson
Anne Galloway
Jason Gallo
Josh Greenberg
Alex Halavais
Sariel Har-Peled
Tracy Kennedy
Tim Lambert
Liz Lawley
Michael O'Foghlu
Jose Luis Orihuela (moribund)
Alex Pang
Sebastian Paquet
Fernando Pereira
Pink Bunny of Battle
Ranting Professors
Jay Rosen
Ken Rufo
Douglas Rushkoff
Vika Safrin
Rob Schaap (Blogorrhoea)
Frank Schaap
Robert A. Stewart
Suresh Venkatasubramanian
Ray Trygstad
Jill Walker
Phil Windley
Siva Vaidahyanathan
Anthropology
Kerim Friedman
Alex Golub
Martijn de Koning
Nicholas Packwood
Geography
Stentor Danielson
Benjamin Heumann
Scott Whitlock
Education
Edward Bilodeau
Jenny D.
Richard Kahn
Progressive Teachers
Kelvin Thompson (defunct?)
Mark Byron
Business administration
Michael Watkins (moribund)
Literature, language, culture
Mike Arnzen
Brandon Barr
Michael Berube
The Blogora
Colin Brayton
John Bruce
Miriam Burstein
Chris Cagle
Jean Chu
Hans Coppens
Tyler Curtain
Cultural Revolution
Terry Dean
Joseph Duemer
Flaschenpost
Kathleen Fitzpatrick
Jonathan Goodwin
Rachael Groner
Alison Hale
Household Opera
Dennis Jerz
Jason Jones
Miriam Jones
Matthew Kirschenbaum
Steven Krause
Lilliputian Lilith
Catherine Liu
John Lovas
Gerald Lucas
Making Contact
Barry Mauer
Erin O'Connor
Print Culture
Clancy Ratcliff
Matthias Rip
A.G. Rud
Amardeep Singh
Steve Shaviro
Thanks ... Zombie
Vera Tobin
Chuck Tryon
University Diaries
Classics
Michael Hendry
David Meadows
Religion
AKM Adam
Ryan Overbey
Telford Work (moribund)
Library Science
Norma Bruce
Music
Kyle Gann
ionarts
Tim Rutherford-Johnson
Greg Sandow
Scott Spiegelberg
Biology/Medicine
Pradeep Atluri
Bloviator
Anthony Cox
Susan Ferrari (moribund)
Amy Greenwood
La Di Da
John M. Lynch
Charles Murtaugh (moribund)
Paul Z. Myers
Respectful of Otters
Josh Rosenau
Universal Acid
Amity Wilczek (moribund)
Theodore Wong (moribund)
Physics/Applied Physics
Trish Amuntrud
Sean Carroll
Jacques Distler
Stephen Hsu
Irascible Professor
Andrew Jaffe
Michael Nielsen
Chad Orzel
String Coffee Table
Math/Statistics
Dead Parrots
Andrew Gelman
Christopher Genovese
Moment, Linger on
Jason Rosenhouse
Vlorbik
Peter Woit
Complex Systems
Petter Holme
Luis Rocha
Cosma Shalizi
Bill Tozier
Chemistry
"Keneth Miles"
Engineering
Zack Amjal
Chris Hall
University Administration
Frank Admissions (moribund?)
Architecture/Urban development
City Comforts (urban planning)
Unfolio
Panchromatica
Earth Sciences
Our Take
Who Knows?
Bitch Ph.D.
Just Tenured
Playing School
Professor Goose
This Academic Life
Other sources of information
Arts and Letters Daily
Boston Review
Imprints
Political Theory Daily Review
Science and Technology Daily Review